issue (dict)
pr (dict)
pr_details (dict)
{ "body": "It looks that the top level boosting based on the indices (_indices_boost_) requires that we specify the exact index name. We cannot substitue an alias pointing to a given index. This is very inconvenient since it defeats the purpose of having aliases to hide internal organization of the indices.\n", "comments": [ { "body": "+1 .. \n", "created_at": "2014-04-07T18:05:43Z" }, { "body": "I have the same problem\n", "created_at": "2014-09-18T09:38:40Z" }, { "body": "+1\n", "created_at": "2015-02-06T17:10:46Z" }, { "body": "+1, has this been addressed yet? i must be able to use the alias.\n", "created_at": "2015-08-18T17:39:11Z" }, { "body": "+1\n", "created_at": "2015-08-28T08:04:23Z" }, { "body": "Has this been addressed? Please don't tell me this has not been fixed. Just bumped into it in production.\n", "created_at": "2015-10-21T20:14:50Z" }, { "body": "+1 as we have just hit this surprising behavior in production as well.\n", "created_at": "2015-10-23T19:20:46Z" }, { "body": "Things to consider:\n- should wildcard expansion be supported as well? (probably yes)\n- what if an index belongs to more than one alias in indices_boost parameter? For example, `\"indices_boost\": {\"alias1\": 3,\"alias2\": 2}` and both aliases include `index1`. (sum boost? max? min?)\n", "created_at": "2015-11-12T07:24:49Z" }, { "body": "It seems the option to add indices_boost with the java client is not available. Is this true? I am using spring-data-elasticsearch and I cant seem to find a way to add indices_boots to my query. Thank you.\n", "created_at": "2015-11-26T03:11:35Z" }, { "body": "+1\n", "created_at": "2016-01-13T15:06:59Z" }, { "body": "I would like support for wildcard expansion as well.\n\n:+1: \n", "created_at": "2016-01-14T12:48:55Z" }, { "body": "+1\n", "created_at": "2016-04-26T11:45:32Z" }, { "body": "Any updates on these? @clintongormley ?\n", "created_at": "2016-05-03T15:01:56Z" }, { "body": "Calling all devs. @clintongormley @kimchy Is this for real?\n", "created_at": "2016-05-04T14:44:37Z" }, { "body": "+1\n", "created_at": "2016-10-19T15:55:47Z" }, { "body": "This change stalled because of the problem of figuring out what to do when an index is boosted more than once via an alias or wildcard. From https://github.com/elastic/elasticsearch/pull/8811#issuecomment-258675219 : \n\n@masaruh just reread this thread and i think the correct answer is here:\n\n> Make indices_boost take list of index name and boost pair. We may need to do this if we want to have full control. But I somewhat hesitate to do this because it's breaking change.\n\nThen the logic would be that we use the boost from the first time we see the index in the list, so eg:\n\n```\n[ \n { \"foo\" : 2 }, # alias foo points to bar & baz\n { \"bar\": 1.5 }, # this boost is ignored because we've already seen bar\n { \"*\": 1.2 } # bar and baz are ignored because already seen, but index xyz gets this boost\n]\n```\n\nThis could be implemented in a bwc way. In fact, the old syntax doesn't need to be removed. We could just add this new syntax as an expert way of controlling boosts.\n\nWhat do you think?\n", "created_at": "2016-11-06T11:37:57Z" }, { "body": "Ooh, true. that should work! Thanks @clintongormley.\nI'll see what I can do.\n", "created_at": "2016-11-07T05:00:47Z" } ], "number": 4756, "title": "indices_boost ignore aliasing" }
{ "body": "This change allows specifying alias/wildcard expression in indices_boost.\r\nAnd added another format for specifying indices_boost. It accepts array of index name and boost pair.\r\nIf an index is included in multiple aliases/wildcard expressions, the first match will be used.\r\n\r\nCloses #4756", "number": 21393, "review_comments": [ { "body": "can you save ib.v1() and ib.v2() to a variable so it is more readable what they hold?", "created_at": "2016-11-29T09:35:10Z" }, { "body": "why hardcoding lenient indices options? maybe use strict? or even better, why not applying the options from the search request so we are consistent with how indices are resolved in the url?", "created_at": "2016-11-29T09:36:20Z" }, { "body": "very small change, it's a getter that is probably not used (I assume most people just set the index boost), but it is a breaking change for the java api. Not a huge deal though, shall we add the breaking-java label to this PR?", "created_at": "2016-11-29T09:56:33Z" }, { "body": "I find it odd that we modify the original source builder with the resolved indices names and replace what we originally had. Would it be possible to transport the resolved indices differently? I think we should serialize this new info separately and carefully handle bw compatibility around that. ", "created_at": "2016-11-29T10:00:24Z" }, { "body": "see comment above, ideally this setter wouldn't be needed, as it's here only because we need it internally to replace the provided indices with their resolved names.", "created_at": "2016-11-29T10:01:14Z" }, { "body": "shall we also deprecate this old syntax in 5.x given that it doesn't support ordering, in favour of the new array syntax added below?", "created_at": "2016-11-29T10:04:38Z" }, { "body": "can we save here v1 and v2 to a variable for readability?", "created_at": "2016-11-29T10:05:52Z" }, { "body": "do we want to check for null values here just to make sure so we fail fast?", "created_at": "2016-11-29T10:06:51Z" }, { "body": "I added a comment above, shall we rather check for null values when they get set, so we are sure this will never happen and we don't need this assertion anymore?", "created_at": "2016-11-29T10:07:32Z" }, { "body": "thanks for adding these tests! do we want to add a few more with all the exception cases that you handled in the parser?", "created_at": "2016-11-29T10:08:32Z" }, { "body": "why do we set the search_type here and above?", "created_at": "2016-11-29T10:09:19Z" }, { "body": "you can add the aliases as part of the index creation, that may make the test a bit faster ;)", "created_at": "2016-11-29T10:09:46Z" }, { "body": "can we remove these two log lines?", "created_at": "2016-11-29T10:10:12Z" }, { "body": "do we need explain?", "created_at": "2016-11-29T10:10:21Z" }, { "body": "maybe call the concrete indices \"index1\" and \"index2\", otherwise one may think they are aliases :)", "created_at": "2016-11-29T10:10:59Z" }, { "body": "I think we can remove this", "created_at": "2016-11-29T10:11:15Z" }, { "body": "I see why you used lenient indices options, so that nothing breaks if non existing indices are specified. But maybe that should be configurable like I commented above.", "created_at": "2016-11-29T10:12:27Z" }, { "body": "given that you added yaml tests, I wonder if these integ tests are still needed. Maybe we can convert them all to yaml tests?", "created_at": "2016-11-29T10:14:12Z" }, { "body": "Right.\r\nI didn't like tuple since it's not very readable as you pointed out. 
So, I decided to create a new class.", "created_at": "2016-12-01T12:53:06Z" }, { "body": "Done.\r\nBut I'm wondering if it makes this change breaking. Specifying index that doesn't exist used to be no op. Now returns error.", "created_at": "2016-12-01T12:55:11Z" }, { "body": "Done.", "created_at": "2016-12-01T12:55:28Z" }, { "body": "Done.", "created_at": "2016-12-01T12:56:04Z" }, { "body": "This was removed.", "created_at": "2016-12-01T12:56:19Z" }, { "body": "Done.", "created_at": "2016-12-01T12:56:27Z" }, { "body": "Done.", "created_at": "2016-12-01T12:56:41Z" }, { "body": "Done.", "created_at": "2016-12-01T12:56:47Z" }, { "body": "Tuple isn't used any more.", "created_at": "2016-12-01T12:57:09Z" }, { "body": "Done.", "created_at": "2016-12-01T12:57:18Z" }, { "body": "This test was removed in favor of yaml tests.", "created_at": "2016-12-01T12:57:52Z" }, { "body": "This test was removed in favor of yaml tests.\r\n", "created_at": "2016-12-01T12:58:01Z" } ], "title": "Resolve index names in indices_boost" }
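The array form this PR introduces looks like the following; the snippet is lifted from the documentation change included in the diff below, so `alias1` and `index*` come from that example. Entries are evaluated in order, and when an index is matched by more than one entry (directly, via an alias, or via a wildcard), the first match wins:

```json
GET /_search
{
  "indices_boost" : [
    { "alias1" : 1.4 },
    { "index*" : 1.3 }
  ]
}
```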
{ "commits": [ { "message": "Resolve index names in indices_boost\n\nThis change allows specifying alias/wildcard expression in indices_boost.\nAnd added another format for specifying indices_boost. It accepts array of index name and boost pair.\nIf an index is included in multiple aliases/wildcard expressions, the first match will be used.\nWith new format, old format is marked as deprecated.\n\nCloses #4756" } ], "files": [ { "diff": "@@ -50,6 +50,7 @@\n \n \n abstract class AbstractSearchAsyncAction<FirstResult extends SearchPhaseResult> extends AbstractAsyncAction {\n+ private static final float DEFAULT_INDEX_BOOST = 1.0f;\n \n protected final Logger logger;\n protected final SearchTransportService searchTransportService;\n@@ -66,16 +67,17 @@ abstract class AbstractSearchAsyncAction<FirstResult extends SearchPhaseResult>\n private final AtomicInteger totalOps = new AtomicInteger();\n protected final AtomicArray<FirstResult> firstResults;\n private final Map<String, AliasFilter> aliasFilter;\n+ private final Map<String, Float> concreteIndexBoosts;\n private final long clusterStateVersion;\n private volatile AtomicArray<ShardSearchFailure> shardFailures;\n private final Object shardFailuresMutex = new Object();\n protected volatile ScoreDoc[] sortedShardDocs;\n \n protected AbstractSearchAsyncAction(Logger logger, SearchTransportService searchTransportService,\n Function<String, DiscoveryNode> nodeIdToDiscoveryNode,\n- Map<String, AliasFilter> aliasFilter, Executor executor, SearchRequest request,\n- ActionListener<SearchResponse> listener, GroupShardsIterator shardsIts, long startTime,\n- long clusterStateVersion, SearchTask task) {\n+ Map<String, AliasFilter> aliasFilter, Map<String, Float> concreteIndexBoosts,\n+ Executor executor, SearchRequest request, ActionListener<SearchResponse> listener,\n+ GroupShardsIterator shardsIts, long startTime, long clusterStateVersion, SearchTask task) {\n super(startTime);\n this.logger = logger;\n this.searchTransportService = searchTransportService;\n@@ -91,6 +93,7 @@ protected AbstractSearchAsyncAction(Logger logger, SearchTransportService search\n expectedTotalOps = shardsIts.totalSizeWith1ForEmpty();\n firstResults = new AtomicArray<>(shardsIts.size());\n this.aliasFilter = aliasFilter;\n+ this.concreteIndexBoosts = concreteIndexBoosts;\n }\n \n public void start() {\n@@ -125,8 +128,10 @@ void performFirstPhase(final int shardIndex, final ShardIterator shardIt, final\n } else {\n AliasFilter filter = this.aliasFilter.get(shard.index().getUUID());\n assert filter != null;\n+\n+ float indexBoost = concreteIndexBoosts.getOrDefault(shard.index().getUUID(), DEFAULT_INDEX_BOOST);\n ShardSearchTransportRequest transportRequest = new ShardSearchTransportRequest(request, shardIt.shardId(), shardsIts.size(),\n- filter, startTime());\n+ filter, indexBoost, startTime());\n sendExecuteFirstPhase(node, transportRequest , new ActionListener<FirstResult>() {\n @Override\n public void onResponse(FirstResult result) {", "filename": "core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java", "status": "modified" }, { "diff": "@@ -47,10 +47,11 @@ class SearchDfsQueryAndFetchAsyncAction extends AbstractSearchAsyncAction<DfsSea\n private final SearchPhaseController searchPhaseController;\n SearchDfsQueryAndFetchAsyncAction(Logger logger, SearchTransportService searchTransportService,\n Function<String, DiscoveryNode> nodeIdToDiscoveryNode,\n- Map<String, AliasFilter> aliasFilter, SearchPhaseController searchPhaseController,\n- Executor 
executor, SearchRequest request, ActionListener<SearchResponse> listener,\n- GroupShardsIterator shardsIts, long startTime, long clusterStateVersion, SearchTask task) {\n- super(logger, searchTransportService, nodeIdToDiscoveryNode, aliasFilter, executor,\n+ Map<String, AliasFilter> aliasFilter, Map<String, Float> concreteIndexBoosts,\n+ SearchPhaseController searchPhaseController, Executor executor, SearchRequest request,\n+ ActionListener<SearchResponse> listener, GroupShardsIterator shardsIts,\n+ long startTime, long clusterStateVersion, SearchTask task) {\n+ super(logger, searchTransportService, nodeIdToDiscoveryNode, aliasFilter, concreteIndexBoosts, executor,\n request, listener, shardsIts, startTime, clusterStateVersion, task);\n this.searchPhaseController = searchPhaseController;\n queryFetchResults = new AtomicArray<>(firstResults.length());", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryAndFetchAsyncAction.java", "status": "modified" }, { "diff": "@@ -55,11 +55,11 @@ class SearchDfsQueryThenFetchAsyncAction extends AbstractSearchAsyncAction<DfsSe\n \n SearchDfsQueryThenFetchAsyncAction(Logger logger, SearchTransportService searchTransportService,\n Function<String, DiscoveryNode> nodeIdToDiscoveryNode,\n- Map<String, AliasFilter> aliasFilter, SearchPhaseController searchPhaseController,\n- Executor executor, SearchRequest request, ActionListener<SearchResponse> listener,\n- GroupShardsIterator shardsIts, long startTime, long clusterStateVersion,\n- SearchTask task) {\n- super(logger, searchTransportService, nodeIdToDiscoveryNode, aliasFilter, executor,\n+ Map<String, AliasFilter> aliasFilter, Map<String, Float> concreteIndexBoosts,\n+ SearchPhaseController searchPhaseController, Executor executor, SearchRequest request,\n+ ActionListener<SearchResponse> listener, GroupShardsIterator shardsIts, long startTime,\n+ long clusterStateVersion, SearchTask task) {\n+ super(logger, searchTransportService, nodeIdToDiscoveryNode, aliasFilter, concreteIndexBoosts, executor,\n request, listener, shardsIts, startTime, clusterStateVersion, task);\n this.searchPhaseController = searchPhaseController;\n queryResults = new AtomicArray<>(firstResults.length());", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java", "status": "modified" }, { "diff": "@@ -40,12 +40,12 @@ class SearchQueryAndFetchAsyncAction extends AbstractSearchAsyncAction<QueryFetc\n \n SearchQueryAndFetchAsyncAction(Logger logger, SearchTransportService searchTransportService,\n Function<String, DiscoveryNode> nodeIdToDiscoveryNode,\n- Map<String, AliasFilter> aliasFilter,\n+ Map<String, AliasFilter> aliasFilter, Map<String, Float> concreteIndexBoosts,\n SearchPhaseController searchPhaseController, Executor executor,\n SearchRequest request, ActionListener<SearchResponse> listener,\n GroupShardsIterator shardsIts, long startTime, long clusterStateVersion,\n SearchTask task) {\n- super(logger, searchTransportService, nodeIdToDiscoveryNode, aliasFilter, executor,\n+ super(logger, searchTransportService, nodeIdToDiscoveryNode, aliasFilter, concreteIndexBoosts, executor,\n request, listener, shardsIts, startTime, clusterStateVersion, task);\n this.searchPhaseController = searchPhaseController;\n ", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchQueryAndFetchAsyncAction.java", "status": "modified" }, { "diff": "@@ -50,13 +50,13 @@ class SearchQueryThenFetchAsyncAction extends AbstractSearchAsyncAction<QuerySea\n private final 
SearchPhaseController searchPhaseController;\n \n SearchQueryThenFetchAsyncAction(Logger logger, SearchTransportService searchTransportService,\n- Function<String, DiscoveryNode> nodeIdToDiscoveryNode, Map<String,\n- AliasFilter> aliasFilter,\n+ Function<String, DiscoveryNode> nodeIdToDiscoveryNode,\n+ Map<String, AliasFilter> aliasFilter, Map<String, Float> concreteIndexBoosts,\n SearchPhaseController searchPhaseController, Executor executor,\n SearchRequest request, ActionListener<SearchResponse> listener,\n GroupShardsIterator shardsIts, long startTime, long clusterStateVersion,\n SearchTask task) {\n- super(logger, searchTransportService, nodeIdToDiscoveryNode, aliasFilter, executor, request, listener,\n+ super(logger, searchTransportService, nodeIdToDiscoveryNode, aliasFilter, concreteIndexBoosts, executor, request, listener,\n shardsIts, startTime, clusterStateVersion, task);\n this.searchPhaseController = searchPhaseController;\n fetchResults = new AtomicArray<>(firstResults.length());", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java", "status": "modified" }, { "diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.search.SearchService;\n+import org.elasticsearch.search.builder.SearchSourceBuilder;\n import org.elasticsearch.search.internal.AliasFilter;\n import org.elasticsearch.tasks.Task;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -84,6 +85,29 @@ private Map<String, AliasFilter> buildPerIndexAliasFilter(SearchRequest request,\n return aliasFilterMap;\n }\n \n+ private Map<String, Float> resolveIndexBoosts(SearchRequest searchRequest, ClusterState clusterState) {\n+ if (searchRequest.source() == null) {\n+ return Collections.emptyMap();\n+ }\n+\n+ SearchSourceBuilder source = searchRequest.source();\n+ if (source.indexBoosts() == null) {\n+ return Collections.emptyMap();\n+ }\n+\n+ Map<String, Float> concreteIndexBoosts = new HashMap<>();\n+ for (SearchSourceBuilder.IndexBoost ib : source.indexBoosts()) {\n+ Index[] concreteIndices =\n+ indexNameExpressionResolver.concreteIndices(clusterState, searchRequest.indicesOptions(), ib.getIndex());\n+\n+ for (Index concreteIndex : concreteIndices) {\n+ concreteIndexBoosts.putIfAbsent(concreteIndex.getUUID(), ib.getBoost());\n+ }\n+ }\n+\n+ return Collections.unmodifiableMap(concreteIndexBoosts);\n+ }\n+\n @Override\n protected void doExecute(Task task, SearchRequest searchRequest, ActionListener<SearchResponse> listener) {\n // pure paranoia if time goes backwards we are at least positive\n@@ -107,6 +131,8 @@ protected void doExecute(Task task, SearchRequest searchRequest, ActionListener<\n searchRequest.preference());\n failIfOverShardCountLimit(clusterService, shardIterators.size());\n \n+ Map<String, Float> concreteIndexBoosts = resolveIndexBoosts(searchRequest, clusterState);\n+\n // optimize search type for cases where there is only one shard group to search on\n if (shardIterators.size() == 1) {\n // if we only have one group, then we always want Q_A_F, no need for DFS, and no need to do THEN since we hit one shard\n@@ -125,7 +151,7 @@ protected void doExecute(Task task, SearchRequest searchRequest, ActionListener<\n }\n \n searchAsyncAction((SearchTask)task, searchRequest, shardIterators, startTimeInMillis, clusterState,\n- Collections.unmodifiableMap(aliasFilter), listener).start();\n+ Collections.unmodifiableMap(aliasFilter), concreteIndexBoosts, listener).start();\n }\n \n 
@Override\n@@ -135,6 +161,7 @@ protected final void doExecute(SearchRequest searchRequest, ActionListener<Searc\n \n private AbstractSearchAsyncAction searchAsyncAction(SearchTask task, SearchRequest searchRequest, GroupShardsIterator shardIterators,\n long startTime, ClusterState state, Map<String, AliasFilter> aliasFilter,\n+ Map<String, Float> concreteIndexBoosts,\n ActionListener<SearchResponse> listener) {\n final Function<String, DiscoveryNode> nodesLookup = state.nodes()::get;\n final long clusterStateVersion = state.version();\n@@ -143,22 +170,22 @@ private AbstractSearchAsyncAction searchAsyncAction(SearchTask task, SearchReque\n switch(searchRequest.searchType()) {\n case DFS_QUERY_THEN_FETCH:\n searchAsyncAction = new SearchDfsQueryThenFetchAsyncAction(logger, searchTransportService, nodesLookup,\n- aliasFilter, searchPhaseController, executor, searchRequest, listener, shardIterators, startTime,\n+ aliasFilter, concreteIndexBoosts, searchPhaseController, executor, searchRequest, listener, shardIterators, startTime,\n clusterStateVersion, task);\n break;\n case QUERY_THEN_FETCH:\n searchAsyncAction = new SearchQueryThenFetchAsyncAction(logger, searchTransportService, nodesLookup,\n- aliasFilter, searchPhaseController, executor, searchRequest, listener, shardIterators, startTime,\n+ aliasFilter, concreteIndexBoosts, searchPhaseController, executor, searchRequest, listener, shardIterators, startTime,\n clusterStateVersion, task);\n break;\n case DFS_QUERY_AND_FETCH:\n searchAsyncAction = new SearchDfsQueryAndFetchAsyncAction(logger, searchTransportService, nodesLookup,\n- aliasFilter, searchPhaseController, executor, searchRequest, listener, shardIterators, startTime,\n+ aliasFilter, concreteIndexBoosts, searchPhaseController, executor, searchRequest, listener, shardIterators, startTime,\n clusterStateVersion, task);\n break;\n case QUERY_AND_FETCH:\n searchAsyncAction = new SearchQueryAndFetchAsyncAction(logger, searchTransportService, nodesLookup,\n- aliasFilter, searchPhaseController, executor, searchRequest, listener, shardIterators, startTime,\n+ aliasFilter, concreteIndexBoosts, searchPhaseController, executor, searchRequest, listener, shardIterators, startTime,\n clusterStateVersion, task);\n break;\n default:\n@@ -177,5 +204,4 @@ private void failIfOverShardCountLimit(ClusterService clusterService, int shardC\n + \"] to a greater value if you really want to query that many shards at the same time.\");\n }\n }\n-\n }", "filename": "core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java", "status": "modified" }, { "diff": "@@ -96,7 +96,7 @@ final class DefaultSearchContext extends SearchContext {\n private final DfsSearchResult dfsResult;\n private final QuerySearchResult queryResult;\n private final FetchSearchResult fetchResult;\n- private float queryBoost = 1.0f;\n+ private final float queryBoost;\n private TimeValue timeout;\n // terminate after count\n private int terminateAfter = DEFAULT_TERMINATE_AFTER;\n@@ -173,6 +173,7 @@ final class DefaultSearchContext extends SearchContext {\n this.timeout = timeout;\n queryShardContext = indexService.newQueryShardContext(request.shardId().id(), searcher.getIndexReader(), request::nowInMillis);\n queryShardContext.setTypes(request.types());\n+ queryBoost = request.indexBoost();\n }\n \n @Override\n@@ -352,12 +353,6 @@ public float queryBoost() {\n return queryBoost;\n }\n \n- @Override\n- public SearchContext queryBoost(float queryBoost) {\n- this.queryBoost = queryBoost;\n- return this;\n- }\n-\n 
@Override\n public long getOriginNanoTime() {\n return originNanoTime;", "filename": "core/src/main/java/org/elasticsearch/search/DefaultSearchContext.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.search;\n \n-import com.carrotsearch.hppc.ObjectFloatHashMap;\n import org.apache.lucene.search.FieldDoc;\n import org.apache.lucene.search.TopDocs;\n import org.apache.lucene.util.IOUtils;\n@@ -679,13 +678,6 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc\n QueryShardContext queryShardContext = context.getQueryShardContext();\n context.from(source.from());\n context.size(source.size());\n- ObjectFloatHashMap<String> indexBoostMap = source.indexBoost();\n- if (indexBoostMap != null) {\n- Float indexBoost = indexBoostMap.get(context.shardTarget().index());\n- if (indexBoost != null) {\n- context.queryBoost(indexBoost);\n- }\n- }\n Map<String, InnerHitBuilder> innerHitBuilders = new HashMap<>();\n if (source.query() != null) {\n InnerHitBuilder.extractInnerHits(source.query(), innerHitBuilders);", "filename": "core/src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -19,16 +19,16 @@\n \n package org.elasticsearch.search.builder;\n \n-import com.carrotsearch.hppc.ObjectFloatHashMap;\n import org.elasticsearch.action.support.ToXContentToBytes;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.Strings;\n-import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Writeable;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -63,10 +63,6 @@\n import java.util.Collections;\n import java.util.List;\n import java.util.Objects;\n-import java.util.stream.Collectors;\n-import java.util.stream.StreamSupport;\n-\n-import static org.elasticsearch.common.collect.Tuple.tuple;\n \n /**\n * A search source builder allowing to easily build search source. 
Simple\n@@ -76,6 +72,8 @@\n * @see org.elasticsearch.action.search.SearchRequest#source(SearchSourceBuilder)\n */\n public final class SearchSourceBuilder extends ToXContentToBytes implements Writeable {\n+ private static final DeprecationLogger DEPRECATION_LOGGER =\n+ new DeprecationLogger(Loggers.getLogger(SearchSourceBuilder.class));\n \n public static final ParseField FROM_FIELD = new ParseField(\"from\");\n public static final ParseField SIZE_FIELD = new ParseField(\"size\");\n@@ -167,7 +165,7 @@ public static HighlightBuilder highlight() {\n \n private List<RescoreBuilder> rescoreBuilders;\n \n- private ObjectFloatHashMap<String> indexBoost = null;\n+ private List<IndexBoost> indexBoosts = new ArrayList<>();\n \n private List<String> stats;\n \n@@ -193,13 +191,7 @@ public SearchSourceBuilder(StreamInput in) throws IOException {\n storedFieldsContext = in.readOptionalWriteable(StoredFieldsContext::new);\n from = in.readVInt();\n highlightBuilder = in.readOptionalWriteable(HighlightBuilder::new);\n- int indexBoostSize = in.readVInt();\n- if (indexBoostSize > 0) {\n- indexBoost = new ObjectFloatHashMap<>(indexBoostSize);\n- for (int i = 0; i < indexBoostSize; i++) {\n- indexBoost.put(in.readString(), in.readFloat());\n- }\n- }\n+ indexBoosts = in.readList(IndexBoost::new);\n minScore = in.readOptionalFloat();\n postQueryBuilder = in.readOptionalNamedWriteable(QueryBuilder.class);\n queryBuilder = in.readOptionalNamedWriteable(QueryBuilder.class);\n@@ -240,11 +232,7 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeOptionalWriteable(storedFieldsContext);\n out.writeVInt(from);\n out.writeOptionalWriteable(highlightBuilder);\n- int indexBoostSize = indexBoost == null ? 0 : indexBoost.size();\n- out.writeVInt(indexBoostSize);\n- if (indexBoostSize > 0) {\n- writeIndexBoost(out);\n- }\n+ out.writeList(indexBoosts);\n out.writeOptionalFloat(minScore);\n out.writeOptionalNamedWriteable(postQueryBuilder);\n out.writeOptionalNamedWriteable(queryBuilder);\n@@ -283,17 +271,6 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeOptionalWriteable(sliceBuilder);\n }\n \n- private void writeIndexBoost(StreamOutput out) throws IOException {\n- List<Tuple<String, Float>> ibs = StreamSupport\n- .stream(indexBoost.spliterator(), false)\n- .map(i -> tuple(i.key, i.value)).sorted((o1, o2) -> o1.v1().compareTo(o2.v1()))\n- .collect(Collectors.toList());\n- for (Tuple<String, Float> ib : ibs) {\n- out.writeString(ib.v1());\n- out.writeFloat(ib.v2());\n- }\n- }\n-\n /**\n * Sets the search query for this request.\n *\n@@ -816,28 +793,26 @@ public List<ScriptField> scriptFields() {\n }\n \n /**\n- * Sets the boost a specific index will receive when the query is executed\n+ * Sets the boost a specific index or alias will receive when the query is executed\n * against it.\n *\n * @param index\n- * The index to apply the boost against\n+ * The index or alias to apply the boost against\n * @param indexBoost\n * The boost to apply to the index\n */\n public SearchSourceBuilder indexBoost(String index, float indexBoost) {\n- if (this.indexBoost == null) {\n- this.indexBoost = new ObjectFloatHashMap<>();\n- }\n- this.indexBoost.put(index, indexBoost);\n+ Objects.requireNonNull(index, \"index must not be null\");\n+ this.indexBoosts.add(new IndexBoost(index, indexBoost));\n return this;\n }\n \n /**\n- * Gets the boost a specific indices will receive when the query is\n+ * Gets the boost a specific indices or aliases will receive when the query is\n * executed against them.\n 
*/\n- public ObjectFloatHashMap<String> indexBoost() {\n- return indexBoost;\n+ public List<IndexBoost> indexBoosts() {\n+ return indexBoosts;\n }\n \n /**\n@@ -916,7 +891,7 @@ private SearchSourceBuilder shallowCopy(QueryBuilder queryBuilder, QueryBuilder\n rewrittenBuilder.storedFieldsContext = storedFieldsContext;\n rewrittenBuilder.from = from;\n rewrittenBuilder.highlightBuilder = highlightBuilder;\n- rewrittenBuilder.indexBoost = indexBoost;\n+ rewrittenBuilder.indexBoosts = indexBoosts;\n rewrittenBuilder.minScore = minScore;\n rewrittenBuilder.postQueryBuilder = postQueryBuilder;\n rewrittenBuilder.profile = profile;\n@@ -1002,15 +977,16 @@ public void parseXContent(QueryParseContext context, AggregatorParsers aggParser\n scriptFields.add(new ScriptField(context));\n }\n } else if (context.getParseFieldMatcher().match(currentFieldName, INDICES_BOOST_FIELD)) {\n- indexBoost = new ObjectFloatHashMap<>();\n+ DEPRECATION_LOGGER.deprecated(\n+ \"Object format in indices_boost is deprecated, please use array format instead\");\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n } else if (token.isValue()) {\n- indexBoost.put(currentFieldName, parser.floatValue());\n+ indexBoosts.add(new IndexBoost(currentFieldName, parser.floatValue()));\n } else {\n throw new ParsingException(parser.getTokenLocation(), \"Unknown key for a \" + token +\n- \" in [\" + currentFieldName + \"].\", parser.getTokenLocation());\n+ \" in [\" + currentFieldName + \"].\", parser.getTokenLocation());\n }\n }\n } else if (context.getParseFieldMatcher().match(currentFieldName, AGGREGATIONS_FIELD)\n@@ -1059,9 +1035,13 @@ public void parseXContent(QueryParseContext context, AggregatorParsers aggParser\n docValueFields.add(parser.text());\n } else {\n throw new ParsingException(parser.getTokenLocation(), \"Expected [\" + XContentParser.Token.VALUE_STRING +\n- \"] in [\" + currentFieldName + \"] but found [\" + token + \"]\", parser.getTokenLocation());\n+ \"] in [\" + currentFieldName + \"] but found [\" + token + \"]\", parser.getTokenLocation());\n }\n }\n+ } else if (context.getParseFieldMatcher().match(currentFieldName, INDICES_BOOST_FIELD)) {\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n+ indexBoosts.add(new IndexBoost(context));\n+ }\n } else if (context.getParseFieldMatcher().match(currentFieldName, SORT_FIELD)) {\n sorts = new ArrayList<>(SortBuilder.fromXContent(context));\n } else if (context.getParseFieldMatcher().match(currentFieldName, RESCORE_FIELD)) {\n@@ -1191,18 +1171,13 @@ public void innerToXContent(XContentBuilder builder, Params params) throws IOExc\n builder.field(SLICE.getPreferredName(), sliceBuilder);\n }\n \n- if (indexBoost != null) {\n- builder.startObject(INDICES_BOOST_FIELD.getPreferredName());\n- assert !indexBoost.containsKey(null);\n- final Object[] keys = indexBoost.keys;\n- final float[] values = indexBoost.values;\n- for (int i = 0; i < keys.length; i++) {\n- if (keys[i] != null) {\n- builder.field((String) keys[i], values[i]);\n- }\n- }\n+ builder.startArray(INDICES_BOOST_FIELD.getPreferredName());\n+ for (IndexBoost ib : indexBoosts) {\n+ builder.startObject();\n+ builder.field(ib.index, ib.boost);\n builder.endObject();\n }\n+ builder.endArray();\n \n if (aggregations != null) {\n builder.field(AGGREGATIONS_FIELD.getPreferredName(), aggregations);\n@@ -1237,6 +1212,91 @@ public void innerToXContent(XContentBuilder builder, Params 
params) throws IOExc\n }\n }\n \n+ public static class IndexBoost implements Writeable, ToXContent {\n+ private final String index;\n+ private final float boost;\n+\n+ IndexBoost(String index, float boost) {\n+ this.index = index;\n+ this.boost = boost;\n+ }\n+\n+ IndexBoost(StreamInput in) throws IOException {\n+ index = in.readString();\n+ boost = in.readFloat();\n+ }\n+\n+ IndexBoost(QueryParseContext context) throws IOException {\n+ XContentParser parser = context.parser();\n+ XContentParser.Token token = parser.currentToken();\n+\n+ if (token == XContentParser.Token.START_OBJECT) {\n+ token = parser.nextToken();\n+ if (token == XContentParser.Token.FIELD_NAME) {\n+ index = parser.currentName();\n+ } else {\n+ throw new ParsingException(parser.getTokenLocation(), \"Expected [\" + XContentParser.Token.FIELD_NAME +\n+ \"] in [\" + INDICES_BOOST_FIELD + \"] but found [\" + token + \"]\", parser.getTokenLocation());\n+ }\n+ token = parser.nextToken();\n+ if (token == XContentParser.Token.VALUE_NUMBER) {\n+ boost = parser.floatValue();\n+ } else {\n+ throw new ParsingException(parser.getTokenLocation(), \"Expected [\" + XContentParser.Token.VALUE_NUMBER +\n+ \"] in [\" + INDICES_BOOST_FIELD + \"] but found [\" + token + \"]\", parser.getTokenLocation());\n+ }\n+ token = parser.nextToken();\n+ if (token != XContentParser.Token.END_OBJECT) {\n+ throw new ParsingException(parser.getTokenLocation(), \"Expected [\" + XContentParser.Token.END_OBJECT +\n+ \"] in [\" + INDICES_BOOST_FIELD + \"] but found [\" + token + \"]\", parser.getTokenLocation());\n+ }\n+ } else {\n+ throw new ParsingException(parser.getTokenLocation(), \"Expected [\" + XContentParser.Token.START_OBJECT +\n+ \"] in [\" + parser.currentName() + \"] but found [\" + token + \"]\", parser.getTokenLocation());\n+ }\n+ }\n+\n+ public String getIndex() {\n+ return index;\n+ }\n+\n+ public float getBoost() {\n+ return boost;\n+ }\n+\n+ @Override\n+ public void writeTo(StreamOutput out) throws IOException {\n+ out.writeString(index);\n+ out.writeFloat(boost);\n+ }\n+\n+ @Override\n+ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n+ builder.startObject();\n+ builder.field(index, boost);\n+ builder.endObject();\n+ return builder;\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return Objects.hash(index, boost);\n+ }\n+\n+ @Override\n+ public boolean equals(Object obj) {\n+ if (obj == null) {\n+ return false;\n+ }\n+ if (getClass() != obj.getClass()) {\n+ return false;\n+ }\n+ IndexBoost other = (IndexBoost) obj;\n+ return Objects.equals(index, other.index)\n+ && Objects.equals(boost, other.boost);\n+ }\n+\n+ }\n public static class ScriptField implements Writeable, ToXContent {\n \n private final boolean ignoreFailure;\n@@ -1352,8 +1412,9 @@ public boolean equals(Object obj) {\n @Override\n public int hashCode() {\n return Objects.hash(aggregations, explain, fetchSourceContext, docValueFields, storedFieldsContext, from, highlightBuilder,\n- indexBoost, minScore, postQueryBuilder, queryBuilder, rescoreBuilders, scriptFields, size, sorts, searchAfterBuilder,\n- sliceBuilder, stats, suggestBuilder, terminateAfter, timeout, trackScores, version, profile, extBuilders);\n+ indexBoosts, minScore, postQueryBuilder, queryBuilder, rescoreBuilders, scriptFields, size,\n+ sorts, searchAfterBuilder, sliceBuilder, stats, suggestBuilder, terminateAfter, timeout, trackScores, version,\n+ profile, extBuilders);\n }\n \n @Override\n@@ -1372,7 +1433,7 @@ public boolean equals(Object obj) {\n && 
Objects.equals(storedFieldsContext, other.storedFieldsContext)\n && Objects.equals(from, other.from)\n && Objects.equals(highlightBuilder, other.highlightBuilder)\n- && Objects.equals(indexBoost, other.indexBoost)\n+ && Objects.equals(indexBoosts, other.indexBoosts)\n && Objects.equals(minScore, other.minScore)\n && Objects.equals(postQueryBuilder, other.postQueryBuilder)\n && Objects.equals(queryBuilder, other.queryBuilder)", "filename": "core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java", "status": "modified" }, { "diff": "@@ -37,7 +37,6 @@\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.similarity.SimilarityService;\n-import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.search.SearchExtBuilder;\n import org.elasticsearch.search.SearchShardTarget;\n import org.elasticsearch.search.aggregations.SearchContextAggregations;\n@@ -144,11 +143,6 @@ public float queryBoost() {\n return in.queryBoost();\n }\n \n- @Override\n- public SearchContext queryBoost(float queryBoost) {\n- return in.queryBoost(queryBoost);\n- }\n-\n @Override\n public long getOriginNanoTime() {\n return in.getOriginNanoTime();", "filename": "core/src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java", "status": "modified" }, { "diff": "@@ -148,8 +148,6 @@ protected void alreadyClosed() {\n \n public abstract float queryBoost();\n \n- public abstract SearchContext queryBoost(float queryBoost);\n-\n public abstract long getOriginNanoTime();\n \n public abstract ScrollContext scrollContext();", "filename": "core/src/main/java/org/elasticsearch/search/internal/SearchContext.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.search.internal;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.common.Strings;\n@@ -34,6 +35,7 @@\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n \n import java.io.IOException;\n+import java.util.Optional;\n \n /**\n * Shard level search request that gets created and consumed on the local node.\n@@ -63,6 +65,7 @@ public class ShardSearchLocalRequest implements ShardSearchRequest {\n private Scroll scroll;\n private String[] types = Strings.EMPTY_ARRAY;\n private AliasFilter aliasFilter;\n+ private float indexBoost;\n private SearchSourceBuilder source;\n private Boolean requestCache;\n private long nowInMillis;\n@@ -73,9 +76,9 @@ public class ShardSearchLocalRequest implements ShardSearchRequest {\n }\n \n ShardSearchLocalRequest(SearchRequest searchRequest, ShardId shardId, int numberOfShards,\n- AliasFilter aliasFilter, long nowInMillis) {\n+ AliasFilter aliasFilter, float indexBoost, long nowInMillis) {\n this(shardId, numberOfShards, searchRequest.searchType(),\n- searchRequest.source(), searchRequest.types(), searchRequest.requestCache(), aliasFilter);\n+ searchRequest.source(), searchRequest.types(), searchRequest.requestCache(), aliasFilter, indexBoost);\n this.scroll = searchRequest.scroll();\n this.nowInMillis = nowInMillis;\n }\n@@ -85,17 +88,19 @@ public ShardSearchLocalRequest(ShardId shardId, String[] types, long nowInMillis\n this.nowInMillis = nowInMillis;\n this.aliasFilter = aliasFilter;\n this.shardId = shardId;\n+ indexBoost = 1.0f;\n }\n \n public ShardSearchLocalRequest(ShardId shardId, int numberOfShards, SearchType searchType, 
SearchSourceBuilder source, String[] types,\n- Boolean requestCache, AliasFilter aliasFilter) {\n+ Boolean requestCache, AliasFilter aliasFilter, float indexBoost) {\n this.shardId = shardId;\n this.numberOfShards = numberOfShards;\n this.searchType = searchType;\n this.source = source;\n this.types = types;\n this.requestCache = requestCache;\n this.aliasFilter = aliasFilter;\n+ this.indexBoost = indexBoost;\n }\n \n \n@@ -134,6 +139,11 @@ public QueryBuilder filteringAliases() {\n return aliasFilter.getQueryBuilder();\n }\n \n+ @Override\n+ public float indexBoost() {\n+ return indexBoost;\n+ }\n+\n @Override\n public long nowInMillis() {\n return nowInMillis;\n@@ -167,6 +177,20 @@ protected void innerReadFrom(StreamInput in) throws IOException {\n source = in.readOptionalWriteable(SearchSourceBuilder::new);\n types = in.readStringArray();\n aliasFilter = new AliasFilter(in);\n+ if (in.getVersion().onOrAfter(Version.V_5_2_0_UNRELEASED)) {\n+ indexBoost = in.readFloat();\n+ } else {\n+ // Nodes < 5.2.0 doesn't send index boost. Read it from source.\n+ if (source != null) {\n+ Optional<SearchSourceBuilder.IndexBoost> boost = source.indexBoosts()\n+ .stream()\n+ .filter(ib -> ib.getIndex().equals(shardId.getIndexName()))\n+ .findFirst();\n+ indexBoost = boost.isPresent() ? boost.get().getBoost() : 1.0f;\n+ } else {\n+ indexBoost = 1.0f;\n+ }\n+ }\n nowInMillis = in.readVLong();\n requestCache = in.readOptionalBoolean();\n }\n@@ -181,6 +205,9 @@ protected void innerWriteTo(StreamOutput out, boolean asKey) throws IOException\n out.writeOptionalWriteable(source);\n out.writeStringArray(types);\n aliasFilter.writeTo(out);\n+ if (out.getVersion().onOrAfter(Version.V_5_2_0_UNRELEASED)) {\n+ out.writeFloat(indexBoost);\n+ }\n if (!asKey) {\n out.writeVLong(nowInMillis);\n }", "filename": "core/src/main/java/org/elasticsearch/search/internal/ShardSearchLocalRequest.java", "status": "modified" }, { "diff": "@@ -62,6 +62,8 @@ public interface ShardSearchRequest {\n \n QueryBuilder filteringAliases();\n \n+ float indexBoost();\n+\n long nowInMillis();\n \n Boolean requestCache();", "filename": "core/src/main/java/org/elasticsearch/search/internal/ShardSearchRequest.java", "status": "modified" }, { "diff": "@@ -54,8 +54,8 @@ public ShardSearchTransportRequest(){\n }\n \n public ShardSearchTransportRequest(SearchRequest searchRequest, ShardId shardId, int numberOfShards,\n- AliasFilter aliasFilter, long nowInMillis) {\n- this.shardSearchLocalRequest = new ShardSearchLocalRequest(searchRequest, shardId, numberOfShards, aliasFilter, nowInMillis);\n+ AliasFilter aliasFilter, float indexBoost, long nowInMillis) {\n+ this.shardSearchLocalRequest = new ShardSearchLocalRequest(searchRequest, shardId, numberOfShards, aliasFilter, indexBoost, nowInMillis);\n this.originalIndices = new OriginalIndices(searchRequest);\n }\n \n@@ -111,6 +111,11 @@ public QueryBuilder filteringAliases() {\n return shardSearchLocalRequest.filteringAliases();\n }\n \n+ @Override\n+ public float indexBoost() {\n+ return shardSearchLocalRequest.indexBoost();\n+ }\n+\n @Override\n public long nowInMillis() {\n return shardSearchLocalRequest.nowInMillis();", "filename": "core/src/main/java/org/elasticsearch/search/internal/ShardSearchTransportRequest.java", "status": "modified" }, { "diff": "@@ -22,14 +22,13 @@\n import org.apache.lucene.util.Counter;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.index.query.ParsedQuery;\n-import org.elasticsearch.search.fetch.StoredFieldsContext;\n import 
org.elasticsearch.search.aggregations.SearchContextAggregations;\n import org.elasticsearch.search.fetch.FetchSearchResult;\n+import org.elasticsearch.search.fetch.StoredFieldsContext;\n import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n import org.elasticsearch.search.fetch.subphase.FetchSourceContext;\n import org.elasticsearch.search.fetch.subphase.ScriptFieldsContext;\n import org.elasticsearch.search.fetch.subphase.highlight.SearchContextHighlight;\n-import org.elasticsearch.search.lookup.SearchLookup;\n import org.elasticsearch.search.query.QuerySearchResult;\n import org.elasticsearch.search.rescore.RescoreSearchContext;\n import org.elasticsearch.search.sort.SortAndFormats;\n@@ -85,11 +84,6 @@ public Query searchFilter(String[] types) {\n throw new UnsupportedOperationException(\"this context should be read only\");\n }\n \n- @Override\n- public SearchContext queryBoost(float queryBoost) {\n- throw new UnsupportedOperationException(\"Not supported\");\n- }\n-\n @Override\n public SearchContext scrollContext(ScrollContext scrollContext) {\n throw new UnsupportedOperationException(\"Not supported\");", "filename": "core/src/main/java/org/elasticsearch/search/internal/SubSearchContext.java", "status": "modified" }, { "diff": "@@ -88,7 +88,7 @@ public void sendFreeContext(DiscoveryNode node, long contextId, SearchRequest re\n lookup.put(primaryNode.getId(), primaryNode);\n Map<String, AliasFilter> aliasFilters = Collections.singletonMap(\"_na_\", new AliasFilter(null, Strings.EMPTY_ARRAY));\n AbstractSearchAsyncAction asyncAction = new AbstractSearchAsyncAction<TestSearchPhaseResult>(logger, transportService, lookup::get,\n- aliasFilters, null, request, responseListener, shardsIter, 0, 0, null) {\n+ aliasFilters, Collections.emptyMap(), null, request, responseListener, shardsIter, 0, 0, null) {\n TestSearchResponse response = new TestSearchResponse();\n \n @Override", "filename": "core/src/test/java/org/elasticsearch/action/search/SearchAsyncActionTests.java", "status": "modified" }, { "diff": "@@ -87,6 +87,11 @@ public QueryBuilder filteringAliases() {\n return null;\n }\n \n+ @Override\n+ public float indexBoost() {\n+ return 1.0f;\n+ }\n+\n @Override\n public long nowInMillis() {\n return 0;", "filename": "core/src/test/java/org/elasticsearch/index/SearchSlowLogTests.java", "status": "modified" }, { "diff": "@@ -185,7 +185,7 @@ public void onFailure(Exception e) {\n try {\n QuerySearchResultProvider querySearchResultProvider = service.executeQueryPhase(\n new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.DEFAULT,\n- new SearchSourceBuilder(), new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY)),\n+ new SearchSourceBuilder(), new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1.0f),\n new SearchTask(123L, \"\", \"\", \"\", null));\n IntArrayList intCursors = new IntArrayList(1);\n intCursors.add(0);\n@@ -220,7 +220,8 @@ public void testTimeout() throws IOException {\n new SearchSourceBuilder(),\n new String[0],\n false,\n- new AliasFilter(null, Strings.EMPTY_ARRAY)),\n+ new AliasFilter(null, Strings.EMPTY_ARRAY),\n+ 1.0f),\n null);\n // the search context should inherit the default timeout\n assertThat(contextWithDefaultTimeout.timeout(), equalTo(TimeValue.timeValueSeconds(5)));\n@@ -234,7 +235,8 @@ public void testTimeout() throws IOException {\n new SearchSourceBuilder().timeout(TimeValue.timeValueSeconds(seconds)),\n new String[0],\n false,\n- new AliasFilter(null, Strings.EMPTY_ARRAY)),\n+ new AliasFilter(null, 
Strings.EMPTY_ARRAY),\n+ 1.0f),\n null);\n // the search context should inherit the query timeout\n assertThat(context.timeout(), equalTo(TimeValue.timeValueSeconds(seconds)));", "filename": "core/src/test/java/org/elasticsearch/search/SearchServiceTests.java", "status": "modified" }, { "diff": "@@ -301,4 +301,78 @@ public void testEmptyQuery() throws IOException {\n String query = \"{ \\\"query\\\": {} }\";\n assertParseSearchSource(builder, new BytesArray(query), ParseFieldMatcher.EMPTY);\n }\n+\n+ public void testParseIndicesBoost() throws IOException {\n+ {\n+ String restContent = \" { \\\"indices_boost\\\": {\\\"foo\\\": 1.0, \\\"bar\\\": 2.0}}\";\n+ try (XContentParser parser = XContentFactory.xContent(restContent).createParser(restContent)) {\n+ SearchSourceBuilder searchSourceBuilder = SearchSourceBuilder.fromXContent(createParseContext(parser),\n+ searchRequestParsers.aggParsers, searchRequestParsers.suggesters, searchRequestParsers.searchExtParsers);\n+ assertEquals(2, searchSourceBuilder.indexBoosts().size());\n+ assertEquals(new SearchSourceBuilder.IndexBoost(\"foo\", 1.0f), searchSourceBuilder.indexBoosts().get(0));\n+ assertEquals(new SearchSourceBuilder.IndexBoost(\"bar\", 2.0f), searchSourceBuilder.indexBoosts().get(1));\n+ }\n+ }\n+\n+ {\n+ String restContent = \"{\" +\n+ \" \\\"indices_boost\\\" : [\\n\" +\n+ \" { \\\"foo\\\" : 1.0 },\\n\" +\n+ \" { \\\"bar\\\" : 2.0 },\\n\" +\n+ \" { \\\"baz\\\" : 3.0 }\\n\" +\n+ \" ]}\";\n+ try (XContentParser parser = XContentFactory.xContent(restContent).createParser(restContent)) {\n+ SearchSourceBuilder searchSourceBuilder = SearchSourceBuilder.fromXContent(createParseContext(parser),\n+ searchRequestParsers.aggParsers, searchRequestParsers.suggesters, searchRequestParsers.searchExtParsers);\n+ assertEquals(3, searchSourceBuilder.indexBoosts().size());\n+ assertEquals(new SearchSourceBuilder.IndexBoost(\"foo\", 1.0f), searchSourceBuilder.indexBoosts().get(0));\n+ assertEquals(new SearchSourceBuilder.IndexBoost(\"bar\", 2.0f), searchSourceBuilder.indexBoosts().get(1));\n+ assertEquals(new SearchSourceBuilder.IndexBoost(\"baz\", 3.0f), searchSourceBuilder.indexBoosts().get(2));\n+ }\n+ }\n+\n+ {\n+ String restContent = \"{\" +\n+ \" \\\"indices_boost\\\" : [\\n\" +\n+ \" { \\\"foo\\\" : 1.0, \\\"bar\\\": 2.0}\\n\" + // invalid format\n+ \" ]}\";\n+\n+ assertIndicesBoostParseErrorMessage(restContent, \"Expected [END_OBJECT] in [indices_boost] but found [FIELD_NAME]\");\n+ }\n+\n+ {\n+ String restContent = \"{\" +\n+ \" \\\"indices_boost\\\" : [\\n\" +\n+ \" {}\\n\" + // invalid format\n+ \" ]}\";\n+\n+ assertIndicesBoostParseErrorMessage(restContent, \"Expected [FIELD_NAME] in [indices_boost] but found [END_OBJECT]\");\n+ }\n+\n+ {\n+ String restContent = \"{\" +\n+ \" \\\"indices_boost\\\" : [\\n\" +\n+ \" { \\\"foo\\\" : \\\"bar\\\"}\\n\" + // invalid format\n+ \" ]}\";\n+\n+ assertIndicesBoostParseErrorMessage(restContent, \"Expected [VALUE_NUMBER] in [indices_boost] but found [VALUE_STRING]\");\n+ }\n+\n+ {\n+ String restContent = \"{\" +\n+ \" \\\"indices_boost\\\" : [\\n\" +\n+ \" { \\\"foo\\\" : {\\\"bar\\\": 1}}\\n\" + // invalid format\n+ \" ]}\";\n+\n+ assertIndicesBoostParseErrorMessage(restContent, \"Expected [VALUE_NUMBER] in [indices_boost] but found [START_OBJECT]\");\n+ }\n+ }\n+\n+ private void assertIndicesBoostParseErrorMessage(String restContent, String expectedErrorMessage) throws IOException {\n+ try (XContentParser parser = XContentFactory.xContent(restContent).createParser(restContent)) {\n+ ParsingException e = 
expectThrows(ParsingException.class, () -> SearchSourceBuilder.fromXContent(createParseContext(parser),\n+ searchRequestParsers.aggParsers, searchRequestParsers.suggesters, searchRequestParsers.searchExtParsers));\n+ assertEquals(expectedErrorMessage, e.getMessage());\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/builder/SearchSourceBuilderTests.java", "status": "modified" }, { "diff": "@@ -81,6 +81,7 @@ public void testSerialization() throws Exception {\n assertEquals(deserializedRequest.cacheKey(), shardSearchTransportRequest.cacheKey());\n assertNotSame(deserializedRequest, shardSearchTransportRequest);\n assertEquals(deserializedRequest.filteringAliases(), shardSearchTransportRequest.filteringAliases());\n+ assertEquals(deserializedRequest.indexBoost(), shardSearchTransportRequest.indexBoost(), 0.0f);\n }\n }\n }\n@@ -96,7 +97,7 @@ private ShardSearchTransportRequest createShardSearchTransportRequest() throws I\n filteringAliases = new AliasFilter(null, Strings.EMPTY_ARRAY);\n }\n return new ShardSearchTransportRequest(searchRequest, shardId,\n- randomIntBetween(1, 100), filteringAliases, Math.abs(randomLong()));\n+ randomIntBetween(1, 100), filteringAliases, randomBoolean() ? 1.0f : randomFloat(), Math.abs(randomLong()));\n }\n \n public void testFilteringAliases() throws Exception {\n@@ -213,4 +214,24 @@ public void testSerialize50Request() throws IOException {\n }\n }\n \n+ // BWC test for changes from #21393\n+ public void testSerialize50RequestForIndexBoost() throws IOException {\n+ BytesArray requestBytes = new BytesArray(Base64.getDecoder()\n+ // this is a base64 encoded request generated with the same input\n+ .decode(\"AAZpbmRleDEWTjEyM2trbHFUT21XZDY1Z2VDYlo5ZwABBAABAAIA/wD/////DwABBmluZGV4MUAAAAAAAAAAAP////8PAAAAAAAAAgAAAA\" +\n+ \"AAAPa/q8mOKwIAJg==\"));\n+\n+ try (StreamInput in = new NamedWriteableAwareStreamInput(requestBytes.streamInput(), namedWriteableRegistry)) {\n+ in.setVersion(Version.V_5_0_0);\n+ ShardSearchTransportRequest readRequest = new ShardSearchTransportRequest();\n+ readRequest.readFrom(in);\n+ assertEquals(0, in.available());\n+ assertEquals(2.0f, readRequest.indexBoost(), 0);\n+\n+ BytesStreamOutput output = new BytesStreamOutput();\n+ output.setVersion(Version.V_5_0_0);\n+ readRequest.writeTo(output);\n+ assertEquals(output.bytes().toBytesRef(), requestBytes.toBytesRef());\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/internal/ShardSearchTransportRequestTests.java", "status": "modified" }, { "diff": "@@ -368,3 +368,18 @@ buildRestTests.setups['range_index'] = '''\n body: |\n {\"index\":{\"_id\": 1}}\n {\"expected_attendees\": {\"gte\": 10, \"lte\": 20}, \"time_frame\": {\"gte\": \"2015-10-31 12:00:00\", \"lte\": \"2015-11-01\"}}'''\n+\n+// Used by index boost doc\n+buildRestTests.setups['index_boost'] = '''\n+ - do:\n+ indices.create:\n+ index: index1\n+ - do:\n+ indices.create:\n+ index: index2\n+\n+ - do:\n+ indices.put_alias:\n+ index: index1\n+ name: alias1\n+'''", "filename": "docs/build.gradle", "status": "modified" }, { "diff": "@@ -6,6 +6,7 @@ across more than one indices. This is very handy when hits coming from\n one index matter more than hits coming from another index (think social\n graph where each user has an index).\n \n+deprecated[5.2.0, This format is deprecated. 
Please use array format instead.]\n [source,js]\n --------------------------------------------------\n GET /_search\n@@ -17,3 +18,23 @@ GET /_search\n }\n --------------------------------------------------\n // CONSOLE\n+// TEST[setup:index_boost warning:Object format in indices_boost is deprecated, please use array format instead]\n+\n+You can also specify it as an array to control the order of boosts.\n+\n+[source,js]\n+--------------------------------------------------\n+GET /_search\n+{\n+ \"indices_boost\" : [\n+ { \"alias1\" : 1.4 },\n+ { \"index*\" : 1.3 }\n+ ]\n+}\n+--------------------------------------------------\n+// CONSOLE\n+// TEST[continued]\n+\n+This is important when you use aliases or wildcard expression.\n+If multiple matches are found, the first match will be used.\n+For example, if an index is included in both `alias1` and `index*`, boost value of `1.4` is applied.\n\\ No newline at end of file", "filename": "docs/reference/search/request/index-boost.asciidoc", "status": "modified" }, { "diff": "@@ -0,0 +1,196 @@\n+setup:\n+ - do:\n+ indices.create:\n+ index: test_1\n+ - do:\n+ indices.create:\n+ index: test_2\n+\n+ - do:\n+ indices.put_alias:\n+ index: test_1\n+ name: alias_1\n+\n+ - do:\n+ indices.put_alias:\n+ index: test_2\n+ name: alias_2\n+\n+ - do:\n+ index:\n+ index: test_1\n+ type: test\n+ id: 1\n+ body: { foo: bar }\n+\n+ - do:\n+ index:\n+ index: test_2\n+ type: test\n+ id: 1\n+ body: { foo: bar }\n+\n+ - do:\n+ indices.refresh:\n+ index: [test_1, test_2]\n+\n+---\n+\"Indices boost using object\":\n+ - skip:\n+ version: \" - 5.1.99\"\n+ reason: deprecation was added in 5.2.0\n+ features: \"warnings\"\n+\n+ - do:\n+ warnings:\n+ - 'Object format in indices_boost is deprecated, please use array format instead'\n+ search:\n+ index: _all\n+ body:\n+ indices_boost: {test_1: 2.0, test_2: 1.0}\n+\n+ - match: { hits.total: 2 }\n+ - match: { hits.hits.0._index: test_1 }\n+ - match: { hits.hits.1._index: test_2 }\n+\n+ - do:\n+ warnings:\n+ - 'Object format in indices_boost is deprecated, please use array format instead'\n+ search:\n+ index: _all\n+ body:\n+ indices_boost: {test_1: 1.0, test_2: 2.0}\n+\n+ - match: { hits.total: 2 }\n+ - match: { hits.hits.0._index: test_2 }\n+ - match: { hits.hits.1._index: test_1 }\n+\n+---\n+\"Indices boost using array\":\n+ - skip:\n+ version: \" - 5.1.99\"\n+ reason: array format was added in 5.2.0\n+\n+ - do:\n+ search:\n+ index: _all\n+ body:\n+ indices_boost: [{test_1: 2.0}, {test_2: 1.0}]\n+\n+ - match: { hits.total: 2 }\n+ - match: { hits.hits.0._index: test_1 }\n+ - match: { hits.hits.1._index: test_2 }\n+\n+ - do:\n+ search:\n+ index: _all\n+ body:\n+ indices_boost: [{test_1: 1.0}, {test_2: 2.0}]\n+\n+ - match: { hits.total: 2 }\n+ - match: { hits.hits.0._index: test_2 }\n+ - match: { hits.hits.1._index: test_1 }\n+\n+---\n+\"Indices boost using array with alias\":\n+ - skip:\n+ version: \" - 5.1.99\"\n+ reason: array format was added in 5.2.0\n+\n+ - do:\n+ search:\n+ index: _all\n+ body:\n+ indices_boost: [{alias_1: 2.0}]\n+\n+ - match: { hits.total: 2}\n+ - match: { hits.hits.0._index: test_1 }\n+ - match: { hits.hits.1._index: test_2 }\n+\n+ - do:\n+ search:\n+ index: _all\n+ body:\n+ indices_boost: [{alias_2: 2.0}]\n+\n+ - match: { hits.total: 2}\n+ - match: { hits.hits.0._index: test_2 }\n+ - match: { hits.hits.1._index: test_1 }\n+\n+---\n+\"Indices boost using array with wildcard\":\n+ - skip:\n+ version: \" - 5.1.99\"\n+ reason: array format was added in 5.2.0\n+\n+ - do:\n+ search:\n+ index: _all\n+ body:\n+ 
indices_boost: [{\"*_1\": 2.0}]\n+\n+ - match: { hits.total: 2}\n+ - match: { hits.hits.0._index: test_1 }\n+ - match: { hits.hits.1._index: test_2 }\n+\n+ - do:\n+ search:\n+ index: _all\n+ body:\n+ indices_boost: [{\"*_2\": 2.0}]\n+\n+ - match: { hits.total: 2}\n+ - match: { hits.hits.0._index: test_2 }\n+ - match: { hits.hits.1._index: test_1 }\n+\n+---\n+\"Indices boost using array multiple match\":\n+ - skip:\n+ version: \" - 5.1.99\"\n+ reason: array format was added in 5.2.0\n+\n+ - do:\n+ search:\n+ index: _all\n+ body:\n+ # First match (3.0) is used for test_1\n+ indices_boost: [{\"*_1\": 3.0}, {alias_1: 1.0}, {test_2: 2.0}]\n+\n+ - match: { hits.total: 2}\n+ - match: { hits.hits.0._index: test_1 }\n+ - match: { hits.hits.1._index: test_2 }\n+\n+ - do:\n+ search:\n+ index: _all\n+ body:\n+ # First match (1.0) is used for test_1\n+ indices_boost: [{\"*_1\": 1.0}, {test_2: 2.0}, {alias_1: 3.0}]\n+\n+ - match: { hits.total: 2}\n+ - match: { hits.hits.0._index: test_2 }\n+ - match: { hits.hits.1._index: test_1 }\n+\n+---\n+\"Indices boost for nonexistent index/alias\":\n+ - skip:\n+ version: \" - 5.1.99\"\n+ reason: array format was added in 5.2.0\n+\n+ - do:\n+ catch: /no such index/\n+ search:\n+ index: _all\n+ body:\n+ indices_boost: [{nonexistent: 2.0}, {test_1: 1.0}, {test_2: 2.0}]\n+\n+ - do:\n+ search:\n+ index: _all\n+ ignore_unavailable: true\n+ body:\n+ indices_boost: [{nonexistent: 2.0}, {test_1: 1.0}, {test_2: 2.0}]\n+\n+ - match: { hits.total: 2}\n+ - match: { hits.hits.0._index: test_2 }\n+ - match: { hits.hits.1._index: test_1 }", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search/40_indices_boost.yaml", "status": "added" }, { "diff": "@@ -156,11 +156,6 @@ public float queryBoost() {\n return 0;\n }\n \n- @Override\n- public SearchContext queryBoost(float queryBoost) {\n- return null;\n- }\n-\n @Override\n public long getOriginNanoTime() {\n return originNanoTime;", "filename": "test/framework/src/main/java/org/elasticsearch/test/TestSearchContext.java", "status": "modified" } ] }
{ "body": "On ubuntu 14.04 for Elasticsearch 1.7.0 debian package,\nIf we start twice elasticsearch very shortly, the pidofproc function doesn't return the PID of the first elasticsearch.\nSo 2 elasticsearch can be started at the same time. In my case this causes an OOM trigger. \n\nHow to reproduce:\n\n```\n# service elasticsearch stop\n# service elasticsearch start; service elasticsearch start; \n```\n", "comments": [ { "body": "Hi @rvrignaud \n\nAs a workaround, you can set this in your elasticsearch.yml:\n\n```\nnode.max_local_storage_nodes: 1\n```\n", "created_at": "2015-08-07T11:31:07Z" }, { "body": "Hey @clintongormley,\nI already have max_local_storage_nodes set to 1 but that doesn't prevent the second java process to start and to allocate memory (XMS and XMX setted to the same value).\n", "created_at": "2015-08-07T11:50:41Z" }, { "body": "With the rpm init script we place a lockfile before starting ES and remove it afterwards.\nIf someone would try a second startup it would find the lockfile and abort directly.\nIt should be a fairly easy fix to implement this in the ubuntu/debian init script as well.\nSince more things are moving to systemd this will not pose an issue later.\n", "created_at": "2015-08-27T13:38:57Z" }, { "body": "I think the problem here is these two lines of the init script:\n- [start-stop-daemon is given `-b` flag](https://github.com/elastic/elasticsearch/blob/master/distribution/deb/src/main/packaging/init.d/elasticsearch#L140)\n- [elasticsearch is given -d flag](https://github.com/elastic/elasticsearch/blob/master/distribution/deb/src/main/packaging/init.d/elasticsearch#L82)\n\nGiven the above, I think Elasticsearch is backgrounding itself (`elasticsearch -d`) and `start-stop-daemon` is also forking, so I _think_ start-stop-daemon is tracking the pid of the parent Elasticsearch process which immediately dies after Elasticsearch daemonizes.\n\nMy suggestion is that one of the two things are changed:\n- Don't give `-b` to start-stop-daemon\n- or, don't give `-d` to elasticsearch.\n", "created_at": "2016-06-09T04:55:13Z" }, { "body": "I think this is a dup of https://github.com/elastic/elasticsearch/issues/8796\n", "created_at": "2016-06-09T04:59:01Z" }, { "body": "We now default `node.max_local_storage_nodes` to 1, so closing this.\n", "created_at": "2016-09-27T15:33:38Z" }, { "body": "@dakrone as replied here https://github.com/elastic/elasticsearch/issues/12716#issuecomment-128681997 `node.max_local_storage_nodes` set to 1 does not fix the problem. IMHO this should be kept open.\n", "created_at": "2016-09-27T16:03:06Z" }, { "body": "@rvrignaud ahh I see, my apologies! I think this is related to the daemonization that we do and would have to be fixed there then, I'm re-opening this. Thanks for catching this!\n", "created_at": "2016-09-27T16:16:03Z" }, { "body": "I think this was actually fixed by https://github.com/elastic/elasticsearch/commit/5413efc570dfed0db26ede2cc0ef6ab737417748 since we no longer double daemonize.", "created_at": "2018-05-03T04:10:22Z" } ], "number": 12716, "title": "Elasticsearch ubuntu init script doesn't prevent to start 2 elasticsearch" }
{ "body": "On ubuntu 14.04, which uses upstart, where as our debian package uses\r\nsysvinit, there is no stdout/stderr message printed when starting up,\r\nbecause the start-stop-daemon swallows it.\r\n\r\nAs Elasticsearch is started to daemonize, we can remove the background\r\nflag from the start-stop-daemon and thus see, if the system does not have\r\nenough memory for starting up - something that happens often on VMs, since\r\nElasticsearch 5.0 uses 2gb by default instead of one.\r\n\r\nRelates #21300\r\nRelates #12716\r\n\r\nI'd appreciate few more hints of what could go wrong with this approach that I may be missing.", "number": 21343, "review_comments": [], "title": "Debian: configure start-stop-daemon to not go into background" }
{ "commits": [ { "message": "Packaging Deb: configure start-stop-daemon to not go into background\n\nOn ubuntu 14.04, which uses upstart, where as our debian package uses\nsysvinit, there is no stdout/stderr message printed when starting up,\nbecause the start-stop-daemon swallows it.\n\nAs Elasticsearch is started to daemonize, we can remove the background\nflag from the start-stop-daemon and thus see, if the system does not have\nenough memory for starting up - something that happens often on VMs, since\nElasticsearch 5.0 uses 2gb by default instead of one.\n\nRelates #21300\nRelates #12716" } ], "files": [ { "diff": "@@ -137,7 +137,7 @@ case \"$1\" in\n \tfi\n \n \t# Start Daemon\n-\tstart-stop-daemon -d $ES_HOME --start -b --user \"$ES_USER\" -c \"$ES_USER\" --pidfile \"$PID_FILE\" --exec $DAEMON -- $DAEMON_OPTS\n+\tstart-stop-daemon -d $ES_HOME --start --user \"$ES_USER\" -c \"$ES_USER\" --pidfile \"$PID_FILE\" --exec $DAEMON -- $DAEMON_OPTS\n \treturn=$?\n \tif [ $return -eq 0 ]; then\n \t\ti=0", "filename": "distribution/deb/src/main/packaging/init.d/elasticsearch", "status": "modified" } ] }
{ "body": "<!--\r\nGitHub is reserved for bug reports and feature requests. The best place\r\nto ask a general question is at the Elastic Discourse forums at\r\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\r\na feature request, please include one and only one of the below blocks\r\nin your new issue. Note that whether you're filing a bug report or a\r\nfeature request, ensure that your submission is for an\r\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\r\nBug reports on an OS that we do not support or feature requests\r\nspecific to an OS that we do not support will be closed.\r\n-->\r\n\r\n<!--\r\nIf you are filing a bug report, please remove the below feature\r\nrequest block and provide responses for all of the below items.\r\n-->\r\n\r\n**Elasticsearch version**: \r\n5.0.0\r\n\r\n**Plugins installed**: \r\nnone\r\n\r\n**JVM version**: \r\nopenjdk version \"1.8.0_111\"\r\nOpenJDK Runtime Environment (build 1.8.0_111-b15)\r\nOpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)\r\n\r\n**OS version**:\r\n Operating System: CentOS Linux 7 (Core)\r\n CPE OS Name: cpe:/o:centos:centos:7\r\n Kernel: Linux 3.10.0-327.el7.x86_64\r\n Architecture: x86-64\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWhen I create a snapshot in any repository, then get all snapshots in the repository. I see two snapshots IN_PROGRESS! Then when the process finishes, I only see one SUCCESSFUL snapshot. The snapshots seem to work correctly, it's just that it shows duplicates while IN_PROGRESS. You can even see below, they have the same uuid: RHlVXKe-TKqB47nAoeh3Sw. Seems like this is probably not intended behavior!\r\n\r\n**Steps to reproduce**:\r\n \r\n1. (CREATE) $ curl -XPUT localhost:9200/_snapshot/brokerhistory/test -d'{\"indices\":\"brokerhistory_2016-11-04\"}'\r\n```json \r\n{\"accepted\":true}\r\n```\r\n \r\n2. (IN_PROGRESS) $ curl -XGET localhost:9200/_snapshot/brokerhistory/_all?pretty\r\n```json\r\n{\r\n \"snapshots\" : [\r\n {\r\n \"snapshot\" : \"test\",\r\n \"uuid\" : \"RHlVXKe-TKqB47nAoeh3Sw\",\r\n \"version_id\" : 5000099,\r\n \"version\" : \"5.0.0\",\r\n \"indices\" : [\r\n \"brokerhistory_2016-11-04\"\r\n ],\r\n \"state\" : \"IN_PROGRESS\",\r\n \"start_time\" : \"2016-11-04T13:24:42.755Z\",\r\n \"start_time_in_millis\" : 1478265882755,\r\n \"failures\" : [ ],\r\n \"shards\" : {\r\n \"total\" : 0,\r\n \"failed\" : 0,\r\n \"successful\" : 0\r\n }\r\n },\r\n {\r\n \"snapshot\" : \"test\",\r\n \"uuid\" : \"RHlVXKe-TKqB47nAoeh3Sw\",\r\n \"version_id\" : 5000099,\r\n \"version\" : \"5.0.0\",\r\n \"indices\" : [\r\n \"brokerhistory_2016-11-04\"\r\n ],\r\n \"state\" : \"IN_PROGRESS\",\r\n \"start_time\" : \"2016-11-04T13:24:42.755Z\",\r\n \"start_time_in_millis\" : 1478265882755,\r\n \"failures\" : [ ],\r\n \"shards\" : {\r\n \"total\" : 0,\r\n \"failed\" : 0,\r\n \"successful\" : 0\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\n3. 
(SUCCESS) $ curl -XGET localhost:9200/_snapshot/brokerhistory/_all?pretty\r\n```json\r\n{\r\n \"snapshots\" : [\r\n {\r\n \"snapshot\" : \"test\",\r\n \"uuid\" : \"RHlVXKe-TKqB47nAoeh3Sw\",\r\n \"version_id\" : 5000099,\r\n \"version\" : \"5.0.0\",\r\n \"indices\" : [\r\n \"brokerhistory_2016-11-04\"\r\n ],\r\n \"state\" : \"SUCCESS\",\r\n \"start_time\" : \"2016-11-04T13:24:42.755Z\",\r\n \"start_time_in_millis\" : 1478265882755,\r\n \"end_time\" : \"2016-11-04T13:25:31.027Z\",\r\n \"end_time_in_millis\" : 1478265931027,\r\n \"duration_in_millis\" : 48272,\r\n \"failures\" : [ ],\r\n \"shards\" : {\r\n \"total\" : 1,\r\n \"failed\" : 0,\r\n \"successful\" : 1\r\n }\r\n }\r\n ]\r\n}\r\n```", "comments": [ { "body": "Thank you for reporting @brino, this is indeed a bug and we will work on a fix.\n", "created_at": "2016-11-04T14:30:24Z" }, { "body": "Cool! Glad to help.\n", "created_at": "2016-11-04T14:34:57Z" }, { "body": "@brino note that snapshot/restore is working fine, it is just the json response returned when calling `_all`. Specifying `_current` works as expected.\n", "created_at": "2016-11-04T14:34:59Z" }, { "body": "Yes, the snapshot definitely seems to be working properly, just that response has a duplicate IN_PROGRESS.\n", "created_at": "2016-11-04T14:36:44Z" } ], "number": 21335, "title": "Duplicate IN_PROGRESS snapshots returned by GET /_snapshot/repo/_all in Elasticsearch 5.0" }
{ "body": "Before, when requesting to get the snapshots in a repository, if `_all` was\r\nspecified for the set of snapshots to get, any in-progress snapshots would\r\nbe returned twice. This commit fixes the issue ensuring that each snapshot,\r\nwhether in-progress or completed, is only returned once when making a call to\r\nget snapshots (GET /_snapshot/{repository_name}/_all).\r\n\r\nCloses #21335", "number": 21340, "review_comments": [ { "body": "revert ;)\n", "created_at": "2016-11-04T18:36:38Z" }, { "body": "guard this with `isAllSnapshots(request.snapshots()) == false`\n", "created_at": "2016-11-04T19:00:58Z" }, { "body": "guard this with `isCurrentSnapshots(request.snapshots()) == false`\n", "created_at": "2016-11-04T19:01:12Z" }, { "body": "if you set `_all` (i.e. `isAllSnapshots(request.snapshots()) == true`), you still want the current snapshots too.\n", "created_at": "2016-11-04T19:14:33Z" }, { "body": "done\n", "created_at": "2016-11-04T19:14:37Z" }, { "body": "Thinking about this some more, I like the idea of being able to specify `_current` in addition to other snapshots. I can imagine a user wanting to know the state of snapshots starting with \"2016-11-*\" and the \"current\" one. In this case, we would treat `_current` as a regex that resolves to whatever the currently running snapshots are. What do you think about that?\n", "created_at": "2016-11-04T19:29:03Z" }, { "body": "I pushed 14e12067cc1da984c6786e22c0f5892bbba7b41e, let me know what you think\n", "created_at": "2016-11-04T19:46:51Z" }, { "body": "instead of the assertBusy, maybe use a future above with setWaitForCompletion(true).\n", "created_at": "2016-11-07T18:16:16Z" }, { "body": "I pushed e00a2a16758da5e49faa3a6416762a3ab31c2551 to improve this\n", "created_at": "2016-11-07T18:29:18Z" } ], "title": "Fixes get snapshot duplicates when asking for _all" }
{ "commits": [ { "message": "Before, when requesting to get the snapshots in a repository, if `_all` was\nspecified for the set of snapshots to get, any in-progress snapshots would\nbe returned twice. This commit fixes the issue ensuring that each snapshot,\nwhether in-progress or completed, is only returned once when making a call to\nget snapshots (GET /_snapshot/{repository_name}/_all).\n\nCloses #21335" }, { "message": "allow _current to be used in addition to snapshot names/regexes, and don't\nfetch snapshots from disk if only _current is set" }, { "message": "revert delete empty line" }, { "message": "Uses get on the future instead of assertBusy" } ], "files": [ { "diff": "@@ -39,7 +39,7 @@\n \n import java.util.ArrayList;\n import java.util.HashMap;\n-import java.util.LinkedHashSet;\n+import java.util.HashSet;\n import java.util.List;\n import java.util.Map;\n import java.util.Set;\n@@ -80,25 +80,26 @@ protected void masterOperation(final GetSnapshotsRequest request, ClusterState s\n try {\n final String repository = request.repository();\n List<SnapshotInfo> snapshotInfoBuilder = new ArrayList<>();\n- if (isAllSnapshots(request.snapshots())) {\n- snapshotInfoBuilder.addAll(snapshotsService.currentSnapshots(repository));\n- snapshotInfoBuilder.addAll(snapshotsService.snapshots(repository,\n- snapshotsService.snapshotIds(repository),\n- request.ignoreUnavailable()));\n- } else if (isCurrentSnapshots(request.snapshots())) {\n- snapshotInfoBuilder.addAll(snapshotsService.currentSnapshots(repository));\n- } else {\n- final Map<String, SnapshotId> allSnapshotIds = new HashMap<>();\n- for (SnapshotInfo snapshotInfo : snapshotsService.currentSnapshots(repository)) {\n- SnapshotId snapshotId = snapshotInfo.snapshotId();\n- allSnapshotIds.put(snapshotId.getName(), snapshotId);\n- }\n+ final Map<String, SnapshotId> allSnapshotIds = new HashMap<>();\n+ final List<SnapshotId> currentSnapshotIds = new ArrayList<>();\n+ for (SnapshotInfo snapshotInfo : snapshotsService.currentSnapshots(repository)) {\n+ SnapshotId snapshotId = snapshotInfo.snapshotId();\n+ allSnapshotIds.put(snapshotId.getName(), snapshotId);\n+ currentSnapshotIds.add(snapshotId);\n+ }\n+ if (isCurrentSnapshotsOnly(request.snapshots()) == false) {\n for (SnapshotId snapshotId : snapshotsService.snapshotIds(repository)) {\n allSnapshotIds.put(snapshotId.getName(), snapshotId);\n }\n- final Set<SnapshotId> toResolve = new LinkedHashSet<>(); // maintain order\n+ }\n+ final Set<SnapshotId> toResolve = new HashSet<>();\n+ if (isAllSnapshots(request.snapshots())) {\n+ toResolve.addAll(allSnapshotIds.values());\n+ } else {\n for (String snapshotOrPattern : request.snapshots()) {\n- if (Regex.isSimpleMatchPattern(snapshotOrPattern) == false) {\n+ if (GetSnapshotsRequest.CURRENT_SNAPSHOT.equalsIgnoreCase(snapshotOrPattern)) {\n+ toResolve.addAll(currentSnapshotIds);\n+ } else if (Regex.isSimpleMatchPattern(snapshotOrPattern) == false) {\n if (allSnapshotIds.containsKey(snapshotOrPattern)) {\n toResolve.add(allSnapshotIds.get(snapshotOrPattern));\n } else if (request.ignoreUnavailable() == false) {\n@@ -113,12 +114,12 @@ protected void masterOperation(final GetSnapshotsRequest request, ClusterState s\n }\n }\n \n- if (toResolve.isEmpty() && request.ignoreUnavailable() == false) {\n+ if (toResolve.isEmpty() && request.ignoreUnavailable() == false && isCurrentSnapshotsOnly(request.snapshots()) == false) {\n throw new SnapshotMissingException(repository, request.snapshots()[0]);\n }\n-\n- 
snapshotInfoBuilder.addAll(snapshotsService.snapshots(repository, new ArrayList<>(toResolve), request.ignoreUnavailable()));\n }\n+\n+ snapshotInfoBuilder.addAll(snapshotsService.snapshots(repository, new ArrayList<>(toResolve), request.ignoreUnavailable()));\n listener.onResponse(new GetSnapshotsResponse(snapshotInfoBuilder));\n } catch (Exception e) {\n listener.onFailure(e);\n@@ -129,7 +130,7 @@ private boolean isAllSnapshots(String[] snapshots) {\n return (snapshots.length == 0) || (snapshots.length == 1 && GetSnapshotsRequest.ALL_SNAPSHOTS.equalsIgnoreCase(snapshots[0]));\n }\n \n- private boolean isCurrentSnapshots(String[] snapshots) {\n+ private boolean isCurrentSnapshotsOnly(String[] snapshots) {\n return (snapshots.length == 1 && GetSnapshotsRequest.CURRENT_SNAPSHOT.equalsIgnoreCase(snapshots[0]));\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java", "status": "modified" }, { "diff": "@@ -34,7 +34,6 @@\n import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotStatus;\n import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotsStatusResponse;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n-import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptRequest;\n import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptResponse;\n import org.elasticsearch.action.admin.indices.flush.FlushResponse;\n import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse;\n@@ -58,6 +57,7 @@\n import org.elasticsearch.cluster.routing.IndexRoutingTable;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Priority;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n@@ -77,7 +77,6 @@\n import org.elasticsearch.repositories.RepositoryData;\n import org.elasticsearch.repositories.RepositoryException;\n import org.elasticsearch.script.MockScriptEngine;\n-import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.script.StoredScriptsIT;\n import org.elasticsearch.snapshots.mockstore.MockRepository;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n@@ -101,7 +100,6 @@\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.IndexSettings.INDEX_REFRESH_INTERVAL_SETTING;\n-import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAliasesExist;\n@@ -2505,8 +2503,28 @@ public void testGetSnapshotsRequest() throws Exception {\n }\n refresh();\n \n+ // make sure we return only the in-progress snapshot when taking the first snapshot on a clean repository\n+ // take initial snapshot with a block, making sure we only get 1 in-progress snapshot returned\n+ // block a node so the create snapshot operation can remain in progress\n+ final String initialBlockedNode = blockNodeWithIndex(repositoryName, indexName);\n+ ListenableActionFuture<CreateSnapshotResponse> responseListener =\n+ 
client.admin().cluster().prepareCreateSnapshot(repositoryName, \"snap-on-empty-repo\")\n+ .setWaitForCompletion(false)\n+ .setIndices(indexName)\n+ .execute();\n+ waitForBlock(initialBlockedNode, repositoryName, TimeValue.timeValueSeconds(60)); // wait for block to kick in\n+ getSnapshotsResponse = client.admin().cluster()\n+ .prepareGetSnapshots(\"test-repo\")\n+ .setSnapshots(randomFrom(\"_all\", \"_current\", \"snap-on-*\", \"*-on-empty-repo\", \"snap-on-empty-repo\"))\n+ .get();\n+ assertEquals(1, getSnapshotsResponse.getSnapshots().size());\n+ assertEquals(\"snap-on-empty-repo\", getSnapshotsResponse.getSnapshots().get(0).snapshotId().getName());\n+ unblockNode(repositoryName, initialBlockedNode); // unblock node\n+ responseListener.actionGet(TimeValue.timeValueMillis(10000L)); // timeout after 10 seconds\n+ client.admin().cluster().prepareDeleteSnapshot(repositoryName, \"snap-on-empty-repo\").get();\n+\n final int numSnapshots = randomIntBetween(1, 3) + 1;\n- logger.info(\"--> take {} snapshot(s)\", numSnapshots);\n+ logger.info(\"--> take {} snapshot(s)\", numSnapshots - 1);\n final String[] snapshotNames = new String[numSnapshots];\n for (int i = 0; i < numSnapshots - 1; i++) {\n final String snapshotName = randomAsciiOfLength(8).toLowerCase(Locale.ROOT);\n@@ -2538,9 +2556,19 @@ public void testGetSnapshotsRequest() throws Exception {\n \n logger.info(\"--> get all snapshots with a current in-progress\");\n // with ignore unavailable set to true, should not throw an exception\n+ final List<String> snapshotsToGet = new ArrayList<>();\n+ if (randomBoolean()) {\n+ // use _current plus the individual names of the finished snapshots\n+ snapshotsToGet.add(\"_current\");\n+ for (int i = 0; i < numSnapshots - 1; i++) {\n+ snapshotsToGet.add(snapshotNames[i]);\n+ }\n+ } else {\n+ snapshotsToGet.add(\"_all\");\n+ }\n getSnapshotsResponse = client.admin().cluster()\n .prepareGetSnapshots(repositoryName)\n- .addSnapshots(\"_all\")\n+ .setSnapshots(snapshotsToGet.toArray(Strings.EMPTY_ARRAY))\n .get();\n List<String> sortedNames = Arrays.asList(snapshotNames);\n Collections.sort(sortedNames);", "filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java", "status": "modified" } ] }
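The core of the fix above is collecting the in-progress snapshot ids and the ids reported by the repository into a single map keyed by snapshot name before resolving them, so a snapshot that shows up in both sources is only reported once. The standalone sketch below mirrors that idea with simplified stand-in types; it is an illustration of the dedup step, not the actual `TransportGetSnapshotsAction` code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SnapshotDedupSketch {

    /** Simplified stand-in for a snapshot id: a name plus a uuid. */
    record SnapshotId(String name, String uuid) {}

    /**
     * Collects in-progress snapshot ids and ids reported by the repository into one map
     * keyed by name, so a snapshot that appears in both sources is resolved only once.
     */
    static List<SnapshotId> idsToResolve(List<SnapshotId> inProgress, List<SnapshotId> fromRepositoryData) {
        Map<String, SnapshotId> byName = new HashMap<>();
        for (SnapshotId id : inProgress) {
            byName.put(id.name(), id);
        }
        for (SnapshotId id : fromRepositoryData) {
            byName.put(id.name(), id); // same name overwrites instead of duplicating
        }
        return new ArrayList<>(byName.values());
    }

    public static void main(String[] args) {
        SnapshotId running = new SnapshotId("test", "RHlVXKe-TKqB47nAoeh3Sw");
        // The running snapshot is visible both as "current" and in the repository listing.
        List<SnapshotId> toResolve = idsToResolve(List.of(running), List.of(running));
        System.out.println(toResolve.size()); // 1, not 2
    }
}
```

Printing `1` here corresponds to the `_all` response no longer listing the same in-progress snapshot twice.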
{ "body": "**Elasticsearch version**: 5.0.0\n\n**Description of the problem including expected versus actual behavior**:\nWhen executing `_rollover?dry_run`, check for the existence of `new_index` and indicate if a rollover is expected to succeed or fail according to known conditions at execution time.\n\n**Steps to reproduce**:\n1. Setup an index for rollover\n2. Create an empty index having a name matching the next `new_index` after rollover\n3. Execute `_rollover?dry_run`.\n4. Execute `_rollover`.\n\n```\nGET packetbeat/_aliases\n{\n \"packetbeat-000004\": {\n \"aliases\": {\n \"packetbeat\": {}\n }\n }\n}\n```\n\n```\nPUT packetbeat-000005\n{\n \"acknowledged\": true,\n \"shards_acknowledged\": true\n}\n```\n\nNormally this is a 7 day rollover. Set it to 0 day (or remove the condition entirely) to just execute it:\n\n```\nPOST packetbeat/_rollover?dry_run\n{\n \"conditions\": {\n \"max_age\": \"0d\"\n }\n}\n\n{\n \"old_index\": \"packetbeat-000004\",\n \"new_index\": \"packetbeat-000005\",\n \"rolled_over\": false,\n \"dry_run\": true,\n \"acknowledged\": false,\n \"shards_acknowledged\": false,\n \"conditions\": {\n \"[max_age: 0s]\": true\n }\n}\n```\n\nI would have expected the dry run to indicate failure.\n\nThe actual rollover fails as expected:\n\n```\nPOST packetbeat/_rollover\n{\n \"conditions\": {\n \"max_age\": \"0d\"\n }\n}\n\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"index_already_exists_exception\",\n \"reason\": \"index [packetbeat-000005/0FDcEuIQTh6lVvD-l7lkxg] already exists\",\n \"index_uuid\": \"0FDcEuIQTh6lVvD-l7lkxg\",\n \"index\": \"packetbeat-000005\"\n }\n ],\n \"type\": \"index_already_exists_exception\",\n \"reason\": \"index [packetbeat-000005/0FDcEuIQTh6lVvD-l7lkxg] already exists\",\n \"index_uuid\": \"0FDcEuIQTh6lVvD-l7lkxg\",\n \"index\": \"packetbeat-000005\"\n },\n \"status\": 400\n}\n```\n", "comments": [ { "body": "I agree we should try to throw the same exception with `dry_run=true`\n", "created_at": "2016-10-27T15:05:09Z" } ], "number": 21149, "title": "Rollover API: indicate failure in dry run if an index already exists" }
{ "body": "Today we validate the target index name late and therefore don't fail for instance\r\nif the target index already exists and `dry_run=true` was specified. This change\r\nvalidates the index name before we early terminate if dry_run is set.\r\n\r\nCloses #21149", "number": 21330, "review_comments": [ { "body": "this also validates the index name. Is it worth adding a small test to verify that we don't accept index names that are not valid too?\n", "created_at": "2016-11-04T12:08:12Z" }, { "body": "I may not remember correctly but I think the Perl runner needs an empty new line here or it will fail...\n", "created_at": "2016-11-04T12:09:19Z" }, { "body": "we fail too but it passed and I added one... maybe I didn't do a git add... I will check\n", "created_at": "2016-11-04T12:11:08Z" }, { "body": "we unittest this I think we are good but I can add a test to the rest thing just for kicks\n", "created_at": "2016-11-04T12:11:52Z" } ], "title": "Validate the `_rollover` target index name early to also fail if dry_run=true" }
{ "commits": [ { "message": "Validate the `_rollover` target index name early to also fail if dry_run=true\n\nToday we validate the target index name late and therefore don't fail for instance\nif the target index already exists and `dry_run=true` was specified. This change\nvalidates the index name before we early terminate if dry_run is set.\n\nCloses #21149" }, { "message": "Add test that fails on an invalid index name" } ], "files": [ { "diff": "@@ -44,6 +44,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.shard.DocsStats;\n+import org.elasticsearch.indices.IndexAlreadyExistsException;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n@@ -109,16 +110,18 @@ protected void masterOperation(final RolloverRequest rolloverRequest, final Clus\n final String sourceProvidedName = indexMetaData.getSettings().get(IndexMetaData.SETTING_INDEX_PROVIDED_NAME,\n indexMetaData.getIndex().getName());\n final String sourceIndexName = indexMetaData.getIndex().getName();\n+ final String unresolvedName = (rolloverRequest.getNewIndexName() != null)\n+ ? rolloverRequest.getNewIndexName()\n+ : generateRolloverIndexName(sourceProvidedName, indexNameExpressionResolver);\n+ final String rolloverIndexName = indexNameExpressionResolver.resolveDateMathExpression(unresolvedName);\n+ MetaDataCreateIndexService.validateIndexName(rolloverIndexName, state); // will fail if the index already exists\n client.admin().indices().prepareStats(sourceIndexName).clear().setDocs(true).execute(\n new ActionListener<IndicesStatsResponse>() {\n @Override\n public void onResponse(IndicesStatsResponse statsResponse) {\n final Set<Condition.Result> conditionResults = evaluateConditions(rolloverRequest.getConditions(),\n statsResponse.getTotal().getDocs(), metaData.index(sourceIndexName));\n- final String unresolvedName = (rolloverRequest.getNewIndexName() != null)\n- ? rolloverRequest.getNewIndexName()\n- : generateRolloverIndexName(sourceProvidedName, indexNameExpressionResolver);\n- final String rolloverIndexName = indexNameExpressionResolver.resolveDateMathExpression(unresolvedName);\n+\n if (rolloverRequest.isDryRun()) {\n listener.onResponse(\n new RolloverResponse(sourceIndexName, rolloverIndexName, conditionResults, true, false, false, false));", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java", "status": "modified" }, { "diff": "@@ -22,6 +22,10 @@\n \"type\" : \"time\",\n \"description\" : \"Explicit operation timeout\"\n },\n+ \"dry_run\": {\n+ \"type\" : \"boolean\",\n+ \"description\" : \"If set to true the rollover action will only be validated but not actually performed even if a condition matches. 
The default is false\"\n+ },\n \"master_timeout\": {\n \"type\" : \"time\",\n \"description\" : \"Specify timeout for connection to master\"", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/indices.rollover.json", "status": "modified" }, { "diff": "@@ -102,3 +102,56 @@\n - match: { dry_run: false }\n - match: { conditions: { \"[max_docs: 1]\": false } }\n \n+---\n+\"Rollover with dry-run but target index exists\":\n+\n+ - skip:\n+ version: \" - 5.0.0\"\n+ reason: bug fixed in 5.0.1 - dry run was returning just fine even if the index exists\n+\n+ # create index with alias\n+ - do:\n+ indices.create:\n+ index: logs-1\n+ wait_for_active_shards: 1\n+ body:\n+ aliases:\n+ logs_index: {}\n+ logs_search: {}\n+\n+ - do:\n+ indices.create:\n+ index: logs-000002\n+\n+ - do:\n+ catch: /index_already_exists_exception/\n+ indices.rollover:\n+ dry_run: true\n+ alias: \"logs_search\"\n+ wait_for_active_shards: 1\n+ body:\n+ conditions:\n+ max_docs: 1\n+\n+ # also do it without dry_run\n+ - do:\n+ catch: /index_already_exists_exception/\n+ indices.rollover:\n+ dry_run: false\n+ alias: \"logs_search\"\n+ wait_for_active_shards: 1\n+ body:\n+ conditions:\n+ max_docs: 1\n+\n+ - do:\n+ catch: /invalid_index_name_exception/\n+ indices.rollover:\n+ new_index: invalid|index|name\n+ dry_run: true\n+ alias: \"logs_search\"\n+ wait_for_active_shards: 1\n+ body:\n+ conditions:\n+ max_docs: 1\n+", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.rollover/10_basic.yaml", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: \r\n5.0\r\n\r\n**Plugins installed**: \r\nWaiting on more info can backfill once collected.\r\n\r\n**JVM version**: \r\n1.8.x (waiting on confirmation)\r\n\r\n**OS version**:\r\nWaiting on more info\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nRunning a bool query with a range using \"now\" in the query throws an error due to the shard request cache refusing to cache it:\r\n\r\neg the range portion of a sample query:\r\n\r\n```\r\n {\r\n \"range\": {\r\n \"sentAt\": {\r\n \"from\": \"now-2M\",\r\n \"to\": \"now-1M\",\r\n \"include_lower\": true,\r\n \"include_upper\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n }\r\n```\r\nA full example query is here:\r\nhttps://gist.github.com/geekpete/9b9d61dea18a86b47ce6d85d3ed6b839\r\n\r\n**Steps to reproduce**:\r\n 1. Use elasticsearch 5.0\r\n 2. Run a range query (in this case was a bool query that used range\r\n 3. Receive a RemoteTransportException with \"features that prevent cachability are disabled on this context\":\r\n\r\n\r\n**Provide logs (if relevant)**:\r\n```\r\nRemoteTransportException[[ZRykcOY][XXXX:xxxx][indices:data/read/search[phase/fetch/id]]]; nested: FetchPhaseExecutionException[Fetch Failed [Failed to highlight field [message]]]; nested: ElasticsearchParseException[could not read the current timestamp]; nested: IllegalArgumentException[features that prevent cachability are disabled on this context]; \r\n```\r\n\r\nThe new version of Elasticsearch has the Shard Request Cache on by default and it's suggest that to avoid this error when using \"now\" in queries that the cache be disabled per-request, index-wide or cluster-wide:\r\nhttps://www.elastic.co/guide/en/elasticsearch/reference/5.0/shard-request-cache.html\r\n\r\nAn alternative to this behaviour might be to automatically not cache the requests that cannot be cached without throwing an error. Perhaps provide detail in the logs at debug level (or whatever level is suitable) whenever this occurs per request \"features detected that prevent cachability\" so that users can still debug when this is occurring such as when performance isn't good and the user expects that the requests are being cached.\r\n\r\n\r\n", "comments": [ { "body": "This failure is not intentional. Rather, it indicates that there is a bug since something is consuming the current timestamp after Elasticsearch made a decision about the cachability of the request. The goal is indeed to simply not cache requests that make use of the current timestamp.\n", "created_at": "2016-11-03T08:15:03Z" }, { "body": "we added this safety to make sure we don't consume non-deterministic sources after we cache the query. now this is using stuff in the fetch phase which is ok but we still trip the safety. I will work on a fix\n", "created_at": "2016-11-03T08:22:26Z" }, { "body": "I tried to reproduce this with highlighting but I can't. 
Is it possible to get some stacktrace?\n", "created_at": "2016-11-03T09:44:18Z" }, { "body": "``` DIFF\ndiff --git a/core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java b/core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java\nindex ac6bc9a..ed63ea1 100644\n--- a/core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java\n+++ b/core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java\n@@ -2974,12 +2974,12 @@ public class HighlighterSearchIT extends ESIntegTestCase {\n .preTags(\"<x>\")\n .postTags(\"</x>\")\n ).setQuery(QueryBuilders.boolQuery().must(\n- QueryBuilders.rangeQuery(\"d\").gte(\"now-7d/d\").lte(\"now\").includeLower(true).includeUpper(true).boost(1.0f))\n+ QueryBuilders.rangeQuery(\"d\").gte(\"now-12h\").lte(\"now\").includeLower(true).includeUpper(true).boost(1.0f))\n .should(QueryBuilders.termQuery(\"field\", \"hello\")))\n .get();\n\n assertSearchResponse(r1);\n- assertThat(r1.getHits().getTotalHits(), equalTo(3L));\n+ assertThat(r1.getHits().getTotalHits(), equalTo(1L));\n assertHighlight(r1, 0, \"field\", 0, 1,\n equalTo(\"<x>hello</x> world\"));\n }\n```\n\nwith this patch I can reproduce it.. working on a fix\n", "created_at": "2016-11-04T06:31:31Z" } ], "number": 21295, "title": "Strict shard request cache in 5.0 should automatically not cache non-deterministic queries rather than throw an error" }
{ "body": "Today we still have a leftover from older percolators where Lucene\r\nquery instances where created ahead of time and rewritten later.\r\nThis `LateParsingQuery` was resolving `now()` when it's really used which we\r\ndon't need anymore. As a side-effect this failed to execute some highlighting\r\nqueries when they get rewritten since at that point `now` access it not permitted\r\nanymore to prevent bugs when queries get cached.\r\n\r\nCloses #21295", "number": 21328, "review_comments": [ { "body": "for my education, why are the rewrite calls gone here? or maybe the right question is why we were calling rewrite before?\n", "created_at": "2016-11-04T11:49:08Z" }, { "body": "why did we have to change the range for this test?\n", "created_at": "2016-11-04T11:49:50Z" }, { "body": "well today if you rewrite you get what used to be `rewrite(null).rewrite(null)` since we had an indirection. it's gone so if you really rewrite and you have a termrange you get a MTQ which is not expected\n", "created_at": "2016-11-04T11:50:35Z" }, { "body": "well that test was added to reproduce this but I couldn't but the range as not intersecting with the index and therefore it never got to the critical code\n", "created_at": "2016-11-04T11:51:18Z" }, { "body": "ah right cause the LateParsingQuery is gone, makes sense thanks!\n", "created_at": "2016-11-04T11:51:46Z" }, { "body": "sounds good\n", "created_at": "2016-11-04T11:52:47Z" } ], "title": "Remove LateParsingQuery to prevent timestamp access after context is frozen" }
{ "commits": [ { "message": "Remove LateParsingQuery to prevent timestamp access after context is frozen\n\nToday we still have a leftover from older percolators where lucene\nquery instances where created ahead of time and rewritten later.\nThis `LateParsingQuery` was resolving `now()` when it's really used which we\ndon't need anymore. As a sideeffect this failed to execute some highlighting\nqueries when they get rewritten since at that point `now` access it not permitted\nanymore to prevent bugs when queries get cached.\n\nCloses #21295" }, { "message": "fix RangeQueryBuilderTests and remove late parsing from LegacyDateFieldMapper too" }, { "message": "fix line length and remove RangeQueryBuilderTests from checkstyle suppression" }, { "message": "don't call rewrite on LegacyNumericRangeQuery - LateParsingQuery is gone" } ], "files": [ { "diff": "@@ -794,7 +794,6 @@\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]MoreLikeThisQueryBuilderTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]MultiMatchQueryBuilderTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]RandomQueryBuilder.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]RangeQueryBuilderTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]SpanMultiTermQueryBuilderTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]SpanNotQueryBuilderTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]support[/\\\\]QueryInnerHitsTests.java\" checks=\"LineLength\" />", "filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml", "status": "modified" }, { "diff": "@@ -161,71 +161,6 @@ public Mapper.Builder<?,?> parse(String name, Map<String, Object> node, ParserCo\n }\n \n public static final class DateFieldType extends MappedFieldType {\n-\n- final class LateParsingQuery extends Query {\n-\n- final Object lowerTerm;\n- final Object upperTerm;\n- final boolean includeLower;\n- final boolean includeUpper;\n- final DateTimeZone timeZone;\n- final DateMathParser forcedDateParser;\n- private QueryShardContext queryShardContext;\n-\n- public LateParsingQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper,\n- DateTimeZone timeZone, DateMathParser forcedDateParser, QueryShardContext queryShardContext) {\n- this.lowerTerm = lowerTerm;\n- this.upperTerm = upperTerm;\n- this.includeLower = includeLower;\n- this.includeUpper = includeUpper;\n- this.timeZone = timeZone;\n- this.forcedDateParser = forcedDateParser;\n- this.queryShardContext = queryShardContext;\n- }\n-\n- @Override\n- public Query rewrite(IndexReader reader) throws IOException {\n- Query rewritten = super.rewrite(reader);\n- if (rewritten != this) {\n- return rewritten;\n- }\n- return innerRangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser, queryShardContext);\n- }\n-\n- // Even though we only cache rewritten queries it is good to let all queries 
implement hashCode() and equals():\n- @Override\n- public boolean equals(Object o) {\n- if (this == o) return true;\n- if (sameClassAs(o) == false) return false;\n-\n- LateParsingQuery that = (LateParsingQuery) o;\n- if (includeLower != that.includeLower) return false;\n- if (includeUpper != that.includeUpper) return false;\n- if (lowerTerm != null ? !lowerTerm.equals(that.lowerTerm) : that.lowerTerm != null) return false;\n- if (upperTerm != null ? !upperTerm.equals(that.upperTerm) : that.upperTerm != null) return false;\n- if (timeZone != null ? !timeZone.equals(that.timeZone) : that.timeZone != null) return false;\n-\n- return true;\n- }\n-\n- @Override\n- public int hashCode() {\n- return Objects.hash(classHash(), lowerTerm, upperTerm, includeLower, includeUpper, timeZone);\n- }\n-\n- @Override\n- public String toString(String s) {\n- final StringBuilder sb = new StringBuilder();\n- return sb.append(name()).append(':')\n- .append(includeLower ? '[' : '{')\n- .append((lowerTerm == null) ? \"*\" : lowerTerm.toString())\n- .append(\" TO \")\n- .append((upperTerm == null) ? \"*\" : upperTerm.toString())\n- .append(includeUpper ? ']' : '}')\n- .toString();\n- }\n- }\n-\n protected FormatDateTimeFormatter dateTimeFormatter;\n protected DateMathParser dateMathParser;\n \n@@ -317,7 +252,7 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower\n public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper,\n @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser, QueryShardContext context) {\n failIfNotIndexed();\n- return new LateParsingQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser, context);\n+ return innerRangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser, context);\n }\n \n Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper,", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java", "status": "modified" }, { "diff": "@@ -176,70 +176,6 @@ public static class TypeParser implements Mapper.TypeParser {\n \n public static class DateFieldType extends NumberFieldType {\n \n- final class LateParsingQuery extends Query {\n-\n- final Object lowerTerm;\n- final Object upperTerm;\n- final boolean includeLower;\n- final boolean includeUpper;\n- final DateTimeZone timeZone;\n- final DateMathParser forcedDateParser;\n- private QueryShardContext context;\n-\n- public LateParsingQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, DateTimeZone timeZone,\n- DateMathParser forcedDateParser, QueryShardContext context) {\n- this.lowerTerm = lowerTerm;\n- this.upperTerm = upperTerm;\n- this.includeLower = includeLower;\n- this.includeUpper = includeUpper;\n- this.timeZone = timeZone;\n- this.forcedDateParser = forcedDateParser;\n- this.context = context;\n- }\n-\n- @Override\n- public Query rewrite(IndexReader reader) throws IOException {\n- Query rewritten = super.rewrite(reader);\n- if (rewritten != this) {\n- return rewritten;\n- }\n- return innerRangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser, context);\n- }\n-\n- // Even though we only cache rewritten queries it is good to let all queries implement hashCode() and equals():\n- @Override\n- public boolean equals(Object o) {\n- if (this == o) return true;\n- if (sameClassAs(o) == false) return false;\n-\n- LateParsingQuery that = (LateParsingQuery) 
o;\n- if (includeLower != that.includeLower) return false;\n- if (includeUpper != that.includeUpper) return false;\n- if (lowerTerm != null ? !lowerTerm.equals(that.lowerTerm) : that.lowerTerm != null) return false;\n- if (upperTerm != null ? !upperTerm.equals(that.upperTerm) : that.upperTerm != null) return false;\n- if (timeZone != null ? !timeZone.equals(that.timeZone) : that.timeZone != null) return false;\n-\n- return true;\n- }\n-\n- @Override\n- public int hashCode() {\n- return Objects.hash(classHash(), lowerTerm, upperTerm, includeLower, includeUpper, timeZone);\n- }\n-\n- @Override\n- public String toString(String s) {\n- final StringBuilder sb = new StringBuilder();\n- return sb.append(name()).append(':')\n- .append(includeLower ? '[' : '{')\n- .append((lowerTerm == null) ? \"*\" : lowerTerm.toString())\n- .append(\" TO \")\n- .append((upperTerm == null) ? \"*\" : upperTerm.toString())\n- .append(includeUpper ? ']' : '}')\n- .toString();\n- }\n- }\n-\n protected FormatDateTimeFormatter dateTimeFormatter = Defaults.DATE_TIME_FORMATTER;\n protected TimeUnit timeUnit = Defaults.TIME_UNIT;\n protected DateMathParser dateMathParser = new DateMathParser(dateTimeFormatter);\n@@ -371,7 +307,7 @@ public FieldStats.Date stats(IndexReader reader) throws IOException {\n \n public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper,\n @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser, QueryShardContext context) {\n- return new LateParsingQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser, context);\n+ return innerRangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser, context);\n }\n \n private Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper,", "filename": "core/src/main/java/org/elasticsearch/index/mapper/LegacyDateFieldMapper.java", "status": "modified" }, { "diff": "@@ -260,6 +260,10 @@ public String timeZone() {\n return this.timeZone == null ? null : this.timeZone.getID();\n }\n \n+ DateTimeZone getDateTimeZone() { // for testing\n+ return timeZone;\n+ }\n+\n /**\n * In case of format field, we can parse the from/to fields using this time format\n */\n@@ -278,6 +282,13 @@ public String format() {\n return this.format == null ? 
null : this.format.format();\n }\n \n+ DateMathParser getForceDateParser() { // pkg private for testing\n+ if (this.format != null) {\n+ return new DateMathParser(this.format);\n+ }\n+ return null;\n+ }\n+\n @Override\n protected void doXContent(XContentBuilder builder, Params params) throws IOException {\n builder.startObject(NAME);\n@@ -440,19 +451,13 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n MappedFieldType mapper = context.fieldMapper(this.fieldName);\n if (mapper != null) {\n if (mapper instanceof LegacyDateFieldMapper.DateFieldType) {\n- DateMathParser forcedDateParser = null;\n- if (this.format != null) {\n- forcedDateParser = new DateMathParser(this.format);\n- }\n+\n query = ((LegacyDateFieldMapper.DateFieldType) mapper).rangeQuery(from, to, includeLower, includeUpper,\n- timeZone, forcedDateParser, context);\n+ timeZone, getForceDateParser(), context);\n } else if (mapper instanceof DateFieldMapper.DateFieldType) {\n- DateMathParser forcedDateParser = null;\n- if (this.format != null) {\n- forcedDateParser = new DateMathParser(this.format);\n- }\n+\n query = ((DateFieldMapper.DateFieldType) mapper).rangeQuery(from, to, includeLower, includeUpper,\n- timeZone, forcedDateParser, context);\n+ timeZone, getForceDateParser(), context);\n } else {\n if (timeZone != null) {\n throw new QueryShardException(context, \"[range] time_zone can not be applied to non date field [\"", "filename": "core/src/main/java/org/elasticsearch/index/query/RangeQueryBuilder.java", "status": "modified" }, { "diff": "@@ -256,7 +256,7 @@ public void testHourFormat() throws Exception {\n assertThat(((LegacyLongFieldMapper.CustomLongNumericField) doc.rootDoc().getField(\"date_field\")).numericAsString(), equalTo(Long.toString(new DateTime(TimeValue.timeValueHours(10).millis(), DateTimeZone.UTC).getMillis())));\n \n LegacyNumericRangeQuery<Long> rangeQuery = (LegacyNumericRangeQuery<Long>) defaultMapper.mappers().smartNameFieldMapper(\"date_field\").fieldType()\n- .rangeQuery(\"10:00:00\", \"11:00:00\", true, true, context).rewrite(null);\n+ .rangeQuery(\"10:00:00\", \"11:00:00\", true, true, context);\n assertThat(rangeQuery.getMax(), equalTo(new DateTime(TimeValue.timeValueHours(11).millis(), DateTimeZone.UTC).getMillis() + 999));\n assertThat(rangeQuery.getMin(), equalTo(new DateTime(TimeValue.timeValueHours(10).millis(), DateTimeZone.UTC).getMillis()));\n }\n@@ -283,7 +283,7 @@ public void testDayWithoutYearFormat() throws Exception {\n assertThat(((LegacyLongFieldMapper.CustomLongNumericField) doc.rootDoc().getField(\"date_field\")).numericAsString(), equalTo(Long.toString(new DateTime(TimeValue.timeValueHours(34).millis(), DateTimeZone.UTC).getMillis())));\n \n LegacyNumericRangeQuery<Long> rangeQuery = (LegacyNumericRangeQuery<Long>) defaultMapper.mappers().smartNameFieldMapper(\"date_field\").fieldType()\n- .rangeQuery(\"Jan 02 10:00:00\", \"Jan 02 11:00:00\", true, true, context).rewrite(null);\n+ .rangeQuery(\"Jan 02 10:00:00\", \"Jan 02 11:00:00\", true, true, context);\n assertThat(rangeQuery.getMax(), equalTo(new DateTime(TimeValue.timeValueHours(35).millis() + 999, DateTimeZone.UTC).getMillis()));\n assertThat(rangeQuery.getMin(), equalTo(new DateTime(TimeValue.timeValueHours(34).millis(), DateTimeZone.UTC).getMillis()));\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/LegacyDateFieldMapperTests.java", "status": "modified" }, { "diff": "@@ -29,8 +29,11 @@\n import org.elasticsearch.common.ParseFieldMatcher;\n import 
org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.lucene.BytesRefs;\n+import org.elasticsearch.index.mapper.DateFieldMapper;\n+import org.elasticsearch.index.mapper.LegacyDateFieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MappedFieldType.Relation;\n+import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.test.AbstractQueryTestCase;\n import org.joda.time.DateTime;\n@@ -118,7 +121,8 @@ protected Map<String, RangeQueryBuilder> getAlternateVersions() {\n \n @Override\n protected void doAssertLuceneQuery(RangeQueryBuilder queryBuilder, Query query, SearchContext context) throws IOException {\n- if (getCurrentTypes().length == 0 || (queryBuilder.fieldName().equals(DATE_FIELD_NAME) == false && queryBuilder.fieldName().equals(INT_FIELD_NAME) == false)) {\n+ if (getCurrentTypes().length == 0 || (queryBuilder.fieldName().equals(DATE_FIELD_NAME) == false\n+ && queryBuilder.fieldName().equals(INT_FIELD_NAME) == false)) {\n assertThat(query, instanceOf(TermRangeQuery.class));\n TermRangeQuery termRangeQuery = (TermRangeQuery) query;\n assertThat(termRangeQuery.getField(), equalTo(queryBuilder.fieldName()));\n@@ -127,7 +131,68 @@ protected void doAssertLuceneQuery(RangeQueryBuilder queryBuilder, Query query,\n assertThat(termRangeQuery.includesLower(), equalTo(queryBuilder.includeLower()));\n assertThat(termRangeQuery.includesUpper(), equalTo(queryBuilder.includeUpper()));\n } else if (queryBuilder.fieldName().equals(DATE_FIELD_NAME)) {\n- //we can't properly test unmapped dates because LateParsingQuery is package private\n+ assertThat(query, either(instanceOf(LegacyNumericRangeQuery.class)).or(instanceOf(PointRangeQuery.class)));\n+ MapperService mapperService = context.getQueryShardContext().getMapperService();\n+ MappedFieldType mappedFieldType = mapperService.fullName(DATE_FIELD_NAME);\n+ final Long fromInMillis;\n+ final Long toInMillis;\n+ // we have to normalize the incoming value into milliseconds since it could be literally anything\n+ if (mappedFieldType instanceof LegacyDateFieldMapper.DateFieldType) {\n+ fromInMillis = queryBuilder.from() == null ? null :\n+ ((LegacyDateFieldMapper.DateFieldType) mappedFieldType).parseToMilliseconds(queryBuilder.from(),\n+ queryBuilder.includeLower(),\n+ queryBuilder.getDateTimeZone(),\n+ queryBuilder.getForceDateParser(), context.getQueryShardContext());\n+ toInMillis = queryBuilder.to() == null ? null :\n+ ((LegacyDateFieldMapper.DateFieldType) mappedFieldType).parseToMilliseconds(queryBuilder.to(),\n+ queryBuilder.includeUpper(),\n+ queryBuilder.getDateTimeZone(),\n+ queryBuilder.getForceDateParser(), context.getQueryShardContext());\n+ } else if (mappedFieldType instanceof DateFieldMapper.DateFieldType) {\n+ fromInMillis = queryBuilder.from() == null ? null :\n+ ((DateFieldMapper.DateFieldType) mappedFieldType).parseToMilliseconds(queryBuilder.from(),\n+ queryBuilder.includeLower(),\n+ queryBuilder.getDateTimeZone(),\n+ queryBuilder.getForceDateParser(), context.getQueryShardContext());\n+ toInMillis = queryBuilder.to() == null ? 
null :\n+ ((DateFieldMapper.DateFieldType) mappedFieldType).parseToMilliseconds(queryBuilder.to(),\n+ queryBuilder.includeUpper(),\n+ queryBuilder.getDateTimeZone(),\n+ queryBuilder.getForceDateParser(), context.getQueryShardContext());\n+ } else {\n+ fromInMillis = toInMillis = null;\n+ fail(\"unexpected mapped field type: [\" + mappedFieldType.getClass() + \"] \" + mappedFieldType.toString());\n+ }\n+\n+ if (query instanceof LegacyNumericRangeQuery) {\n+ LegacyNumericRangeQuery numericRangeQuery = (LegacyNumericRangeQuery) query;\n+ assertThat(numericRangeQuery.getField(), equalTo(queryBuilder.fieldName()));\n+ assertThat(numericRangeQuery.getMin(), equalTo(fromInMillis));\n+ assertThat(numericRangeQuery.getMax(), equalTo(toInMillis));\n+ assertThat(numericRangeQuery.includesMin(), equalTo(queryBuilder.includeLower()));\n+ assertThat(numericRangeQuery.includesMax(), equalTo(queryBuilder.includeUpper()));\n+ } else {\n+ Long min = fromInMillis;\n+ Long max = toInMillis;\n+ long minLong, maxLong;\n+ if (min == null) {\n+ minLong = Long.MIN_VALUE;\n+ } else {\n+ minLong = min.longValue();\n+ if (queryBuilder.includeLower() == false && minLong != Long.MAX_VALUE) {\n+ minLong++;\n+ }\n+ }\n+ if (max == null) {\n+ maxLong = Long.MAX_VALUE;\n+ } else {\n+ maxLong = max.longValue();\n+ if (queryBuilder.includeUpper() == false && maxLong != Long.MIN_VALUE) {\n+ maxLong--;\n+ }\n+ }\n+ assertEquals(LongPoint.newRangeQuery(DATE_FIELD_NAME, minLong, maxLong), query);\n+ }\n } else if (queryBuilder.fieldName().equals(INT_FIELD_NAME)) {\n assertThat(query, either(instanceOf(LegacyNumericRangeQuery.class)).or(instanceOf(PointRangeQuery.class)));\n if (query instanceof LegacyNumericRangeQuery) {\n@@ -157,11 +222,7 @@ protected void doAssertLuceneQuery(RangeQueryBuilder queryBuilder, Query query,\n maxInt--;\n }\n }\n- try {\n assertEquals(IntPoint.newRangeQuery(INT_FIELD_NAME, minInt, maxInt), query);\n- }catch(AssertionError e) {\n- throw e;\n- }\n }\n } else {\n throw new UnsupportedOperationException();\n@@ -228,7 +289,7 @@ public void testDateRangeQueryFormat() throws IOException {\n \" }\\n\" +\n \" }\\n\" +\n \"}\";\n- Query parsedQuery = parseQuery(query).toQuery(createShardContext()).rewrite(null);\n+ Query parsedQuery = parseQuery(query).toQuery(createShardContext());\n assertThat(parsedQuery, either(instanceOf(LegacyNumericRangeQuery.class)).or(instanceOf(PointRangeQuery.class)));\n \n if (parsedQuery instanceof LegacyNumericRangeQuery) {\n@@ -256,8 +317,7 @@ public void testDateRangeQueryFormat() throws IOException {\n \" }\\n\" +\n \" }\\n\" +\n \"}\";\n- Query rewrittenQuery = parseQuery(invalidQuery).toQuery(createShardContext());\n- expectThrows(ElasticsearchParseException.class, () -> rewrittenQuery.rewrite(null));\n+ expectThrows(ElasticsearchParseException.class, () -> parseQuery(invalidQuery).toQuery(createShardContext()));\n }\n \n public void testDateRangeBoundaries() throws IOException {\n@@ -270,7 +330,7 @@ public void testDateRangeBoundaries() throws IOException {\n \" }\\n\" +\n \" }\\n\" +\n \"}\\n\";\n- Query parsedQuery = parseQuery(query).toQuery(createShardContext()).rewrite(null);\n+ Query parsedQuery = parseQuery(query).toQuery(createShardContext());\n assertThat(parsedQuery, either(instanceOf(LegacyNumericRangeQuery.class)).or(instanceOf(PointRangeQuery.class)));\n if (parsedQuery instanceof LegacyNumericRangeQuery) {\n LegacyNumericRangeQuery rangeQuery = (LegacyNumericRangeQuery) parsedQuery;\n@@ -297,7 +357,7 @@ public void testDateRangeBoundaries() throws IOException 
{\n \" }\\n\" +\n \" }\\n\" +\n \"}\";\n- parsedQuery = parseQuery(query).toQuery(createShardContext()).rewrite(null);\n+ parsedQuery = parseQuery(query).toQuery(createShardContext());\n assertThat(parsedQuery, either(instanceOf(LegacyNumericRangeQuery.class)).or(instanceOf(PointRangeQuery.class)));\n if (parsedQuery instanceof LegacyNumericRangeQuery) {\n LegacyNumericRangeQuery rangeQuery = (LegacyNumericRangeQuery) parsedQuery;\n@@ -330,7 +390,7 @@ public void testDateRangeQueryTimezone() throws IOException {\n \" }\\n\" +\n \"}\";\n QueryShardContext context = createShardContext();\n- Query parsedQuery = parseQuery(query).toQuery(context).rewrite(null);\n+ Query parsedQuery = parseQuery(query).toQuery(context);\n if (parsedQuery instanceof PointRangeQuery) {\n // TODO what can we assert\n } else {", "filename": "core/src/test/java/org/elasticsearch/index/query/RangeQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -2974,12 +2974,12 @@ public void testHighlightQueryRewriteDatesWithNow() throws Exception {\n .preTags(\"<x>\")\n .postTags(\"</x>\")\n ).setQuery(QueryBuilders.boolQuery().must(\n- QueryBuilders.rangeQuery(\"d\").gte(\"now-7d/d\").lte(\"now\").includeLower(true).includeUpper(true).boost(1.0f))\n+ QueryBuilders.rangeQuery(\"d\").gte(\"now-12h\").lte(\"now\").includeLower(true).includeUpper(true).boost(1.0f))\n .should(QueryBuilders.termQuery(\"field\", \"hello\")))\n .get();\n \n assertSearchResponse(r1);\n- assertThat(r1.getHits().getTotalHits(), equalTo(3L));\n+ assertThat(r1.getHits().getTotalHits(), equalTo(1L));\n assertHighlight(r1, 0, \"field\", 0, 1,\n equalTo(\"<x>hello</x> world\"));\n }", "filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java", "status": "modified" } ] }
{ "body": "Replication request may arrive at a replica before the replica's node has processed a required mapping update. In these cases the TransportReplicationAction will retry the request once a new cluster state arrives. Sadly that retry logic failed to call `ReplicationRequest#onRetry`, causing duplicates in the append only use case.\n\nThis PR fixes this and also the test which missed the check. I also added an assertion which would have helped finding the source of the duplicates.\n\nThis was discovered by https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-unix-compatibility/os=opensuse/174/\n\nThe test also surfaces an issue with mapping updates on the master (they are potentially performed on a live index :( ) but this will be fixed in another PR.\n\nRelates #20211\n", "comments": [ { "body": "bummer that we missed that one... glad we have it fixed now. \n", "created_at": "2016-10-30T18:56:07Z" }, { "body": "LGTM except of the assert\n", "created_at": "2016-10-30T18:57:58Z" }, { "body": "LGTM\n", "created_at": "2016-10-31T10:04:08Z" }, { "body": "test this please\n", "created_at": "2016-10-31T10:04:16Z" }, { "body": "thx @s1monw . I'll give it a few hours on CI before back porting.\n", "created_at": "2016-10-31T12:44:13Z" }, { "body": "++ thanks @bleskes \n", "created_at": "2016-10-31T14:25:00Z" }, { "body": "This is now pushed to 5.0.1 & 5.1.0 as well\n", "created_at": "2016-11-01T10:09:03Z" } ], "number": 21189, "title": "Retrying replication requests on replica doesn't call `onRetry`" }
{ "body": "When processing mapping updates, the master currently creates an `IndexService` and uses its mapper service to do the hard work. However, if the master is also a data node and it already has an instance of `IndexService`, we currently reuse the `MapperService` of that instance. Sadly, since mapping updates change the in-memory objects, this means that a mapping change that can be rejected later on during cluster state publishing will leave a side effect on the index in question, bypassing the cluster state safety mechanism.\r\n\r\nThis commit removes this optimization and replaces the `IndexService` creation with a direct creation of a `MapperService`. \r\n\r\nAlso, this fixes an issue where multiple mapping updates from multiple shards for the same field caused unneeded cluster state publishing, as the current code always created a new cluster state.\r\n\r\nThis was discovered while researching #21189 \r\n", "number": 21306, "review_comments": [ { "body": "should we put it in the map before we perform the merging, so that it is closed in the finally block if the merging fails too?\n", "created_at": "2016-11-03T13:33:36Z" }, { "body": "can you indent one level more? otherwise it's not obvious what belongs to the if statement and what belongs to the inner block\n", "created_at": "2016-11-03T13:34:39Z" }, { "body": "out of curiosity, does this optimization buy much or would eg. cluster state diffs notice that the mappings did not change?\n", "created_at": "2016-11-03T13:35:44Z" }, { "body": "Can you add documentation that this mapper service may only be used for administrative purposes, and not eg. actually parsing documents?\n", "created_at": "2016-11-03T13:36:15Z" }, { "body": "add docs?\n", "created_at": "2016-11-03T13:36:51Z" }, { "body": "good catch. will move\n", "created_at": "2016-11-03T15:51:23Z" }, { "body": "sure\n", "created_at": "2016-11-03T15:52:47Z" }, { "body": "diffs will indeed optimize the network transmission time away but we still force a global sync of all nodes of the cluster - the master will publish this new state (with a 2 phase commit - so two rounds) and wait for the nodes to process it. This means that if some node is a bit busy it slows things down for nothing.\n", "created_at": "2016-11-03T15:54:24Z" }, { "body": "yep. added.\n", "created_at": "2016-11-03T15:58:01Z" }, { "body": "done\n", "created_at": "2016-11-03T16:07:15Z" }, { "body": "++ I think this can buy us quite a fair bit of processing savings... yet, I wonder if we can improve the CS builder to detect this automatically (for sure not here)?\n", "created_at": "2016-11-03T16:22:35Z" }, { "body": "I wonder if we should throw an AssertionError here instead... it should not happen and should not be caught?\n", "created_at": "2016-11-03T16:23:06Z" }, { "body": "I must have missed it but don't we have to close this now? I don't see where it's closed... also on exception we should close all opened ones?\n", "created_at": "2016-11-03T16:25:11Z" }, { "body": "can we add a test that actually uses a custom field mapper and ensure that it's registered here?\n", "created_at": "2016-11-03T16:26:12Z" }, { "body": "it is closed in the finally block: `IOUtils.close(indexMapperServices.values());`\n", "created_at": "2016-11-03T16:52:06Z" }, { "body": "good question. I opted for UOE as otherwise I would have to either construct a `QueryShardContext` or return null that will explode later. 
I think this is the simplest?\n", "created_at": "2016-11-13T13:50:44Z" }, { "body": "I added a test\n", "created_at": "2016-11-13T19:13:52Z" }, { "body": "your answer doesn't make sense, just use `throw new AssertionError(\"no index query shard context available\");` instead?\n", "created_at": "2016-11-14T15:28:32Z" }, { "body": "`assertSame`?\n", "created_at": "2016-11-14T15:29:42Z" }, { "body": "👍 \n", "created_at": "2016-11-14T15:30:00Z" } ], "title": "Uncommitted mapping updates should not affect existing indices" }
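A minimal Java sketch of the resource-handling pattern this PR moves to, using hypothetical stand-in types (the real code builds one `MapperService` per index and closes them with `IOUtils.close`, as shown in the diff below): validation runs against private, throwaway per-index services, and all of them are closed in a finally block so a rejected mapping update can never leave a side effect on a live index.

```java
// A hypothetical, simplified sketch of the resource pattern behind the fix:
// build one throwaway, closeable "mapper service" per index, validate against
// those private copies only, and close every one of them in a finally block.
// The real code uses MapperService and IOUtils.close(...); these types are stand-ins.
import java.io.Closeable;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class ThrowawayMapperServicesSketch {

    /** Stand-in for a per-index mapper service that owns closeable resources. */
    static class FakeMapperService implements Closeable {
        final String index;

        FakeMapperService(String index) {
            this.index = index;
        }

        void merge(String mappingUpdate) {
            // validate and merge the update against this private copy only
        }

        @Override
        public void close() {
            System.out.println("closed throwaway mapper service for " + index);
        }
    }

    static void validateMappingUpdate(String[] indices, String mappingUpdate) throws IOException {
        Map<String, FakeMapperService> services = new HashMap<>();
        try {
            for (String index : indices) {
                // register in the map *before* merging so a failed merge is still closed
                FakeMapperService service = services.computeIfAbsent(index, FakeMapperService::new);
                service.merge(mappingUpdate);
            }
        } finally {
            for (FakeMapperService service : services.values()) {
                service.close();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        validateMappingUpdate(new String[] {"index1", "index2"}, "{\"properties\":{}}");
    }
}
```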
{ "commits": [ { "message": "WIP" }, { "message": "fix RareClusterStateIT.java" }, { "message": "add tests" }, { "message": "review feedback" }, { "message": "Merge remote-tracking branch 'upstream/master' into mapping_dont_reuse_index_service" }, { "message": "add a test with plugins" }, { "message": "Merge remote-tracking branch 'upstream/master' into mapping_dont_reuse_index_service" }, { "message": "review feedback" } ], "files": [ { "diff": "@@ -32,7 +32,7 @@ public class PutMappingClusterStateUpdateRequest extends IndicesClusterStateUpda\n \n private boolean updateAllTypes = false;\n \n- PutMappingClusterStateUpdateRequest() {\n+ public PutMappingClusterStateUpdateRequest() {\n \n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingClusterStateUpdateRequest.java", "status": "modified" }, { "diff": "@@ -20,9 +20,9 @@\n package org.elasticsearch.cluster.metadata;\n \n import com.carrotsearch.hppc.cursors.ObjectCursor;\n-\n import org.apache.logging.log4j.message.ParameterizedMessage;\n import org.apache.logging.log4j.util.Supplier;\n+import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.admin.indices.mapping.put.PutMappingClusterStateUpdateRequest;\n import org.elasticsearch.cluster.AckedClusterStateTaskListener;\n@@ -34,7 +34,6 @@\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Priority;\n-import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.inject.Inject;\n@@ -51,10 +50,8 @@\n import java.util.ArrayList;\n import java.util.Collections;\n import java.util.HashMap;\n-import java.util.HashSet;\n import java.util.List;\n import java.util.Map;\n-import java.util.Set;\n /**\n * Service responsible for submitting mapping changes\n */\n@@ -215,62 +212,58 @@ class PutMappingExecutor implements ClusterStateTaskExecutor<PutMappingClusterSt\n @Override\n public BatchResult<PutMappingClusterStateUpdateRequest> execute(ClusterState currentState,\n List<PutMappingClusterStateUpdateRequest> tasks) throws Exception {\n- Set<Index> indicesToClose = new HashSet<>();\n+ Map<Index, MapperService> indexMapperServices = new HashMap<>();\n BatchResult.Builder<PutMappingClusterStateUpdateRequest> builder = BatchResult.builder();\n try {\n- // precreate incoming indices;\n for (PutMappingClusterStateUpdateRequest request : tasks) {\n try {\n for (Index index : request.indices()) {\n final IndexMetaData indexMetaData = currentState.metaData().getIndexSafe(index);\n- if (indicesService.hasIndex(indexMetaData.getIndex()) == false) {\n- // if the index does not exists we create it once, add all types to the mapper service and\n- // close it later once we are done with mapping update\n- indicesToClose.add(indexMetaData.getIndex());\n- IndexService indexService = indicesService.createIndex(indexMetaData, Collections.emptyList());\n+ if (indexMapperServices.containsKey(indexMetaData.getIndex()) == false) {\n+ MapperService mapperService = indicesService.createIndexMapperService(indexMetaData);\n+ indexMapperServices.put(index, mapperService);\n // add mappings for all types, we need them for cross-type validation\n for (ObjectCursor<MappingMetaData> mapping : indexMetaData.getMappings().values()) {\n- indexService.mapperService().merge(mapping.value.type(), 
mapping.value.source(),\n+ mapperService.merge(mapping.value.type(), mapping.value.source(),\n MapperService.MergeReason.MAPPING_RECOVERY, request.updateAllTypes());\n }\n }\n }\n- currentState = applyRequest(currentState, request);\n+ currentState = applyRequest(currentState, request, indexMapperServices);\n builder.success(request);\n } catch (Exception e) {\n builder.failure(request, e);\n }\n }\n return builder.build(currentState);\n } finally {\n- for (Index index : indicesToClose) {\n- indicesService.removeIndex(index, \"created for mapping processing\");\n- }\n+ IOUtils.close(indexMapperServices.values());\n }\n }\n \n- private ClusterState applyRequest(ClusterState currentState, PutMappingClusterStateUpdateRequest request) throws IOException {\n+ private ClusterState applyRequest(ClusterState currentState, PutMappingClusterStateUpdateRequest request,\n+ Map<Index, MapperService> indexMapperServices) throws IOException {\n String mappingType = request.type();\n CompressedXContent mappingUpdateSource = new CompressedXContent(request.source());\n final MetaData metaData = currentState.metaData();\n- final List<Tuple<IndexService, IndexMetaData>> updateList = new ArrayList<>();\n+ final List<IndexMetaData> updateList = new ArrayList<>();\n for (Index index : request.indices()) {\n- IndexService indexService = indicesService.indexServiceSafe(index);\n+ MapperService mapperService = indexMapperServices.get(index);\n // IMPORTANT: always get the metadata from the state since it get's batched\n // and if we pull it from the indexService we might miss an update etc.\n final IndexMetaData indexMetaData = currentState.getMetaData().getIndexSafe(index);\n \n- // this is paranoia... just to be sure we use the exact same indexService and metadata tuple on the update that\n+ // this is paranoia... just to be sure we use the exact same metadata tuple on the update that\n // we used for the validation, it makes this mechanism little less scary (a little)\n- updateList.add(new Tuple<>(indexService, indexMetaData));\n+ updateList.add(indexMetaData);\n // try and parse it (no need to add it here) so we can bail early in case of parsing exception\n DocumentMapper newMapper;\n- DocumentMapper existingMapper = indexService.mapperService().documentMapper(request.type());\n+ DocumentMapper existingMapper = mapperService.documentMapper(request.type());\n if (MapperService.DEFAULT_MAPPING.equals(request.type())) {\n // _default_ types do not go through merging, but we do test the new settings. 
Also don't apply the old default\n- newMapper = indexService.mapperService().parse(request.type(), mappingUpdateSource, false);\n+ newMapper = mapperService.parse(request.type(), mappingUpdateSource, false);\n } else {\n- newMapper = indexService.mapperService().parse(request.type(), mappingUpdateSource, existingMapper == null);\n+ newMapper = mapperService.parse(request.type(), mappingUpdateSource, existingMapper == null);\n if (existingMapper != null) {\n // first, simulate: just call merge and ignore the result\n existingMapper.merge(newMapper.mapping(), request.updateAllTypes());\n@@ -286,9 +279,9 @@ private ClusterState applyRequest(ClusterState currentState, PutMappingClusterSt\n for (ObjectCursor<MappingMetaData> mapping : indexMetaData.getMappings().values()) {\n String parentType = newMapper.parentFieldMapper().type();\n if (parentType.equals(mapping.value.type()) &&\n- indexService.mapperService().getParentTypes().contains(parentType) == false) {\n+ mapperService.getParentTypes().contains(parentType) == false) {\n throw new IllegalArgumentException(\"can't add a _parent field that points to an \" +\n- \"already existing type, that isn't already a parent\");\n+ \"already existing type, that isn't already a parent\");\n }\n }\n }\n@@ -306,24 +299,25 @@ private ClusterState applyRequest(ClusterState currentState, PutMappingClusterSt\n throw new InvalidTypeNameException(\"Document mapping type name can't start with '_', found: [\" + mappingType + \"]\");\n }\n MetaData.Builder builder = MetaData.builder(metaData);\n- for (Tuple<IndexService, IndexMetaData> toUpdate : updateList) {\n+ boolean updated = false;\n+ for (IndexMetaData indexMetaData : updateList) {\n // do the actual merge here on the master, and update the mapping source\n // we use the exact same indexService and metadata we used to validate above here to actually apply the update\n- final IndexService indexService = toUpdate.v1();\n- final IndexMetaData indexMetaData = toUpdate.v2();\n final Index index = indexMetaData.getIndex();\n+ final MapperService mapperService = indexMapperServices.get(index);\n CompressedXContent existingSource = null;\n- DocumentMapper existingMapper = indexService.mapperService().documentMapper(mappingType);\n+ DocumentMapper existingMapper = mapperService.documentMapper(mappingType);\n if (existingMapper != null) {\n existingSource = existingMapper.mappingSource();\n }\n- DocumentMapper mergedMapper = indexService.mapperService().merge(mappingType, mappingUpdateSource, MapperService.MergeReason.MAPPING_UPDATE, request.updateAllTypes());\n+ DocumentMapper mergedMapper = mapperService.merge(mappingType, mappingUpdateSource, MapperService.MergeReason.MAPPING_UPDATE, request.updateAllTypes());\n CompressedXContent updatedSource = mergedMapper.mappingSource();\n \n if (existingSource != null) {\n if (existingSource.equals(updatedSource)) {\n // same source, no changes, ignore it\n } else {\n+ updated = true;\n // use the merged mapping source\n if (logger.isDebugEnabled()) {\n logger.debug(\"{} update_mapping [{}] with source [{}]\", index, mergedMapper.type(), updatedSource);\n@@ -333,6 +327,7 @@ private ClusterState applyRequest(ClusterState currentState, PutMappingClusterSt\n \n }\n } else {\n+ updated = true;\n if (logger.isDebugEnabled()) {\n logger.debug(\"{} create_mapping [{}] with source [{}]\", index, mappingType, updatedSource);\n } else if (logger.isInfoEnabled()) {\n@@ -343,13 +338,16 @@ private ClusterState applyRequest(ClusterState currentState, PutMappingClusterSt\n 
IndexMetaData.Builder indexMetaDataBuilder = IndexMetaData.builder(indexMetaData);\n // Mapping updates on a single type may have side-effects on other types so we need to\n // update mapping metadata on all types\n- for (DocumentMapper mapper : indexService.mapperService().docMappers(true)) {\n+ for (DocumentMapper mapper : mapperService.docMappers(true)) {\n indexMetaDataBuilder.putMapping(new MappingMetaData(mapper.mappingSource()));\n }\n builder.put(indexMetaDataBuilder);\n }\n-\n- return ClusterState.builder(currentState).metaData(builder).build();\n+ if (updated) {\n+ return ClusterState.builder(currentState).metaData(builder).build();\n+ } else {\n+ return currentState;\n+ }\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.index.cache.query.IndexQueryCache;\n import org.elasticsearch.index.cache.query.QueryCache;\n import org.elasticsearch.index.engine.EngineFactory;\n+import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.IndexSearcherWrapper;\n import org.elasticsearch.index.shard.IndexingOperationListener;\n@@ -359,6 +360,16 @@ public IndexService newIndexService(NodeEnvironment environment, IndexService.Sh\n searchOperationListeners, indexOperationListeners);\n }\n \n+ /**\n+ * creates a new mapper service to do administrative work like mapping updates. This *should not* be used for document parsing.\n+ * doing so will result in an exception.\n+ */\n+ public MapperService newIndexMapperService(MapperRegistry mapperRegistry) throws IOException {\n+ return new MapperService(indexSettings, analysisRegistry.build(indexSettings),\n+ new SimilarityService(indexSettings, similarities), mapperRegistry,\n+ () -> { throw new UnsupportedOperationException(\"no index query shard context available\"); });\n+ }\n+\n /**\n * Forces a certain query cache to use instead of the default one. 
If this is set\n * and query caching is not disabled with {@code index.queries.cache.enabled}, then", "filename": "core/src/main/java/org/elasticsearch/index/IndexModule.java", "status": "modified" }, { "diff": "@@ -93,7 +93,6 @@\n public class IndexService extends AbstractIndexComponent implements IndicesClusterStateService.AllocatedIndex<IndexShard> {\n \n private final IndexEventListener eventListener;\n- private final IndexAnalyzers indexAnalyzers;\n private final IndexFieldDataService indexFieldData;\n private final BitsetFilterCache bitsetFilterCache;\n private final NodeEnvironment nodeEnv;\n@@ -142,12 +141,11 @@ public IndexService(IndexSettings indexSettings, NodeEnvironment nodeEnv,\n List<IndexingOperationListener> indexingOperationListeners) throws IOException {\n super(indexSettings);\n this.indexSettings = indexSettings;\n- this.indexAnalyzers = registry.build(indexSettings);\n this.similarityService = similarityService;\n- this.mapperService = new MapperService(indexSettings, indexAnalyzers, similarityService, mapperRegistry,\n+ this.mapperService = new MapperService(indexSettings, registry.build(indexSettings), similarityService, mapperRegistry,\n // we parse all percolator queries as they would be parsed on shard 0\n () -> newQueryShardContext(0, null, () -> {\n- throw new IllegalArgumentException(\"Percolator queries are not allowed to use the curent timestamp\");\n+ throw new IllegalArgumentException(\"Percolator queries are not allowed to use the current timestamp\");\n }));\n this.indexFieldData = new IndexFieldDataService(indexSettings, indicesFieldDataCache, circuitBreakerService, mapperService);\n this.shardStoreDeleter = shardStoreDeleter;\n@@ -225,7 +223,7 @@ public IndexFieldDataService fieldData() {\n }\n \n public IndexAnalyzers getIndexAnalyzers() {\n- return this.indexAnalyzers;\n+ return this.mapperService.getIndexAnalyzers();\n }\n \n public MapperService mapperService() {\n@@ -249,7 +247,7 @@ public synchronized void close(final String reason, boolean delete) throws IOExc\n }\n }\n } finally {\n- IOUtils.close(bitsetFilterCache, indexCache, indexFieldData, indexAnalyzers, refreshTask, fsyncTask);\n+ IOUtils.close(bitsetFilterCache, indexCache, indexFieldData, mapperService, refreshTask, fsyncTask);\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/index/IndexService.java", "status": "modified" }, { "diff": "@@ -44,6 +44,7 @@\n import org.elasticsearch.indices.TypeMissingException;\n import org.elasticsearch.indices.mapper.MapperRegistry;\n \n+import java.io.Closeable;\n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Collection;\n@@ -62,7 +63,7 @@\n import static java.util.Collections.unmodifiableMap;\n import static org.elasticsearch.common.collect.MapBuilder.newMapBuilder;\n \n-public class MapperService extends AbstractIndexComponent {\n+public class MapperService extends AbstractIndexComponent implements Closeable {\n \n /**\n * The reason why a mapping is being merged.\n@@ -624,6 +625,11 @@ public Set<String> getParentTypes() {\n return parentTypes;\n }\n \n+ @Override\n+ public void close() throws IOException {\n+ indexAnalyzers.close();\n+ }\n+\n /**\n * @return Whether a field is a metadata field.\n */", "filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperService.java", "status": "modified" }, { "diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.indices;\n \n import com.carrotsearch.hppc.cursors.ObjectCursor;\n-\n import org.apache.logging.log4j.Logger;\n import 
org.apache.logging.log4j.message.ParameterizedMessage;\n import org.apache.lucene.index.DirectoryReader;\n@@ -430,6 +429,21 @@ private synchronized IndexService createIndexService(final String reason, IndexM\n indicesQueriesRegistry, clusterService, client, indicesQueryCache, mapperRegistry, indicesFieldDataCache);\n }\n \n+ /**\n+ * creates a new mapper service for the given index, in order to do administrative work like mapping updates.\n+ * This *should not* be used for document parsing. Doing so will result in an exception.\n+ *\n+ * Note: the returned {@link MapperService} should be closed when unneeded.\n+ */\n+ public synchronized MapperService createIndexMapperService(IndexMetaData indexMetaData) throws IOException {\n+ final Index index = indexMetaData.getIndex();\n+ final Predicate<String> indexNameMatcher = (indexExpression) -> indexNameExpressionResolver.matchesIndex(index.getName(), indexExpression, clusterService.state());\n+ final IndexSettings idxSettings = new IndexSettings(indexMetaData, this.settings, indexNameMatcher, indexScopeSetting);\n+ final IndexModule indexModule = new IndexModule(idxSettings, indexStoreConfig, analysisRegistry);\n+ pluginsService.onIndexModule(indexModule);\n+ return indexModule.newIndexMapperService(mapperRegistry);\n+ }\n+\n /**\n * This method verifies that the given {@code metaData} holds sane values to create an {@link IndexService}.\n * This method tries to update the meta data of the created {@link IndexService} if the given {@code metaDataUpdate} is different from the given {@code metaData}.", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java", "status": "modified" }, { "diff": "@@ -18,11 +18,17 @@\n */\n package org.elasticsearch.cluster.metadata;\n \n+import org.elasticsearch.action.admin.indices.mapping.put.PutMappingClusterStateUpdateRequest;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.service.ClusterService;\n+import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n \n+import java.util.Collections;\n+\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.is;\n \n@@ -63,4 +69,34 @@ public void testAddExtraChildTypePointingToAlreadyParentExistingType() throws Ex\n assertThat(documentMapper.parentFieldMapper().active(), is(true));\n }\n \n+ public void testMappingClusterStateUpdateDoesntChangeExistingIndices() throws Exception {\n+ final IndexService indexService = createIndex(\"test\", client().admin().indices().prepareCreate(\"test\").addMapping(\"type\"));\n+ final CompressedXContent currentMapping = indexService.mapperService().documentMapper(\"type\").mappingSource();\n+\n+ final MetaDataMappingService mappingService = getInstanceFromNode(MetaDataMappingService.class);\n+ final ClusterService clusterService = getInstanceFromNode(ClusterService.class);\n+ // TODO - it will be nice to get a random mapping generator\n+ final PutMappingClusterStateUpdateRequest request = new PutMappingClusterStateUpdateRequest().type(\"type\");\n+ request.source(\"{ \\\"properties\\\" { \\\"field\\\": { \\\"type\\\": \\\"string\\\" }}}\");\n+ mappingService.putMappingExecutor.execute(clusterService.state(), Collections.singletonList(request));\n+ assertThat(indexService.mapperService().documentMapper(\"type\").mappingSource(), 
equalTo(currentMapping));\n+ }\n+\n+ public void testClusterStateIsNotChangedWithIdenticalMappings() throws Exception {\n+ createIndex(\"test\", client().admin().indices().prepareCreate(\"test\").addMapping(\"type\"));\n+\n+ final MetaDataMappingService mappingService = getInstanceFromNode(MetaDataMappingService.class);\n+ final ClusterService clusterService = getInstanceFromNode(ClusterService.class);\n+ final PutMappingClusterStateUpdateRequest request = new PutMappingClusterStateUpdateRequest().type(\"type\");\n+ request.source(\"{ \\\"properties\\\" { \\\"field\\\": { \\\"type\\\": \\\"string\\\" }}}\");\n+ ClusterState result = mappingService.putMappingExecutor.execute(clusterService.state(), Collections.singletonList(request))\n+ .resultingState;\n+\n+ assertFalse(result != clusterService.state());\n+\n+ ClusterState result2 = mappingService.putMappingExecutor.execute(result, Collections.singletonList(request))\n+ .resultingState;\n+\n+ assertSame(result, result2);\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataMappingServiceTests.java", "status": "modified" }, { "diff": "@@ -35,16 +35,27 @@\n import org.elasticsearch.gateway.LocalAllocateDangledIndices;\n import org.elasticsearch.gateway.MetaStateService;\n import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.IndexModule;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.IndexSettings;\n+import org.elasticsearch.index.mapper.Mapper;\n+import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.mapper.StringFieldMapper;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.ShardPath;\n+import org.elasticsearch.index.similarity.BM25SimilarityProvider;\n import org.elasticsearch.indices.IndicesService.ShardDeletionCheckResult;\n+import org.elasticsearch.plugins.MapperPlugin;\n+import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n import org.elasticsearch.test.IndexSettingsModule;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.Collections;\n+import java.util.Map;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.TimeUnit;\n \n@@ -53,6 +64,7 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.not;\n \n public class IndicesServiceTests extends ESSingleNodeTestCase {\n@@ -65,6 +77,30 @@ public NodeEnvironment getNodeEnvironment() {\n return getInstanceFromNode(NodeEnvironment.class);\n }\n \n+ @Override\n+ protected Collection<Class<? extends Plugin>> getPlugins() {\n+ ArrayList<Class<? 
extends Plugin>> plugins = new ArrayList<>(super.getPlugins());\n+ plugins.add(TestPlugin.class);\n+ return plugins;\n+ }\n+\n+ public static class TestPlugin extends Plugin implements MapperPlugin {\n+\n+ public TestPlugin() {}\n+\n+ @Override\n+ public Map<String, Mapper.TypeParser> getMappers() {\n+ return Collections.singletonMap(\"fake-mapper\", new StringFieldMapper.TypeParser());\n+ }\n+\n+ @Override\n+ public void onIndexModule(IndexModule indexModule) {\n+ super.onIndexModule(indexModule);\n+ indexModule.addSimilarity(\"fake-similarity\", BM25SimilarityProvider::new);\n+ }\n+ }\n+\n+\n @Override\n protected boolean resetNodeAfterTest() {\n return true;\n@@ -328,4 +364,26 @@ public void onFailure(Throwable e) {\n }\n }\n \n+ /**\n+ * Tests that teh {@link MapperService} created by {@link IndicesService#createIndexMapperService(IndexMetaData)} contains\n+ * custom types and similarities registered by plugins\n+ */\n+ public void testStandAloneMapperServiceWithPlugins() throws IOException {\n+ final String indexName = \"test\";\n+ final Index index = new Index(indexName, UUIDs.randomBase64UUID());\n+ final IndicesService indicesService = getIndicesService();\n+ final Settings idxSettings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexModule.SIMILARITY_SETTINGS_PREFIX + \".test.type\", \"fake-similarity\")\n+ .build();\n+ final IndexMetaData indexMetaData = new IndexMetaData.Builder(index.getName())\n+ .settings(idxSettings)\n+ .numberOfShards(1)\n+ .numberOfReplicas(0)\n+ .build();\n+ MapperService mapperService = indicesService.createIndexMapperService(indexMetaData);\n+ assertNotNull(mapperService.documentMapperParser().parserContext(\"type\").typeParser(\"fake-mapper\"));\n+ assertThat(mapperService.documentMapperParser().parserContext(\"type\").getSimilarity(\"test\"),\n+ instanceOf(BM25SimilarityProvider.class));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/indices/IndicesServiceTests.java", "status": "modified" }, { "diff": "@@ -43,6 +43,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.discovery.DiscoverySettings;\n+import org.elasticsearch.discovery.zen.ElectMasterService;\n import org.elasticsearch.gateway.GatewayAllocator;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexService;\n@@ -324,7 +325,12 @@ public void testDelayedMappingPropagationOnReplica() throws Exception {\n // Here we want to test that everything goes well if the mappings that\n // are needed for a document are not available on the replica at the\n // time of indexing it\n- final List<String> nodeNames = internalCluster().startNodesAsync(2).get();\n+ final List<String> nodeNames = internalCluster().startNodesAsync(2,\n+ Settings.builder()\n+ .put(ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey(), 2)\n+ .put(DiscoverySettings.COMMIT_TIMEOUT_SETTING.getKey(), \"30s\") // explicitly set so it won't default to publish timeout\n+ .put(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey(), \"0s\") // don't wait post commit as we are blocking things by design\n+ .build()).get();\n assertFalse(client().admin().cluster().prepareHealth().setWaitForNodes(\"2\").get().isTimedOut());\n \n final String master = internalCluster().getMasterName();", "filename": "core/src/test/java/org/elasticsearch/indices/state/RareClusterStateIT.java", "status": "modified" } ] }
{ "body": "Reported at https://discuss.elastic.co/t/5-0-cant-have-terms-query-within-a-filter-aggregation/64768\r\n\r\nHere is a recreation:\r\n\r\n```\r\nPUT test\r\n{\r\n \"mappings\": {\r\n \"post\": {\r\n \"properties\": {\r\n \"mentionIDs\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n },\r\n \"user\": {\r\n \"properties\": {\r\n \"notifications\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT test/user/USR|CLIENTID|1234\r\n{\r\n \"notifications\": [\"abc\"]\r\n}\r\n\r\nPUT test/post/POST|4321\r\n{\r\n \"mentionIDs\": [\"abc\"]\r\n}\r\n\r\nGET _search\r\n{\r\n \"aggs\": {\r\n \"itemsNotify\": {\r\n \"filter\": {\r\n \"terms\": {\r\n \"mentionIDs\": {\r\n \"index\": \"users\",\r\n \"type\": \"user\",\r\n \"id\": \"USR|CLIENTID|1234\",\r\n \"path\": \"notifications\",\r\n \"routing\": \"CLIENTID\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nwhich fails with:\r\n\r\n```\r\n[elasticsearch] [2016-11-03T11:28:45,410][WARN ][r.suppressed ] path: /_search, params: {}\r\n[elasticsearch] org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed\r\n[elasticsearch] \tat org.elasticsearch.action.search.AbstractSearchAsyncAction.onFirstPhaseResult(AbstractSearchAsyncAction.java:204) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.action.search.AbstractSearchAsyncAction$1.onFailure(AbstractSearchAsyncAction.java:139) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:51) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:980) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:1081) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1059) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.transport.TransportService$6.onFailure(TransportService.java:585) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.onFailure(ThreadContext.java:490) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_66-ea]\r\n[elasticsearch] \tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_66-ea]\r\n[elasticsearch] \tat java.lang.Thread.run(Thread.java:745) [?:1.8.0_66-ea]\r\n[elasticsearch] Caused by: org.elasticsearch.transport.RemoteTransportException: [EFHp1JN][127.0.0.1:9300][indices:data/read/search[phase/query]]\r\n[elasticsearch] Caused by: java.lang.UnsupportedOperationException: query must be rewritten first\r\n[elasticsearch] \tat org.elasticsearch.index.query.TermsQueryBuilder.doToQuery(TermsQueryBuilder.java:317) 
~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:95) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.search.aggregations.bucket.filter.FilterAggregatorFactory.<init>(FilterAggregatorFactory.java:45) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder.doBuild(FilterAggregationBuilder.java:75) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.search.aggregations.AbstractAggregationBuilder.build(AbstractAggregationBuilder.java:126) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.search.aggregations.AggregatorFactories$Builder.build(AggregatorFactories.java:208) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.search.SearchService.parseSource(SearchService.java:730) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.search.SearchService.createContext(SearchService.java:554) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:530) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:265) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:300) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:297) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.transport.TransportService$6.doRun(TransportService.java:574) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:504) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] \tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n```", "comments": [ { "body": "@colings86 could you take a look?\n", "created_at": "2016-11-03T10:33:12Z" }, { "body": "@jpountz @colings86 I will look\n", "created_at": "2016-11-03T10:35:52Z" } ], "number": 21301, "title": "Terms lookup filter in filter aggregation complains that the query is not rewritten" }
{ "body": "`FilterAggregationBuilder` today fails to rewrite queries, which causes failures\r\nif a query uses a client, for instance to look up terms, since such a query must be rewritten first.\r\nThis change also ensures that if a client is used from the rewrite context we mark the query as\r\nnon-cacheable.\r\n\r\nCloses #21301", "number": 21303, "review_comments": [ { "body": "++ we have an issue for this here: https://github.com/elastic/elasticsearch/issues/17676\n", "created_at": "2016-11-03T13:23:34Z" } ], "title": "Rewrite Queries/Filter in FilterAggregationBuilder and ensure client usage marks query as non-cacheable" }
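A minimal Java sketch of the two ideas described above, using hypothetical stand-in types rather than the real Elasticsearch query infrastructure: a terms-lookup style query builder only becomes usable after `rewrite`, and any access to the client during that rewrite flips a flag so the result is not cached.

```java
// A hypothetical, minimal sketch of the two ideas in this PR: (1) rewrite a query
// builder before turning it into a query, and (2) treat any client access during
// rewriting as a signal that the result must not be cached. These types are
// stand-ins, not the real Elasticsearch query builders or shard context.
public class RewriteBeforeToQuerySketch {

    /** Stand-in for a rewrite/shard context that tracks cacheability. */
    static class FakeContext {
        private boolean cacheable = true;

        Object getClient() {
            cacheable = false; // a lookup hits another index, so the result must not be cached
            return new Object();
        }

        boolean isCacheable() {
            return cacheable;
        }
    }

    interface FakeQueryBuilder {
        /** Default: nothing to rewrite. Lookup-style builders override this. */
        default FakeQueryBuilder rewrite(FakeContext context) {
            return this;
        }

        String toQuery(FakeContext context);
    }

    /** A terms-lookup style builder: only usable after rewrite. */
    static FakeQueryBuilder termsLookup() {
        return new FakeQueryBuilder() {
            @Override
            public FakeQueryBuilder rewrite(FakeContext context) {
                context.getClient(); // fetch the lookup terms
                return ctx -> "terms:[abc]"; // rewritten into a self-contained builder
            }

            @Override
            public String toQuery(FakeContext context) {
                throw new UnsupportedOperationException("query must be rewritten first");
            }
        };
    }

    public static void main(String[] args) {
        FakeContext context = new FakeContext();
        FakeQueryBuilder filter = termsLookup();
        // what the filter aggregation was missing: rewrite before toQuery
        String query = filter.rewrite(context).toQuery(context);
        System.out.println(query + ", cacheable=" + context.isCacheable());
    }
}
```

Running `main` prints the rewritten query and `cacheable=false`, which mirrors why the filter aggregation has to rewrite its filter before calling `toQuery` and why lookup-backed requests must bypass the request cache.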
{ "commits": [ { "message": "Rewrite Queries/Filter in FilterAggregationBuilder and ensure client usage marks query as non-cachable\n\n`FilterAggregationBuilder` today misses to rewrite queries which causes failures\nif a query that uses a client for instance to lookup terms since it must be rewritten first.\nThis change also ensures that if a client is used from the rewrite context we mark the query as\nnon-cacheable.\n\nCloses #21301" }, { "message": "split sections for BWC" }, { "message": "fix MTQ tests" }, { "message": "ensure every shard has at least one value" } ], "files": [ { "diff": "@@ -67,7 +67,7 @@ public QueryRewriteContext(IndexSettings indexSettings, MapperService mapperServ\n /**\n * Returns a clients to fetch resources from local or remove nodes.\n */\n- public final Client getClient() {\n+ public Client getClient() {\n return client;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/query/QueryRewriteContext.java", "status": "modified" }, { "diff": "@@ -421,4 +421,9 @@ public final long nowInMillis() {\n return super.nowInMillis();\n }\n \n+ @Override\n+ public Client getClient() {\n+ failIfFrozen(); // we somebody uses a terms filter with lookup for instance can't be cached...\n+ return super.getClient();\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java", "status": "modified" }, { "diff": "@@ -72,7 +72,9 @@ protected void doWriteTo(StreamOutput out) throws IOException {\n @Override\n protected AggregatorFactory<?> doBuild(AggregationContext context, AggregatorFactory<?> parent,\n AggregatorFactories.Builder subFactoriesBuilder) throws IOException {\n- return new FilterAggregatorFactory(name, type, filter, context, parent, subFactoriesBuilder, metaData);\n+ // TODO this sucks we need a rewrite phase for aggregations too\n+ final QueryBuilder rewrittenFilter = QueryBuilder.rewriteQuery(filter, context.searchContext().getQueryShardContext());\n+ return new FilterAggregatorFactory(name, type, rewrittenFilter, context, parent, subFactoriesBuilder, metaData);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -301,6 +301,11 @@ public void testItemFromXContent() throws IOException {\n assertEquals(expectedItem, newItem);\n }\n \n+ @Override\n+ protected boolean isCachable(MoreLikeThisQueryBuilder queryBuilder) {\n+ return queryBuilder.likeItems().length == 0; // items are always fetched\n+ }\n+\n public void testFromJson() throws IOException {\n String json =\n \"{\\n\" +", "filename": "core/src/test/java/org/elasticsearch/index/query/MoreLikeThisQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -289,5 +289,12 @@ public void testGeo() throws Exception {\n assertEquals(\"Geo fields do not support exact searching, use dedicated geo queries instead: [mapped_geo_point]\",\n e.getMessage());\n }\n+\n+ @Override\n+ protected boolean isCachable(TermsQueryBuilder queryBuilder) {\n+ // even though we use a terms lookup here we do this during rewrite and that means we are cachable on toQuery\n+ // that's why we return true here all the time\n+ return super.isCachable(queryBuilder);\n+ }\n }\n ", "filename": "core/src/test/java/org/elasticsearch/index/query/TermsQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -362,7 +362,7 @@ public void testQueryRewriteDatesWithNow() throws Exception {\n public void testCanCache() throws Exception {\n 
assertAcked(client().admin().indices().prepareCreate(\"index\").addMapping(\"type\", \"s\", \"type=date\")\n .setSettings(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true, IndexMetaData.SETTING_NUMBER_OF_SHARDS,\n- 5, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ 2, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n .get());\n indexRandom(true, client().prepareIndex(\"index\", \"type\", \"1\").setRouting(\"1\").setSource(\"s\", \"2016-03-19\"),\n client().prepareIndex(\"index\", \"type\", \"2\").setRouting(\"1\").setSource(\"s\", \"2016-03-20\"),\n@@ -411,7 +411,7 @@ public void testCanCache() throws Exception {\n assertThat(client().admin().indices().prepareStats(\"index\").setRequestCache(true).get().getTotal().getRequestCache().getMissCount(),\n equalTo(0L));\n \n- // If the request has an aggregation containng now we should not cache\n+ // If the request has an aggregation containing now we should not cache\n final SearchResponse r4 = client().prepareSearch(\"index\").setSearchType(SearchType.QUERY_THEN_FETCH).setSize(0)\n .setRequestCache(true).setQuery(QueryBuilders.rangeQuery(\"s\").gte(\"2016-03-20\").lte(\"2016-03-26\"))\n .addAggregation(filter(\"foo\", QueryBuilders.rangeQuery(\"s\").from(\"now-10y\").to(\"now\"))).get();\n@@ -441,7 +441,7 @@ public void testCanCache() throws Exception {\n assertThat(client().admin().indices().prepareStats(\"index\").setRequestCache(true).get().getTotal().getRequestCache().getHitCount(),\n equalTo(0L));\n assertThat(client().admin().indices().prepareStats(\"index\").setRequestCache(true).get().getTotal().getRequestCache().getMissCount(),\n- equalTo(5L));\n+ equalTo(2L));\n }\n \n public void testCacheWithFilteredAlias() {", "filename": "core/src/test/java/org/elasticsearch/indices/IndicesRequestCacheIT.java", "status": "modified" }, { "diff": "@@ -0,0 +1,80 @@\n+setup:\n+ - do:\n+ indices.create:\n+ index: test\n+ body:\n+ settings:\n+ number_of_shards: 1\n+ number_of_replicas: 0\n+ mappings:\n+ post:\n+ properties:\n+ mentions:\n+ type: keyword\n+ user:\n+ properties:\n+ notifications:\n+ type: keyword\n+\n+ - do:\n+ index:\n+ index: test\n+ type: test\n+ id: foo|bar|baz0\n+ body: { \"notifications\" : [\"abc\"] }\n+\n+ - do:\n+ index:\n+ index: test\n+ type: test\n+ id: foo|bar|baz1\n+ body: { \"mentions\" : [\"abc\"] }\n+\n+ - do:\n+ indices.refresh: {}\n+\n+---\n+\"Filter aggs with terms lookup ensure not cached\":\n+ - skip:\n+ version: \" - 5.0.0\"\n+ reason: This using filter aggs that needs rewriting, this was fixed in 5.0.1\n+\n+ - do:\n+ search:\n+ size: 0\n+ request_cache: true\n+ body: {\"aggs\": { \"itemsNotify\": { \"filter\": { \"terms\": { \"mentions\": { \"index\": \"test\", \"type\": \"test\", \"id\": \"foo|bar|baz0\", \"path\": \"notifications\"}}}, \"aggs\": { \"mentions\" : {\"terms\" : { \"field\" : \"mentions\" }}}}}}\n+\n+ # validate result\n+ - match: { hits.total: 2 }\n+ - match: { aggregations.itemsNotify.doc_count: 1 }\n+ - length: { aggregations.itemsNotify.mentions.buckets: 1 }\n+ - match: { aggregations.itemsNotify.mentions.buckets.0.key: \"abc\" }\n+ # we are using a lookup - this should not cache\n+ - do:\n+ indices.stats: { index: test, metric: request_cache}\n+ - match: { _shards.total: 1 }\n+ - match: { _all.total.request_cache.hit_count: 0 }\n+ - match: { _all.total.request_cache.miss_count: 0 }\n+ - is_true: indices.test\n+\n+---\n+\"Filter aggs no lookup and ensure it's cached\":\n+ # now run without lookup and ensure we get cached or at least do the lookup\n+ - do:\n+ search:\n+ size: 
0\n+ request_cache: true\n+ body: {\"aggs\": { \"itemsNotify\": { \"filter\": { \"terms\": { \"mentions\": [\"abc\"]}}, \"aggs\": { \"mentions\" : {\"terms\" : { \"field\" : \"mentions\" }}}}}}\n+\n+ - match: { hits.total: 2 }\n+ - match: { aggregations.itemsNotify.doc_count: 1 }\n+ - length: { aggregations.itemsNotify.mentions.buckets: 1 }\n+ - match: { aggregations.itemsNotify.mentions.buckets.0.key: \"abc\" }\n+ - do:\n+ indices.stats: { index: test, metric: request_cache}\n+ - match: { _shards.total: 1 }\n+ - match: { _all.total.request_cache.hit_count: 0 }\n+ - match: { _all.total.request_cache.miss_count: 1 }\n+ - is_true: indices.test\n+", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/50_filter.yaml", "status": "added" } ] }
{ "body": "`elasticsearch-plugin` spits out the usage information if you ask it to remove a plugin that isn't installed. It really shouldn't do that. Example\r\n\r\n```\r\nmanyair:x-plugins manybubbles$ backwards/elasticsearch-5.0.0/bin/elasticsearch-plugin remove x-pack\r\n-> Removing x-pack...\r\nA tool for managing installed elasticsearch plugins\r\n\r\nCommands\r\n--------\r\nlist - Lists installed elasticsearch plugins\r\ninstall - Install a plugin\r\nremove - Removes a plugin from elasticsearch\r\n\r\nNon-option arguments:\r\ncommand \r\n\r\nOption Description \r\n------ ----------- \r\n-h, --help show help \r\n-s, --silent show minimal output\r\n-v, --verbose show verbose output\r\nERROR: plugin x-pack not found; run 'elasticsearch-plugin list' to get list of installed plugins\r\n```\r\n\r\nI couldn't see the error because of all the usage information....", "comments": [], "number": 21250, "title": "elasticsearch-plugin spits out the usage information if you ask it to remove a plugin that isn't installed" }
{ "body": "The usage information for `elasticsearch-plugin` is quite verbose and makes the actual error message that is shown when trying to remove a non-existing plugin hard to spot. This changes the error code so that it does not trigger printing the usage information.\r\n\r\nCloses #21250", "number": 21272, "review_comments": [ { "body": "I'm not sure about this, it is a usage error.\n", "created_at": "2016-11-02T16:27:24Z" }, { "body": "To be clear, I don't think we should change this to an I/O error. I think I'd be okay with a configuration error though.\n", "created_at": "2016-11-02T16:37:19Z" }, { "body": "That's fine, printing the help messages is tied to `ExitCodes.USAGE` in o.e.cli.Command, so that should work too.\n", "created_at": "2016-11-02T16:48:30Z" }, { "body": "Yeah, exactly, and I think usage should really be reserved for incompatible or invalid arguments, for example. This is more a state thing, so now I think I'm convincing myself that configuration is apt.\n", "created_at": "2016-11-02T17:00:44Z" } ], "title": "Removing plugin that isn't installed shouldn't trigger usage information" }
{ "commits": [], "files": [] }
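A minimal Java sketch of the design choice discussed in the review comments above. The class names and constants are hypothetical (loosely following the sysexits convention); the point is only that help text is tied to the usage exit code, so reporting a missing plugin with a configuration-style code keeps the error message visible.

```java
// A hypothetical, minimal sketch: help text is only printed for "usage" errors,
// so reporting a missing plugin with a different exit code keeps the actual error
// message visible. Names and constants are made up for illustration; this is not
// the Elasticsearch CLI.
public class PluginRemoveExitCodeSketch {

    static final int USAGE = 64;  // conventionally EX_USAGE
    static final int CONFIG = 78; // conventionally EX_CONFIG

    static class CliException extends RuntimeException {
        final int exitCode;

        CliException(int exitCode, String message) {
            super(message);
            this.exitCode = exitCode;
        }
    }

    static void removePlugin(String name) {
        boolean installed = false; // pretend we checked the plugins directory
        if (installed == false) {
            // before the fix this was reported as a usage error and drowned in help text
            throw new CliException(CONFIG, "plugin " + name
                    + " not found; run 'elasticsearch-plugin list' to get list of installed plugins");
        }
    }

    public static void main(String[] args) {
        try {
            removePlugin("x-pack");
        } catch (CliException e) {
            if (e.exitCode == USAGE) {
                System.err.println("<the long usage/help text would be printed here>");
            }
            System.err.println("ERROR: " + e.getMessage());
            System.exit(e.exitCode);
        }
    }
}
```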
{ "body": "**Elasticsearch version**: `Version: 5.0.0, Build: 253032b/2016-10-26T05:11:34.737Z, JVM: 1.8.0_111`\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**:\r\n\r\n```\r\njava version \"1.8.0_111\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_111-b14)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)\r\n```\r\n\r\n**OS version**:\r\n\r\n* Ubuntu 16.04.1\r\n* `Linux elastic 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux`\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nIn previous versions of elasticsearch, the `_cat/nodes` api accepted the parameter `full_id` to show the full node ID instead of an abbreviated version. It appears that this parameter is no longer accepted in `5.0`, and now returns a `400` response with an error:\r\n\r\n```json\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"request [/_cat/nodes] contains unrecognized parameter: [full_id]\"\r\n }\r\n ],\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"request [/_cat/nodes] contains unrecognized parameter: [full_id]\"\r\n },\r\n \"status\": 400\r\n}\r\n```\r\n\r\nI'm not very familiar with the codebase, but according to [these lines](https://github.com/elastic/elasticsearch/blob/5.0/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java#L217-L236), this parameter should still exist, correct?\r\n\r\nThis causes [elasticbeat](https://github.com/radoondas/elasticbeat/blob/master/beater/nodes.go#L18) to be unable to collect stats on individual nodes and significantly impairs its effectiveness.\r\n\r\n**Steps to reproduce**:\r\n 1. Install and start an Elasticsearch `5.0` node\r\n 2. `GET _cat/nodes?full_id=true`\r\n\r\n**Provide logs (if relevant)**:\r\n> No log lines emitted in elasticsearch logs.\r\n", "comments": [ { "body": "I believe you are right. I'll grab this in a few minutes if no one gets to it first.\n", "created_at": "2016-11-02T14:12:38Z" }, { "body": "I can do it too @nik9000 if you want.. it's consumed too late though.\n", "created_at": "2016-11-02T14:16:27Z" }, { "body": "Yeah, @jasontedor has a little spot where you can put things that might be consumed later to sneak by the check. I don't remember the name but I remember it being fairly obvious when you poke around.\n\nIf you want it you can have it.\n", "created_at": "2016-11-02T14:18:06Z" } ], "number": 21266, "title": "full_id parameter unrecognized at /_cat/nodes in 5.0" }
{ "body": "Since we now validate all consumed request parameters, users can't specify\r\n`_cat/nodes?full_id=true|false` anymore since this parameter is consumed late.\r\nThis commit adds a test for this parameter and consumes it before the request is processed.\r\n\r\nCloses #21266", "number": 21270, "review_comments": [ { "body": "This is one way to do it, but I'm wondering why you opted to do it this way instead of using the infrastructure that exists for handling response parameters? Namely, override `AbstractCatAction#responseParams` (being sure to include the response params from super).\n", "created_at": "2016-11-02T16:11:33Z" }, { "body": "because that is the most obvious way to do this. I like when things are obvious, especially when you look at where this is consumed: you don't need to look for it, it's right where I'd expect it.\n", "created_at": "2016-11-02T20:08:42Z" }, { "body": "This isn't where I would expect it to be consumed since it affects the output only, not the request handling. \n", "created_at": "2016-11-02T20:12:24Z" }, { "body": "that's a fair statement. I think if you put it anywhere else it's a bug. when you have request params then consume them in the request method.\n", "created_at": "2016-11-02T20:29:15Z" }, { "body": "even further I think `AbstractCatAction#responseParams` is pretty obscure and I think we should remove it if we can. It's just yet another place we need to maintain and look for params. I don't know if we really need it \n", "created_at": "2016-11-02T20:30:40Z" }, { "body": "@s1monw I'm sorry that I didn't take any time to reply last night. The situation with the response parameters is quite complicated. Look for example at `Settings#toXContent`. The situation here is that the `flat_settings` parameter is consumed there, but the signature `ToXContent#toXContent(XContentBuilder, Params)` is a general signature, we can't just go and add a boolean parameter for flat settings to the interface because it doesn't make sense in all situations. It is for this and similar reasons that I ultimately handled response parameters the way that I did. Barring a redesign, I would prefer that we remain consistent for now.\n\n> It's just yet another place we need to maintain and look for params.\n\nRight now it is how we handle output parameters.\n", "created_at": "2016-11-03T12:24:15Z" }, { "body": "yeah that's fine - it's an exception and that is why you likely added this. In this case we have no exception and doing it without hiding this parameter in some special method was possible. I strongly encourage people to do it this way. I don't think this falls under _taste_; we can't hide stuff just because we have an abstraction\n", "created_at": "2016-11-03T13:52:49Z" } ], "title": "Consume `full_id` request parameter early" }
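A minimal, self-contained Java sketch of why the parameter has to be consumed early, with a hypothetical request class instead of the real `RestRequest`: the request layer rejects any parameter that was never read, so a parameter that is only read inside a late response callback is reported as unrecognized unless it is captured up front.

```java
// A hypothetical, minimal sketch of why the parameter must be consumed early: the
// request layer records which parameters were read and rejects any that were not,
// so a parameter only read inside a late response callback looks "unrecognized".
// FakeRestRequest is a stand-in, not the real RestRequest.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ConsumeParamEarlySketch {

    static class FakeRestRequest {
        private final Map<String, String> params = new HashMap<>();
        private final Set<String> consumed = new HashSet<>();

        FakeRestRequest param(String key, String value) {
            params.put(key, value);
            return this;
        }

        boolean paramAsBoolean(String key, boolean defaultValue) {
            consumed.add(key);
            return params.containsKey(key) ? Boolean.parseBoolean(params.get(key)) : defaultValue;
        }

        void ensureAllConsumed() {
            for (String key : params.keySet()) {
                if (consumed.contains(key) == false) {
                    throw new IllegalArgumentException("request contains unrecognized parameter: [" + key + "]");
                }
            }
        }
    }

    public static void main(String[] args) {
        FakeRestRequest request = new FakeRestRequest().param("full_id", "true");
        // the fix: read the parameter before the request is validated and dispatched ...
        final boolean fullId = request.paramAsBoolean("full_id", false);
        request.ensureAllConsumed(); // ... so the unrecognized-parameter check passes
        Runnable lateResponseCallback = () -> System.out.println("render node ids, fullId=" + fullId);
        lateResponseCallback.run();
    }
}
```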
{ "commits": [ { "message": "Consume `full_id` request parameter early\n\nSince we now validate all consumed request parameter, users can't specify\n`_cat/nodes?full_id=true|false` anymore since this parameter is consumed late.\nThis commit adds a test for this parameter and consumes it before request is processed.\n\nCloses #21266" } ], "files": [ { "diff": "@@ -459,7 +459,6 @@\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]rest[/\\\\]RestController.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]rest[/\\\\]action[/\\\\]cat[/\\\\]RestCountAction.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]rest[/\\\\]action[/\\\\]cat[/\\\\]RestIndicesAction.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]rest[/\\\\]action[/\\\\]cat[/\\\\]RestNodesAction.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]rest[/\\\\]action[/\\\\]cat[/\\\\]RestShardsAction.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]rest[/\\\\]action[/\\\\]cat[/\\\\]RestThreadPoolAction.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]script[/\\\\]ScriptContextRegistry.java\" checks=\"LineLength\" />", "filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml", "status": "modified" }, { "diff": "@@ -85,7 +85,7 @@ public RestChannelConsumer doCatRequest(final RestRequest request, final NodeCli\n clusterStateRequest.clear().nodes(true);\n clusterStateRequest.local(request.paramAsBoolean(\"local\", clusterStateRequest.local()));\n clusterStateRequest.masterNodeTimeout(request.paramAsTime(\"master_timeout\", clusterStateRequest.masterNodeTimeout()));\n-\n+ final boolean fullId = request.paramAsBoolean(\"full_id\", false);\n return channel -> client.admin().cluster().state(clusterStateRequest, new RestActionListener<ClusterStateResponse>(channel) {\n @Override\n public void processResponse(final ClusterStateResponse clusterStateResponse) {\n@@ -99,7 +99,8 @@ public void processResponse(final NodesInfoResponse nodesInfoResponse) {\n client.admin().cluster().nodesStats(nodesStatsRequest, new RestResponseListener<NodesStatsResponse>(channel) {\n @Override\n public RestResponse buildResponse(NodesStatsResponse nodesStatsResponse) throws Exception {\n- return RestTable.buildResponse(buildTable(request, clusterStateResponse, nodesInfoResponse, nodesStatsResponse), channel);\n+ return RestTable.buildResponse(buildTable(fullId, request, clusterStateResponse, nodesInfoResponse,\n+ nodesStatsResponse), channel);\n }\n });\n }\n@@ -129,15 +130,17 @@ protected Table getTableWithHeader(final RestRequest request) {\n table.addCell(\"ram.percent\", \"alias:rp,ramPercent;text-align:right;desc:used machine memory ratio\");\n table.addCell(\"ram.max\", \"default:false;alias:rm,ramMax;text-align:right;desc:total machine memory\");\n table.addCell(\"file_desc.current\", \"default:false;alias:fdc,fileDescriptorCurrent;text-align:right;desc:used file descriptors\");\n- table.addCell(\"file_desc.percent\", \"default:false;alias:fdp,fileDescriptorPercent;text-align:right;desc:used file descriptor ratio\");\n+ table.addCell(\"file_desc.percent\",\n+ 
\"default:false;alias:fdp,fileDescriptorPercent;text-align:right;desc:used file descriptor ratio\");\n table.addCell(\"file_desc.max\", \"default:false;alias:fdm,fileDescriptorMax;text-align:right;desc:max file descriptors\");\n \n table.addCell(\"cpu\", \"alias:cpu;text-align:right;desc:recent cpu usage\");\n table.addCell(\"load_1m\", \"alias:l;text-align:right;desc:1m load avg\");\n table.addCell(\"load_5m\", \"alias:l;text-align:right;desc:5m load avg\");\n table.addCell(\"load_15m\", \"alias:l;text-align:right;desc:15m load avg\");\n table.addCell(\"uptime\", \"default:false;alias:u;text-align:right;desc:node uptime\");\n- table.addCell(\"node.role\", \"alias:r,role,nodeRole;desc:m:master eligible node, d:data node, i:ingest node, -:coordinating node only\");\n+ table.addCell(\"node.role\",\n+ \"alias:r,role,nodeRole;desc:m:master eligible node, d:data node, i:ingest node, -:coordinating node only\");\n table.addCell(\"master\", \"alias:m;desc:*:current master\");\n table.addCell(\"name\", \"alias:n;desc:node name\");\n \n@@ -150,9 +153,12 @@ protected Table getTableWithHeader(final RestRequest request) {\n table.addCell(\"query_cache.evictions\", \"alias:qce,queryCacheEvictions;default:false;text-align:right;desc:query cache evictions\");\n \n table.addCell(\"request_cache.memory_size\", \"alias:rcm,requestCacheMemory;default:false;text-align:right;desc:used request cache\");\n- table.addCell(\"request_cache.evictions\", \"alias:rce,requestCacheEvictions;default:false;text-align:right;desc:request cache evictions\");\n- table.addCell(\"request_cache.hit_count\", \"alias:rchc,requestCacheHitCount;default:false;text-align:right;desc:request cache hit counts\");\n- table.addCell(\"request_cache.miss_count\", \"alias:rcmc,requestCacheMissCount;default:false;text-align:right;desc:request cache miss counts\");\n+ table.addCell(\"request_cache.evictions\",\n+ \"alias:rce,requestCacheEvictions;default:false;text-align:right;desc:request cache evictions\");\n+ table.addCell(\"request_cache.hit_count\",\n+ \"alias:rchc,requestCacheHitCount;default:false;text-align:right;desc:request cache hit counts\");\n+ table.addCell(\"request_cache.miss_count\",\n+ \"alias:rcmc,requestCacheMissCount;default:false;text-align:right;desc:request cache miss counts\");\n \n table.addCell(\"flush.total\", \"alias:ft,flushTotal;default:false;text-align:right;desc:number of flushes\");\n table.addCell(\"flush.total_time\", \"alias:ftt,flushTotalTime;default:false;text-align:right;desc:time spent in flush\");\n@@ -165,16 +171,20 @@ protected Table getTableWithHeader(final RestRequest request) {\n table.addCell(\"get.missing_time\", \"alias:gmti,getMissingTime;default:false;text-align:right;desc:time spent in failed gets\");\n table.addCell(\"get.missing_total\", \"alias:gmto,getMissingTotal;default:false;text-align:right;desc:number of failed gets\");\n \n- table.addCell(\"indexing.delete_current\", \"alias:idc,indexingDeleteCurrent;default:false;text-align:right;desc:number of current deletions\");\n+ table.addCell(\"indexing.delete_current\",\n+ \"alias:idc,indexingDeleteCurrent;default:false;text-align:right;desc:number of current deletions\");\n table.addCell(\"indexing.delete_time\", \"alias:idti,indexingDeleteTime;default:false;text-align:right;desc:time spent in deletions\");\n table.addCell(\"indexing.delete_total\", \"alias:idto,indexingDeleteTotal;default:false;text-align:right;desc:number of delete ops\");\n- table.addCell(\"indexing.index_current\", 
\"alias:iic,indexingIndexCurrent;default:false;text-align:right;desc:number of current indexing ops\");\n+ table.addCell(\"indexing.index_current\",\n+ \"alias:iic,indexingIndexCurrent;default:false;text-align:right;desc:number of current indexing ops\");\n table.addCell(\"indexing.index_time\", \"alias:iiti,indexingIndexTime;default:false;text-align:right;desc:time spent in indexing\");\n table.addCell(\"indexing.index_total\", \"alias:iito,indexingIndexTotal;default:false;text-align:right;desc:number of indexing ops\");\n- table.addCell(\"indexing.index_failed\", \"alias:iif,indexingIndexFailed;default:false;text-align:right;desc:number of failed indexing ops\");\n+ table.addCell(\"indexing.index_failed\",\n+ \"alias:iif,indexingIndexFailed;default:false;text-align:right;desc:number of failed indexing ops\");\n \n table.addCell(\"merges.current\", \"alias:mc,mergesCurrent;default:false;text-align:right;desc:number of current merges\");\n- table.addCell(\"merges.current_docs\", \"alias:mcd,mergesCurrentDocs;default:false;text-align:right;desc:number of current merging docs\");\n+ table.addCell(\"merges.current_docs\",\n+ \"alias:mcd,mergesCurrentDocs;default:false;text-align:right;desc:number of current merging docs\");\n table.addCell(\"merges.current_size\", \"alias:mcs,mergesCurrentSize;default:false;text-align:right;desc:size of current merges\");\n table.addCell(\"merges.total\", \"alias:mt,mergesTotal;default:false;text-align:right;desc:number of completed merge ops\");\n table.addCell(\"merges.total_docs\", \"alias:mtd,mergesTotalDocs;default:false;text-align:right;desc:docs merged\");\n@@ -185,7 +195,8 @@ protected Table getTableWithHeader(final RestRequest request) {\n table.addCell(\"refresh.time\", \"alias:rti,refreshTime;default:false;text-align:right;desc:time spent in refreshes\");\n \n table.addCell(\"script.compilations\", \"alias:scrcc,scriptCompilations;default:false;text-align:right;desc:script compilations\");\n- table.addCell(\"script.cache_evictions\", \"alias:scrce,scriptCacheEvictions;default:false;text-align:right;desc:script cache evictions\");\n+ table.addCell(\"script.cache_evictions\",\n+ \"alias:scrce,scriptCacheEvictions;default:false;text-align:right;desc:script cache evictions\");\n \n table.addCell(\"search.fetch_current\", \"alias:sfc,searchFetchCurrent;default:false;text-align:right;desc:current fetch phase ops\");\n table.addCell(\"search.fetch_time\", \"alias:sfti,searchFetchTime;default:false;text-align:right;desc:time spent in fetch phase\");\n@@ -195,14 +206,19 @@ protected Table getTableWithHeader(final RestRequest request) {\n table.addCell(\"search.query_time\", \"alias:sqti,searchQueryTime;default:false;text-align:right;desc:time spent in query phase\");\n table.addCell(\"search.query_total\", \"alias:sqto,searchQueryTotal;default:false;text-align:right;desc:total query phase ops\");\n table.addCell(\"search.scroll_current\", \"alias:scc,searchScrollCurrent;default:false;text-align:right;desc:open scroll contexts\");\n- table.addCell(\"search.scroll_time\", \"alias:scti,searchScrollTime;default:false;text-align:right;desc:time scroll contexts held open\");\n+ table.addCell(\"search.scroll_time\",\n+ \"alias:scti,searchScrollTime;default:false;text-align:right;desc:time scroll contexts held open\");\n table.addCell(\"search.scroll_total\", \"alias:scto,searchScrollTotal;default:false;text-align:right;desc:completed scroll contexts\");\n \n table.addCell(\"segments.count\", \"alias:sc,segmentsCount;default:false;text-align:right;desc:number of 
segments\");\n table.addCell(\"segments.memory\", \"alias:sm,segmentsMemory;default:false;text-align:right;desc:memory used by segments\");\n- table.addCell(\"segments.index_writer_memory\", \"alias:siwm,segmentsIndexWriterMemory;default:false;text-align:right;desc:memory used by index writer\");\n- table.addCell(\"segments.version_map_memory\", \"alias:svmm,segmentsVersionMapMemory;default:false;text-align:right;desc:memory used by version map\");\n- table.addCell(\"segments.fixed_bitset_memory\", \"alias:sfbm,fixedBitsetMemory;default:false;text-align:right;desc:memory used by fixed bit sets for nested object field types and type filters for types referred in _parent fields\");\n+ table.addCell(\"segments.index_writer_memory\",\n+ \"alias:siwm,segmentsIndexWriterMemory;default:false;text-align:right;desc:memory used by index writer\");\n+ table.addCell(\"segments.version_map_memory\",\n+ \"alias:svmm,segmentsVersionMapMemory;default:false;text-align:right;desc:memory used by version map\");\n+ table.addCell(\"segments.fixed_bitset_memory\",\n+ \"alias:sfbm,fixedBitsetMemory;default:false;text-align:right;desc:memory used by fixed bit sets for nested object field types\" +\n+ \" and type filters for types referred in _parent fields\");\n \n table.addCell(\"suggest.current\", \"alias:suc,suggestCurrent;default:false;text-align:right;desc:number of current suggest ops\");\n table.addCell(\"suggest.time\", \"alias:suti,suggestTime;default:false;text-align:right;desc:time spend in suggest\");\n@@ -212,8 +228,8 @@ protected Table getTableWithHeader(final RestRequest request) {\n return table;\n }\n \n- private Table buildTable(RestRequest req, ClusterStateResponse state, NodesInfoResponse nodesInfo, NodesStatsResponse nodesStats) {\n- boolean fullId = req.paramAsBoolean(\"full_id\", false);\n+ private Table buildTable(boolean fullId, RestRequest req, ClusterStateResponse state, NodesInfoResponse nodesInfo,\n+ NodesStatsResponse nodesStats) {\n \n DiscoveryNodes nodes = state.getState().nodes();\n String masterId = nodes.getMasterNodeId();\n@@ -255,14 +271,18 @@ private Table buildTable(RestRequest req, ClusterStateResponse state, NodesInfoR\n table.addCell(osStats == null ? null : osStats.getMem() == null ? null : osStats.getMem().getUsedPercent());\n table.addCell(osStats == null ? null : osStats.getMem() == null ? null : osStats.getMem().getTotal());\n table.addCell(processStats == null ? null : processStats.getOpenFileDescriptors());\n- table.addCell(processStats == null ? null : calculatePercentage(processStats.getOpenFileDescriptors(), processStats.getMaxFileDescriptors()));\n+ table.addCell(processStats == null ? null : calculatePercentage(processStats.getOpenFileDescriptors(),\n+ processStats.getMaxFileDescriptors()));\n table.addCell(processStats == null ? null : processStats.getMaxFileDescriptors());\n \n table.addCell(osStats == null ? null : Short.toString(osStats.getCpu().getPercent()));\n boolean hasLoadAverage = osStats != null && osStats.getCpu().getLoadAverage() != null;\n- table.addCell(!hasLoadAverage || osStats.getCpu().getLoadAverage()[0] == -1 ? null : String.format(Locale.ROOT, \"%.2f\", osStats.getCpu().getLoadAverage()[0]));\n- table.addCell(!hasLoadAverage || osStats.getCpu().getLoadAverage()[1] == -1 ? null : String.format(Locale.ROOT, \"%.2f\", osStats.getCpu().getLoadAverage()[1]));\n- table.addCell(!hasLoadAverage || osStats.getCpu().getLoadAverage()[2] == -1 ? 
null : String.format(Locale.ROOT, \"%.2f\", osStats.getCpu().getLoadAverage()[2]));\n+ table.addCell(!hasLoadAverage || osStats.getCpu().getLoadAverage()[0] == -1 ? null :\n+ String.format(Locale.ROOT, \"%.2f\", osStats.getCpu().getLoadAverage()[0]));\n+ table.addCell(!hasLoadAverage || osStats.getCpu().getLoadAverage()[1] == -1 ? null :\n+ String.format(Locale.ROOT, \"%.2f\", osStats.getCpu().getLoadAverage()[1]));\n+ table.addCell(!hasLoadAverage || osStats.getCpu().getLoadAverage()[2] == -1 ? null :\n+ String.format(Locale.ROOT, \"%.2f\", osStats.getCpu().getLoadAverage()[2]));\n table.addCell(jvmStats == null ? null : jvmStats.getUptime());\n \n final String roles;", "filename": "core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java", "status": "modified" }, { "diff": "@@ -12,6 +12,10 @@\n \"type\" : \"string\",\n \"description\" : \"a short version of the Accept header, e.g. json, yaml\"\n },\n+ \"full_id\": {\n+ \"type\" : \"boolean\",\n+ \"description\" : \"Return the full node ID instead of the shortened version (default: false)\"\n+ },\n \"local\": {\n \"type\" : \"boolean\",\n \"description\" : \"Return local information, do not retrieve the state from master node (default: false)\"", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/cat.nodes.json", "status": "modified" }, { "diff": "@@ -57,3 +57,28 @@\n - match:\n $body: |\n /^ http \\n ((\\d{1,3}\\.){3}\\d{1,3}:\\d{1,5}\\n)+ $/\n+\n+---\n+\"Test cat nodes output with full_id set\":\n+ - skip:\n+ version: \" - 5.0.0\"\n+ reason: The full_id setting was rejected in 5.0.0 see #21266\n+\n+\n+ - do:\n+ cat.nodes:\n+ h: id\n+ # check for a 4 char non-whitespace character string\n+ - match:\n+ $body: |\n+ /^(\\S{4}\\n)+$/\n+\n+ - do:\n+ cat.nodes:\n+ h: id\n+ full_id: true\n+ # check for a 5+ char non-whitespace character string\n+ - match:\n+ $body: |\n+ /^(\\S{5,}\\n)+$/\n+", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/cat.nodes/10_basic.yaml", "status": "modified" } ] }
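The change in the record above hoists the read of `full_id` out of the async listener so the parameter is consumed (and therefore passes unrecognized-parameter validation) before the channel consumer runs. Below is a minimal sketch of that ordering using made-up stand-ins (`FakeRequest`, `prepare`, `validateAllConsumed`) rather than Elasticsearch's actual `RestRequest`/`RestChannelConsumer` classes; it only illustrates why reading the parameter late would trip the "all params must be consumed" check.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.Consumer;

// Hypothetical stand-in for a REST request that tracks which parameters were consumed.
class FakeRequest {
    private final Map<String, String> params = new HashMap<>();
    private final Set<String> consumed = new HashSet<>();

    FakeRequest(Map<String, String> params) { this.params.putAll(params); }

    boolean paramAsBoolean(String key, boolean defaultValue) {
        consumed.add(key);                       // reading a parameter marks it as consumed
        String v = params.get(key);
        return v == null ? defaultValue : Boolean.parseBoolean(v);
    }

    void validateAllConsumed() {                 // runs before any async work completes
        for (String key : params.keySet()) {
            if (consumed.contains(key) == false) {
                throw new IllegalArgumentException("unrecognized parameter: [" + key + "]");
            }
        }
    }
}

public class ConsumeEarlyExample {
    // Returns the async part of the handler; parameters must be read *before* this point.
    static Consumer<StringBuilder> prepare(FakeRequest request) {
        final boolean fullId = request.paramAsBoolean("full_id", false);   // consume early
        return out -> out.append(fullId ? "full node ids" : "short node ids");
    }

    public static void main(String[] args) {
        FakeRequest request = new FakeRequest(Map.of("full_id", "true"));
        Consumer<StringBuilder> consumer = prepare(request);
        request.validateAllConsumed();           // would have failed if full_id were read later
        StringBuilder out = new StringBuilder();
        consumer.accept(out);
        System.out.println(out);
    }
}
```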
{ "body": "**Elasticsearch version**: 5.0.0\n\n**OS version**: OSX\n\n**Description of the problem including expected versus actual behavior**:\nA poorly configured high-shard-count 2.4 index can prevent 5.0 from starting with little method for recovery\n\n**Steps to reproduce**:\n1. Start 2.4 node\n2. `PUT /foo -d '{ \"settings\" : { \"number_of_shards\" : 2000 } }'`\n3. Migrate 2.4 data path to 5.0\n4. Attempt to start Elasticsearch\n\n**Provide logs (if relevant)**:\nhttps://gist.github.com/eskibars/78ef933939e0052db3730bd6a22f3eb9\n\nAttempting to bypass via `/bin/elasticsearch -Eindex.max_number_of_shards=5000` doesn't allow elasticsearch to set up, since this is an index setting that can't be set at the node level.\n\nUnfortunately, the result of this can leave your cluster in an unrecoverable state. Consider the following scenario:\n1. Start 2.4 node\n2. `PUT /abc`\n3. `PUT /abc/t/1 -d '{ \"foo\": \"bar\" }'`\n4. `PUT /foo -d '{ \"settings\" : { \"number_of_shards\" : 2000 } }'`\n5. Stop 2.4 node\n6. Migrate 2.4 data path to 5.0\n7. Attempt to start Elasticsearch. Failure comes as described above\n\nNow you're in trouble unless you took snapshots. The shard check/exit that Elasticsearch does in (7) came _after_ Elasticsearch 5.0 renamed your `abc` index/directory, so if you try to revert back to 2.4 at this point, it's treated as a dangling index and `/abc/_search` produces 0 results\n", "comments": [ { "body": "first, I am not sure this is a bug, if you wanna run an index with that many shards you gotta set it up accordingly. Yet, I think we need to throw better exception.\n\n> Now you're in trouble unless you took snapshots. The shard check/exit that Elasticsearch does in (7) came after Elasticsearch 5.0 renamed your abc index/directory, so if you try to revert back to 2.4 at this point, it's treated as a dangling index and /abc/_search produces 0 results\n\nvery misleading comment almost reckless? The node must be started with `export ES_JAVA_OPTS=\"-Des.index.max_number_of_shards=128”` as documented [here](https://www.elastic.co/guide/en/elasticsearch/reference/master/index-modules.html)\n\nthat said I am not sure we should auto-upgrade old indices since it is just asking for trouble down the road unless the setting is set. I think we need to do this differently and provide a different error message if there is an index in the cluster that has `too many shards`\n", "created_at": "2016-10-25T14:56:30Z" }, { "body": "If you can work around this by adding `-Des.index.max_number_of_shards= 2000` to ES_JAVA_OPTS I think we're good so long as we make sure to tell people that in the startup failure _and_ we should tell them exactly why it is a bad idea. That way their upgrade will be exciting rather than a catastrophe.\n", "created_at": "2016-10-25T15:15:14Z" }, { "body": "@nik9000 I don't think it's a showstopper, we can target this for 5.0.1\n", "created_at": "2016-10-25T15:19:49Z" }, { "body": "++\n", "created_at": "2016-10-25T15:20:26Z" }, { "body": "+1 for this not being a blocker. The impact in real life is likely minimal. 
I opened https://github.com/elastic/elasticsearch-migration/issues/83 so the migration plugin will tell you about this.\n", "created_at": "2016-10-25T16:18:56Z" }, { "body": "https://github.com/elastic/elasticsearch-migration/issues/83 was fixed by @eskibars and we released a new version of the migration plugin with the fix\n", "created_at": "2016-10-25T19:59:59Z" }, { "body": "I believe this is still an issue and the PR #21269 should be revived to fix it.", "created_at": "2018-03-14T07:54:57Z" }, { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-03-14T07:55:34Z" }, { "body": "This issue describes a specific problem with 2.4->5.x upgrades, for which there is a warning in the 2.4 migration assistant. Both of the versions in question have long since passed their EOL date, and as such, we are not going to address this issue with any code changes. As such, I'm going to close this issue.\r\n\r\nIf we believe there are remaining issues with inappropriately archiving index settings, that should be raised as a new issue, with details as relate to a maintained version of Elasticsearch.", "created_at": "2020-12-03T23:15:45Z" } ], "number": 21114, "title": "High shard count in an index prevents upgrading to 5.0.0" }
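The failure described in this issue boils down to a bounded integer setting whose upper bound can only be raised through a JVM system property before startup. The sketch below is illustrative only, assuming the default cap of 1024 and the `es.index.max_number_of_shards` property mentioned in the comments; the class and method names are invented, not Elasticsearch's actual `Setting` API.

```java
public class ShardCountCheck {
    static int parseNumberOfShards(String value, int maxNumShards) {
        int shards = Integer.parseInt(value);
        if (shards > maxNumShards) {
            throw new IllegalArgumentException("Failed to parse value [" + value
                + "] for setting [index.number_of_shards] must be <= " + maxNumShards);
        }
        return shards;
    }

    public static void main(String[] args) {
        // raise the cap before startup, e.g. ES_JAVA_OPTS="-Des.index.max_number_of_shards=2000"
        int maxNumShards = Integer.parseInt(
            System.getProperty("es.index.max_number_of_shards", "1024"));
        try {
            System.out.println(parseNumberOfShards("2000", maxNumShards));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // what a 2.x index with 2000 shards hits on upgrade
        }
    }
}
```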
{ "body": "We have some settings like `index.number_of_shards` and `index.number_of_replicas`\r\nthat should never be archived if they are invalid. If such a setting is invalid\r\nthe node should not start-up instead. This change throws an IllegalStateException\r\nif such a setting can't be parsed.\r\n\r\nCloses #21114", "number": 21269, "review_comments": [ { "body": "Nit: I think we need to renamed the method + docs - it now also rejects mandatory and invalid settings.\n", "created_at": "2016-11-02T14:48:49Z" }, { "body": "nit: how about: \"setting [..] is invalid and mandatory\"?\n", "created_at": "2016-11-02T14:53:58Z" }, { "body": "Also- I traced the code and as far as I can tell, in the case of `index.number_of_shards` we never say which index was problematic? I think it's important to add that info.. \n", "created_at": "2016-11-02T14:55:12Z" }, { "body": "> setting [..] is invalid and mandatory\n\nmakes no sense to me, it's an illegal state because X and X is that a mandatory setting can't be archived. it's caused by Y \n", "created_at": "2016-11-02T15:18:35Z" }, { "body": "People don't know what archiving setting means. They just upgrade and their end up in an illegal state. I think it will be cleared not to mention the \"can't be archived\" part. But as said - just a nit, not a big deal.\n", "created_at": "2016-11-02T16:26:00Z" }, { "body": "shouldn't we add the information to the exception?\n", "created_at": "2016-11-02T16:27:10Z" }, { "body": "no we barf and fail the startup why log it more than once?\n", "created_at": "2016-11-02T20:09:24Z" }, { "body": "yeah no\n", "created_at": "2016-11-02T20:09:38Z" }, { "body": "Below is what I get when I try it out. As you can see that log message is drowned in many other log messages that don't mention the index name. A lot of this is guice and we're working on fixing it, but I think the easiest is to make sure that the index name is mentioned in the exception for now? people won't see it otherwise.\n\n```\n[2016-11-03T00:08:29,112][ERROR][o.e.g.GatewayMetaState ] [ePegTxb] failed to read local state, exiting...\njava.lang.IllegalStateException: can't archive mandatory setting [index.number_of_shards]\n at org.elasticsearch.common.settings.AbstractScopedSettings.archiveUnknownOrInvalidSettings(AbstractScopedSettings.java:528) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\n\n... a terminal screen worth of stack trace here ...\n\nCaused by: java.lang.IllegalArgumentException: Failed to parse value [2000] for setting [index.number_of_shards] must be <= 1024\n at org.elasticsearch.common.settings.Setting.parseInt(Setting.java:533) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\n at org.elasticsearch.common.settings.Setting.lambda$intSetting$12(Setting.java:504) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\n\n....\n\n[2016-11-03T00:08:29,144][ERROR][o.e.c.m.MetaDataIndexUpgradeService] [ePegTxb] [foo/TYJyxhVDRjGIMDrHomQciw] failed to process index settings: can't archive mandatory setting [index.number_of_shards]\n[2016-11-03T00:08:29,146][ERROR][o.e.g.GatewayMetaState ] [ePegTxb] failed to read local state, exiting...\njava.lang.IllegalStateException: can't archive mandatory setting [index.number_of_shards]\n at org.elasticsearch.common.settings.AbstractScopedSettings.archiveUnknownOrInvalidSettings(AbstractScopedSettings.java:528) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\n\n... 
another terminal screen worth of output\n\n[2016-11-03T00:08:29,161][ERROR][o.e.c.m.MetaDataIndexUpgradeService] [ePegTxb] [foo/TYJyxhVDRjGIMDrHomQciw] failed to process index settings: can't archive mandatory setting [index.number_of_shards]\n[2016-11-03T00:08:29,161][ERROR][o.e.g.GatewayMetaState ] [ePegTxb] failed to read local state, exiting...\njava.lang.IllegalStateException: can't archive mandatory setting [index.number_of_shards]\n at org.elasticsearch.common.settings.AbstractScopedSettings.archiveUnknownOrInvalidSettings(AbstractScopedSettings.java:528) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.archiveBrokenIndexSettings(MetaDataIndexUpgradeService.java:171) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.upgradeIndexMetaData(MetaDataIndexUpgradeService.java:81) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\n\n... another terminal\n\nCaused by: java.lang.IllegalArgumentException: Failed to parse value [2000] for setting [index.number_of_shards] must be <= 1024\n at org.elasticsearch.common.settings.Setting.parseInt(Setting.java:533) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\n at org.elasticsearch.common.settings.Setting.lambda$intSetting$12(Setting.java:504) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\n\n[2016-11-03T00:08:29,229][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]\norg.elasticsearch.bootstrap.StartupException: org.elasticsearch.common.inject.CreationException: Guice creation errors:\n\n1) Error injecting constructor, java.lang.IllegalStateException: can't archive mandatory setting [index.number_of_shards] <--- THESE IS REPEATED 4 times\n at org.elasticsearch.gateway.GatewayMetaState.<init>(Unknown Source)\n while locating org.elasticsearch.gateway.GatewayMetaState\n for parameter 4 at org.elasticsearch.gateway.GatewayService.<init>(Unknown Source)\n while locating org.elasticsearch.gateway.GatewayService \n\n... another terminal, this time full of guice information\n\n\nCaused by: java.lang.IllegalArgumentException: Failed to parse value [2000] for setting [index.number_of_shards] must be <= 1024\n at org.elasticsearch.common.settings.Setting.parseInt(Setting.java:533)\n at org.elasticsearch.common.settings.Setting.lambda$intSetting$12(Setting.java:504)\n at org.elasticsearch.common.settings.Setting$$Lambda$183/2092885124.apply(Unknown Source)\n at org.elasticsearch.common.settings.Setting.get(Setting.java:312)\n at org.elasticsearch.common.settings.AbstractScopedSettings.archiveUnknownOrInvalidSettings(AbstractScopedSettings.java:525)\n ... 47 more\n\n\nAnd this ^^^ is the last message on the screen.\n\n```\n", "created_at": "2016-11-03T07:35:27Z" }, { "body": "this would be quite a bit of refactoring and the settings validation is not index specific. I am very close to just close this issue and leave it as it is. It's such a corner case and all I tried to prevent is failing late. Even further the bloody index isn't even important you gotta set the node limit anyway... \n", "created_at": "2016-11-03T08:17:35Z" } ], "title": "Never archive mandatory settings" }
{ "commits": [ { "message": "Never archive mandatory settings\n\nWe have some settings like `index.number_of_shards` and `index.number_of_replicas`\nthat should never be archived if they are invalid. If such a setting is invalid\nthe node should not start-up instead. This change throws an IllegalStateException\nif such a settingsc can't be parsed.\n\nCloses #21114" }, { "message": "review feedback" } ], "files": [ { "diff": "@@ -165,15 +165,15 @@ static Setting<Integer> buildNumberOfShardsSetting() {\n throw new IllegalArgumentException(\"es.index.max_number_of_shards must be > 0\");\n }\n return Setting.intSetting(SETTING_NUMBER_OF_SHARDS, Math.min(5, maxNumShards), 1, maxNumShards,\n- Property.IndexScope);\n+ Property.IndexScope, Property.Mandatory);\n }\n \n public static final String INDEX_SETTING_PREFIX = \"index.\";\n public static final String SETTING_NUMBER_OF_SHARDS = \"index.number_of_shards\";\n public static final Setting<Integer> INDEX_NUMBER_OF_SHARDS_SETTING = buildNumberOfShardsSetting();\n public static final String SETTING_NUMBER_OF_REPLICAS = \"index.number_of_replicas\";\n public static final Setting<Integer> INDEX_NUMBER_OF_REPLICAS_SETTING =\n- Setting.intSetting(SETTING_NUMBER_OF_REPLICAS, 1, 0, Property.Dynamic, Property.IndexScope);\n+ Setting.intSetting(SETTING_NUMBER_OF_REPLICAS, 1, 0, Property.Dynamic, Property.IndexScope, Property.Mandatory);\n public static final String SETTING_SHADOW_REPLICAS = \"index.shadow_replicas\";\n public static final Setting<Boolean> INDEX_SHADOW_REPLICAS_SETTING =\n Setting.boolSetting(SETTING_SHADOW_REPLICAS, false, Property.IndexScope);", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java", "status": "modified" }, { "diff": "@@ -77,7 +77,12 @@ public IndexMetaData upgradeIndexMetaData(IndexMetaData indexMetaData) {\n IndexMetaData newMetaData = indexMetaData;\n // we have to run this first otherwise in we try to create IndexSettings\n // with broken settings and fail in checkMappingsCompatibility\n- newMetaData = archiveBrokenIndexSettings(newMetaData);\n+ try {\n+ newMetaData = archiveBrokenIndexSettings(newMetaData);\n+ } catch (Exception ex) {\n+ logger.error(\"{} failed to process index settings: {}\", newMetaData.getIndex(), ex.getMessage());\n+ throw ex;\n+ }\n // only run the check with the upgraded settings!!\n checkMappingsCompatibility(newMetaData);\n return markAsUpgraded(newMetaData);", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java", "status": "modified" }, { "diff": "@@ -507,6 +507,8 @@ private static Setting<?> findOverlappingSetting(Setting<?> newSetting, Map<Stri\n * associated value)\n * @param invalidConsumer callback on invalid settings (consumer receives invalid key, its\n * associated value and an exception)\n+ * @throws IllegalStateException if an {@link org.elasticsearch.common.settings.Setting.Property#Mandatory} setting must be archived\n+ *\n * @return a {@link Settings} instance with the unknown or invalid settings archived\n */\n public Settings archiveUnknownOrInvalidSettings(\n@@ -519,7 +521,14 @@ public Settings archiveUnknownOrInvalidSettings(\n try {\n Setting<?> setting = get(entry.getKey());\n if (setting != null) {\n- setting.get(settings);\n+ try {\n+ setting.get(settings);\n+ } catch (IllegalArgumentException ex) {\n+ if (setting.isMandatory()) {\n+ throw new IllegalStateException(\"can't archive mandatory setting [\" + setting.getKey() + \"]\", ex);\n+ }\n+ throw ex;\n+ }\n builder.put(entry.getKey(), 
entry.getValue());\n } else {\n if (entry.getKey().startsWith(ARCHIVED_SETTINGS_PREFIX) || isPrivateSetting(entry.getKey())) {", "filename": "core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java", "status": "modified" }, { "diff": "@@ -104,7 +104,11 @@ public enum Property {\n /**\n * Index scope\n */\n- IndexScope\n+ IndexScope,\n+ /**\n+ * Mandatory settings ie. index.number_of_shards\n+ */\n+ Mandatory\n }\n \n private final Key key;\n@@ -256,6 +260,12 @@ public boolean isShared() {\n return properties.contains(Property.Shared);\n }\n \n+ /**\n+ * Returns <code>true</code> if this setting is a mandatory setting or in other words settings without this particular setting are\n+ * invalid, otherwise <code>false</code>\n+ */\n+ public boolean isMandatory() { return properties.contains(Property.Mandatory);}\n+\n /**\n * Returns <code>true</code> iff this setting is a group setting. Group settings represent a set of settings rather than a single value.\n * The key, see {@link #getKey()}, in contrast to non-group settings is a prefix like <tt>cluster.store.</tt> that matches all settings", "filename": "core/src/main/java/org/elasticsearch/common/settings/Setting.java", "status": "modified" }, { "diff": "@@ -392,6 +392,18 @@ public void testArchiveBrokenIndexSettings() {\n assertEquals(\"foo\", settings.get(\"archived.index.unknown\"));\n assertEquals(Integer.toString(Version.CURRENT.id), settings.get(\"index.version.created\"));\n assertEquals(\"2s\", settings.get(\"index.refresh_interval\"));\n+\n+ IllegalStateException failure = expectThrows(IllegalStateException.class, () ->\n+ IndexScopedSettings.DEFAULT_SCOPED_SETTINGS.archiveUnknownOrInvalidSettings(\n+ Settings.builder()\n+ .put(\"index.version.created\", Version.CURRENT.id) // private setting\n+ .put(\"index.number_of_shards\", Integer.MAX_VALUE)\n+ .put(\"index.refresh_interval\", \"2s\").build(),\n+ e -> { fail(\"no invalid setting expected but got: \" + e);},\n+ (e, ex) -> fail(\"should not have been invoked, no invalid settings\")));\n+ assertEquals(\"can't archive mandatory setting [index.number_of_shards]\", failure.getMessage());\n+ assertEquals(\"Failed to parse value [2147483647] for setting [index.number_of_shards] must be <= 1024\",\n+ failure.getCause().getMessage());\n }\n \n }", "filename": "core/src/test/java/org/elasticsearch/index/IndexSettingsTests.java", "status": "modified" } ] }
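A stripped-down restatement of what the `archiveUnknownOrInvalidSettings` change above does, using plain maps instead of Elasticsearch's `Settings`/`Setting` classes (the `MANDATORY` set, the `parsers` map and the `archiveInvalid` name are all invented for this sketch): unknown or invalid settings get an `archived.` prefix, but a mandatory setting that fails to parse aborts with an `IllegalStateException` instead of being archived.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

public class ArchiveSketch {
    static final Set<String> MANDATORY = Set.of("index.number_of_shards", "index.number_of_replicas");

    static Map<String, String> archiveInvalid(Map<String, String> settings,
                                              Map<String, Function<String, ?>> parsers) {
        Map<String, String> result = new HashMap<>();
        settings.forEach((key, value) -> {
            try {
                Function<String, ?> parser = parsers.get(key);
                if (parser == null) {
                    throw new IllegalArgumentException("unknown setting [" + key + "]");
                }
                parser.apply(value);                 // throws if the value does not parse
                result.put(key, value);
            } catch (IllegalArgumentException e) {
                if (MANDATORY.contains(key)) {       // never archive a mandatory setting
                    throw new IllegalStateException("can't archive mandatory setting [" + key + "]", e);
                }
                result.put("archived." + key, value);
            }
        });
        return result;
    }

    public static void main(String[] args) {
        Map<String, Function<String, ?>> parsers = new HashMap<>();
        parsers.put("index.refresh_interval", s -> s);
        parsers.put("index.number_of_shards", Integer::parseInt);

        // an unknown setting is quietly archived
        System.out.println(archiveInvalid(Map.of("index.unknown", "foo"), parsers));
        // but an invalid mandatory setting fails the whole upgrade
        try {
            archiveInvalid(Map.of("index.number_of_shards", "2147483648"), parsers);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```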
{ "body": "I found this while going through snippets that aren't marked `// CONSOLE`. In this case the [this](https://github.com/elastic/elasticsearch/blame/master/docs/reference/search/suggesters/completion-suggest.asciidoc#L201-L222) snippet looks to suggest something that isn't implemented.\n", "comments": [ { "body": "I know @jimferenczi's been working on source filtering lately but the completion suggester is @areek's baby. It looks like this is simply a case of the parsing not being implemented which isn't super difficult to fix so I expect anyone can take it.\n", "created_at": "2016-09-14T15:09:56Z" }, { "body": "I can also take this if everyone else is busy!\n", "created_at": "2016-09-14T15:44:42Z" }, { "body": "I don't think it is worth implementing this since the _suggest endpoint will be deprecated in 5.0 (https://github.com/elastic/elasticsearch/pull/20435) and that it works with the _search API ?\n", "created_at": "2016-09-14T16:38:34Z" }, { "body": "Perfect! So long as we use the _search API in the docs we're all good then.\n", "created_at": "2016-09-14T16:44:58Z" }, { "body": "@nik9000 Completion Suggester documentation still says `_suggest` supports `_source` filtering.\n\nSince `_suggest` endpoint still exists in 5.0, I think source filtering should be included as per the current documentation, to be consistent with suggestions when using the `_search` endpoint.\n", "created_at": "2016-11-02T08:26:24Z" }, { "body": "Yeah, if we're going to remove it in 6.0 we don't need docks there but we should fix the docs in the 5.0 and 5.x branch. @russcam, are you willing to open a PR to do it?\n", "created_at": "2016-11-02T13:36:57Z" }, { "body": "> Yeah, if we're going to remove it in 6.0 we don't need docks there but we should fix the docs in the 5.0 and 5.x branch. @russcam, are you willing to open a PR to do it?\n\nIgnore that. I've found a moment so I'll just do it now.\n", "created_at": "2016-11-02T14:07:43Z" }, { "body": "@russcam I opened #21268.\n", "created_at": "2016-11-02T14:36:49Z" } ], "number": 20482, "title": "Completion suggester docs claim `_source` filters the source but this isn't implemented" }
{ "body": "We plan to deprecate `_suggest` during 5.0 so it isn't worth fixing\r\nit to support the `_source` parameter for `_source` filtering. But we\r\nshould fix the docs so they are accurate.\r\n\r\nSince this removes the last non-`// CONSOLE` line in\r\n`completion-suggest.asciidoc` this also removes it from the list of\r\nfiles that have non-`// CONSOLE` docs.\r\n\r\nCloses #20482", "number": 21268, "review_comments": [ { "body": "Shouldn't the query above use `size: 0` so no hits are expected?\n", "created_at": "2016-11-02T15:22:07Z" }, { "body": "> Note that the `_suggest` endpoint doesn't support this\n\nMaybe something like\n\nNote that the `_suggest` endpoint doesn't support source filtering but using `suggest` on the `_search` endpoint does.\n", "created_at": "2016-11-03T06:59:48Z" }, { "body": "Yes, dashed this off too quickly. Will fix.\n", "created_at": "2016-11-03T13:50:59Z" }, { "body": "That sounds better. I'll amend.\n", "created_at": "2016-11-03T13:51:32Z" } ], "title": "Make it clear _suggest doesn't support source filtering" }
{ "commits": [ { "message": "Make it clear _suggest doesn't support source filtering\n\nWe plan to deprecate `_suggest` during 5.0 so it isn't worth fixing\nit to support the `_source` parameter for `_source` filtering. But we\nshould fix the docs so they are accurate.\n\nSince this removes the last non-`// CONSOLE` line in\n`completion-suggest.asciidoc` this also removes it from the list of\nfiles that have non-`// CONSOLE` docs.\n\nCloses #20482" }, { "message": "Changes from review" } ], "files": [ { "diff": "@@ -156,7 +156,6 @@ buildRestTests.expectedUnconvertedCandidates = [\n 'reference/search/request/inner-hits.asciidoc',\n 'reference/search/request/rescore.asciidoc',\n 'reference/search/search-template.asciidoc',\n- 'reference/search/suggesters/completion-suggest.asciidoc',\n ]\n \n integTest {", "filename": "docs/build.gradle", "status": "modified" }, { "diff": "@@ -202,24 +202,66 @@ The configured weight for a suggestion is returned as `_score`. The\n `text` field uses the `input` of your indexed suggestion. Suggestions\n return the full document `_source` by default. The size of the `_source`\n can impact performance due to disk fetch and network transport overhead.\n-For best performance, filter out unnecessary fields from the `_source`\n+To save some network overhead, filter out unnecessary fields from the `_source`\n using <<search-request-source-filtering, source filtering>> to minimize\n-`_source` size. The following demonstrates an example completion query\n-with source filtering:\n+`_source` size. Note that the _suggest endpoint doesn't support source\n+filtering but using suggest on the `_search` endpoint does:\n \n [source,js]\n --------------------------------------------------\n-POST music/_suggest\n+POST music/_search?size=0\n {\n- \"_source\": \"completion.*\",\n- \"song-suggest\" : {\n- \"prefix\" : \"nir\",\n- \"completion\" : {\n- \"field\" : \"suggest\"\n+ \"_source\": \"suggest\",\n+ \"suggest\": {\n+ \"song-suggest\" : {\n+ \"prefix\" : \"nir\",\n+ \"completion\" : {\n+ \"field\" : \"suggest\"\n+ }\n }\n }\n }\n --------------------------------------------------\n+// CONSOLE\n+// TEST[continued]\n+\n+Which should look like:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"took\": 6,\n+ \"timed_out\": false,\n+ \"_shards\" : {\n+ \"total\" : 5,\n+ \"successful\" : 5,\n+ \"failed\" : 0\n+ },\n+ \"hits\": {\n+ \"total\" : 0,\n+ \"max_score\" : 0.0,\n+ \"hits\" : []\n+ },\n+ \"suggest\": {\n+ \"song-suggest\" : [ {\n+ \"text\" : \"nir\",\n+ \"offset\" : 0,\n+ \"length\" : 3,\n+ \"options\" : [ {\n+ \"text\" : \"Nirvana\",\n+ \"_index\": \"music\",\n+ \"_type\": \"song\",\n+ \"_id\": \"1\",\n+ \"_score\": 1.0,\n+ \"_source\": {\n+ \"suggest\": [\"Nevermind\", \"Nirvana\"]\n+ }\n+ } ]\n+ } ]\n+ }\n+}\n+--------------------------------------------------\n+// TESTRESPONSE[s/\"took\": 6,/\"took\": $body.took,/]\n \n The basic completion suggester query supports the following parameters:\n ", "filename": "docs/reference/search/suggesters/completion-suggest.asciidoc", "status": "modified" } ] }
{ "body": "The Maven pom definitions generated for ES v5.0.0+ use \\* excludes to prevent transitive dependencies. This breaks compatibility with the dependency manager Apache Ivy as it incorrectly translates POMs with * excludes to Ivy XML with * excludes which results in the main artifact being excluded as well (see https://issues.apache.org/jira/browse/IVY-1531). This subsequently breaks the consumption of these dependencies from SBT (has a fix for 0.13.3 though), Grape, and other software that uses Ivy to resolve dependencies.\r\n\r\nIn Ivy the \\* excludes of the form\r\n\r\n```\r\n<dependencies>\r\n <dependency>\r\n <groupId>io.netty</groupId>\r\n <artifactId>netty-buffer</artifactId>\r\n <version>4.1.5.Final</version>\r\n <scope>compile</scope>\r\n <exclusions>\r\n <exclusion>\r\n <groupId>*</groupId>\r\n <artifactId>*</artifactId>\r\n </exclusion>\r\n </exclusions>\r\n </dependency>\r\n ...\r\n```\r\n\r\nexclude the artifacts of `netty-buffer` itself as well.\r\n\r\nThis can be shown by running\r\n\r\n```\r\nivy -confs default -dependency org.elasticsearch.plugin transport-netty4-client 5.0.0 -retrieve \"[artifact](-[classifier]).[ext]\"\r\n```\r\n\r\nwhich yields\r\n\r\n```\r\n:: loading settings :: url = jar:file:/usr/local/Cellar/ivy/2.4.0/libexec/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml\r\n:: resolving dependencies :: org.elasticsearch.plugin#transport-netty4-client-caller;working\r\n confs: [default]\r\n found org.elasticsearch.plugin#transport-netty4-client;5.0.0 in public\r\n found io.netty#netty-buffer;4.1.5.Final in public\r\n found io.netty#netty-codec;4.1.5.Final in public\r\n found io.netty#netty-codec-http;4.1.5.Final in public\r\n found io.netty#netty-common;4.1.5.Final in public\r\n found io.netty#netty-handler;4.1.5.Final in public\r\n found io.netty#netty-resolver;4.1.5.Final in public\r\n found io.netty#netty-transport;4.1.5.Final in public\r\n:: resolution report :: resolve 564ms :: artifacts dl 6ms\r\n ---------------------------------------------------------------------\r\n | | modules || artifacts |\r\n | conf | number| search|dwnlded|evicted|| number|dwnlded|\r\n ---------------------------------------------------------------------\r\n | default | 8 | 0 | 0 | 0 || 2 | 0 |\r\n ---------------------------------------------------------------------\r\n:: retrieving :: org.elasticsearch.plugin#transport-netty4-client-caller\r\n confs: [default]\r\n 2 artifacts copied, 0 already retrieved (740kB/23ms)\r\n```\r\n\r\ni.e. it just yields the following 2 artifacts:\r\n\r\n```\r\n$ ls\r\nnetty-common.jar\r\ntransport-netty4-client.jar\r\n```\r\n\r\nmissing `netty-buffer` and others which use the \\* excludes.\r\n", "comments": [ { "body": "I have the same issue with \"java.lang.NoClassDefFoundError: org/apache/logging/log4j/Logger\"\n", "created_at": "2016-10-28T21:06:45Z" }, { "body": "@retemaadi I think yours is a different issue. With ES 5.0.0 you need to add logging dependencies. Have a look here https://discuss.elastic.co/t/issue-with-elastic-search-5-0-0-noclassdeffounderror-org-apache-logging-log4j-logger/64262/1\n", "created_at": "2016-10-29T08:29:26Z" }, { "body": "@fabriziofortino Thanks for the link!\nHowever, after I added log4j, still I get class not found problem for Netty.\n", "created_at": "2016-10-31T08:26:15Z" }, { "body": "@retemaadi which build system are you using? Can you provide more details? 
Please create a new topic on https://discuss.elastic.co so that we can discuss your specific case there.\n", "created_at": "2016-10-31T08:36:40Z" }, { "body": "Supersedes #20926\n", "created_at": "2016-10-31T12:58:52Z" }, { "body": "I'd really like to get this into 5.0.1. Which means someone's gotta claim it soon!\n", "created_at": "2016-10-31T21:12:32Z" }, { "body": "I've created #21234 as possible fix. Please let me know what you think @nik9000 @rjernst.\n", "created_at": "2016-11-01T08:37:54Z" }, { "body": "@ywelsch I'm using SBT in a Scala project and I tried to upgrade from (\"org.elasticsearch\" % \"elasticsearch\" % \"2.4.0\") to (\"org.elasticsearch.client\" % \"transport\" % \"5.0.0\"), which resulted in the exact failure that you described here.\n\nI'm pretty sure, my issue will be resolved when this issue get fixed, do you still want me to create a new discussion topic?\n", "created_at": "2016-11-01T08:42:26Z" }, { "body": "@retemaadi Upgrading SBT to version 0.13.13 will also fix this (see https://github.com/sbt/sbt/pull/2731). No need to create a new discussion topic.\n", "created_at": "2016-11-01T08:47:41Z" }, { "body": "@ywelsch Thanks for the tip! I'll give it a try.\n", "created_at": "2016-11-01T08:52:41Z" }, { "body": "@ywelsch I tried it out with SBT 0.13.13 and still I get the error. Here is the stacktrace:\n\njava.lang.NoClassDefFoundError: io/netty/channel/RecvByteBufAllocator\n at org.elasticsearch.transport.Netty4Plugin.getSettings(Netty4Plugin.java:39)\n at org.elasticsearch.plugins.PluginsService.lambda$getPluginSettings$0(PluginsService.java:85)\n at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267)\n at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)\n at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)\n at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)\n at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)\n at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)\n at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)\n at org.elasticsearch.plugins.PluginsService.getPluginSettings(PluginsService.java:85)\n at org.elasticsearch.client.transport.TransportClient.buildTemplate(TransportClient.java:115)\n at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:228)\n at org.elasticsearch.transport.client.PreBuiltTransportClient.<init>(PreBuiltTransportClient.java:69)\n at org.elasticsearch.transport.client.PreBuiltTransportClient.<init>(PreBuiltTransportClient.java:65)\n", "created_at": "2016-11-03T11:02:57Z" }, { "body": "@retemaadi you probably need to clean your ivy cache first if you previously tried with an older version of SBT: `rm -r ~/.ivy2/cache/*`\n", "created_at": "2016-11-03T11:37:29Z" }, { "body": "@ywelsch Thanks for your time, but still I see the same problem. So, I gave up! 
I'm going to wait for 5.0.1\n", "created_at": "2016-11-03T13:33:10Z" }, { "body": "@retemaadi I tried it and it works for me with SBT 0.13.13.\n\nMy `build.sbt` build file:\n\n```\nname := \"sbt-transport-example\"\n\nversion := \"1.0\"\n\nscalaVersion := \"2.12.0\"\n\nlibraryDependencies ++= Seq(\n \"org.elasticsearch\" % \"elasticsearch\" % \"5.0.0\",\n \"org.elasticsearch.client\" % \"transport\" % \"5.0.0\",\n \"org.apache.logging.log4j\" % \"log4j-api\" % \"2.7\",\n \"org.apache.logging.log4j\" % \"log4j-core\" % \"2.7\"\n)\n```\n\nMy Java class in `src/main/java`:\n\n```\nimport org.elasticsearch.client.transport.TransportClient;\nimport org.elasticsearch.common.settings.Settings;\nimport org.elasticsearch.common.transport.InetSocketTransportAddress;\nimport org.elasticsearch.transport.client.PreBuiltTransportClient;\n\nimport java.net.InetAddress;\nimport java.net.UnknownHostException;\n\npublic class Main {\n public static void main(String... args) throws UnknownHostException {\n TransportClient client = new PreBuiltTransportClient(Settings.EMPTY)\n .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(\"localhost\"), 9300));\n System.out.println(client.admin().cluster().prepareState().get().getState().prettyPrint());\n client.close();\n }\n}\n```\n\nIf you cannot get it to work with SBT 0.13.13, please open an issue on `https://discuss.elastic.co` with additional information and ping me there (@ywelsch).\n", "created_at": "2016-11-03T17:33:20Z" }, { "body": "Closed by #21234\n", "created_at": "2016-11-04T09:44:21Z" }, { "body": "@ywelsch Thanks! It was the issue of bundled IntelliJ SBT.\n", "created_at": "2016-11-06T21:29:41Z" } ], "number": 21170, "title": "Ivy cannot properly resolve ES v5.0.0 dependencies" }
{ "body": "Dependencies are currently marked as non-transitive in generated POM files by adding a wildcard (*) exclusion. This breaks compatibility with the dependency manager Apache Ivy as it incorrectly translates POMs with * excludes to Ivy XML with * excludes which results in the main artifact being excluded as well (see https://issues.apache.org/jira/browse/IVY-1531). To stay compatible with the current release of Ivy this commit uses explicit excludes for each transitive artifact instead to ensure that the main artifact is not excluded. This should be revisited when we upgrade Gradle to a higher version as the current one (2.13) as Gradle automatically translates non-transitive dependencies to * excludes in 2.14+.\r\n\r\nRelates to #21170\r\n\r\nI've tested this patch as follows:\r\n\r\n1) applied patch to 5.0 branch and published artifacts to local maven repository by running `gradle publishToMavenLocal`\r\n2) checked various dependency management systems how they compare between the officially published 5.0.0 and the generated 5.0.1-SNAPSHOT version with the applied patch.\r\n\r\nDependency management systems:\r\n\r\n**Gradle**\r\n\r\nBuild file:\r\n\r\n```\r\napply plugin: 'java'\r\n\r\nrepositories {\r\n\tmavenLocal()\r\n\tmavenCentral()\r\n}\r\n\r\ndependencies {\r\n\tcompile 'org.elasticsearch:elasticsearch:5.0.0'\r\n}\r\n```\r\n\r\nRun `gradle dependencies --configuration compile` which yields\r\n\r\n```\r\n\\--- org.elasticsearch:elasticsearch:5.0.0\r\n +--- org.apache.lucene:lucene-core:6.2.0\r\n +--- org.apache.lucene:lucene-analyzers-common:6.2.0\r\n +--- org.apache.lucene:lucene-backward-codecs:6.2.0\r\n +--- org.apache.lucene:lucene-grouping:6.2.0\r\n +--- org.apache.lucene:lucene-highlighter:6.2.0\r\n +--- org.apache.lucene:lucene-join:6.2.0\r\n +--- org.apache.lucene:lucene-memory:6.2.0\r\n +--- org.apache.lucene:lucene-misc:6.2.0\r\n +--- org.apache.lucene:lucene-queries:6.2.0\r\n +--- org.apache.lucene:lucene-queryparser:6.2.0\r\n +--- org.apache.lucene:lucene-sandbox:6.2.0\r\n +--- org.apache.lucene:lucene-spatial:6.2.0\r\n +--- org.apache.lucene:lucene-spatial-extras:6.2.0\r\n +--- org.apache.lucene:lucene-spatial3d:6.2.0\r\n +--- org.apache.lucene:lucene-suggest:6.2.0\r\n +--- org.elasticsearch:securesm:1.1\r\n +--- net.sf.jopt-simple:jopt-simple:5.0.2\r\n +--- com.carrotsearch:hppc:0.7.1\r\n +--- joda-time:joda-time:2.9.4\r\n +--- org.joda:joda-convert:1.2\r\n +--- org.yaml:snakeyaml:1.15\r\n +--- com.fasterxml.jackson.core:jackson-core:2.8.1\r\n +--- com.fasterxml.jackson.dataformat:jackson-dataformat-smile:2.8.1\r\n +--- com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.8.1\r\n +--- com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:2.8.1\r\n +--- com.tdunning:t-digest:3.0\r\n +--- org.hdrhistogram:HdrHistogram:2.1.6\r\n \\--- net.java.dev.jna:jna:4.2.2\r\n```\r\n\r\nReplace the 5.0.0 dependency by 5.0.1-SNAPSHOT in the build file and run same command again:\r\n\r\n```\r\n\\--- org.elasticsearch:elasticsearch:5.0.1-SNAPSHOT\r\n +--- org.apache.lucene:lucene-core:6.2.1\r\n +--- org.apache.lucene:lucene-analyzers-common:6.2.1\r\n +--- org.apache.lucene:lucene-backward-codecs:6.2.1\r\n +--- org.apache.lucene:lucene-grouping:6.2.1\r\n +--- org.apache.lucene:lucene-highlighter:6.2.1\r\n +--- org.apache.lucene:lucene-join:6.2.1\r\n +--- org.apache.lucene:lucene-memory:6.2.1\r\n +--- org.apache.lucene:lucene-misc:6.2.1\r\n +--- org.apache.lucene:lucene-queries:6.2.1\r\n +--- org.apache.lucene:lucene-queryparser:6.2.1\r\n +--- 
org.apache.lucene:lucene-sandbox:6.2.1\r\n +--- org.apache.lucene:lucene-spatial:6.2.1\r\n +--- org.apache.lucene:lucene-spatial-extras:6.2.1\r\n +--- org.apache.lucene:lucene-spatial3d:6.2.1\r\n +--- org.apache.lucene:lucene-suggest:6.2.1\r\n +--- org.elasticsearch:securesm:1.1\r\n +--- net.sf.jopt-simple:jopt-simple:5.0.2\r\n +--- com.carrotsearch:hppc:0.7.1\r\n +--- joda-time:joda-time:2.9.4\r\n +--- org.joda:joda-convert:1.2\r\n +--- org.yaml:snakeyaml:1.15\r\n +--- com.fasterxml.jackson.core:jackson-core:2.8.1\r\n +--- com.fasterxml.jackson.dataformat:jackson-dataformat-smile:2.8.1\r\n +--- com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.8.1\r\n +--- com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:2.8.1\r\n +--- com.tdunning:t-digest:3.0\r\n +--- org.hdrhistogram:HdrHistogram:2.1.6\r\n \\--- net.java.dev.jna:jna:4.2.2\r\n```\r\n\r\nConclusion:\r\n\r\nGradle was handling the * excludes fine before and is still working as expected afterwards.\r\n\r\n\r\n**Ivy**\r\n\r\nCreate `ivysettings.xml` file with following contents:\r\n\r\n```\r\n<ivysettings>\r\n <settings defaultResolver=\"default\"/>\r\n <resolvers>\r\n <chain name=\"default\">\r\n <filesystem name=\"local-maven2\" m2compatible=\"true\" checkmodified=\"true\" changingPattern=\".*SNAPSHOT\"> \r\n <ivy pattern=\"${user.home}/.m2/repository/[organisation]/[module]/[revision]/[module]-[revision].pom\" />\r\n <artifact pattern=\"${user.home}/.m2/repository/[organisation]/[module]/[revision]/[artifact]-[revision](-[classifier]).[ext]\" />\r\n </filesystem>\r\n <ibiblio name=\"central\" m2compatible=\"true\"/>\r\n </chain>\r\n </resolvers>\r\n</ivysettings>\r\n```\r\n\r\nThen run\r\n`ivy -settings ivysettings.xml -confs default -dependency org.elasticsearch elasticsearch 5.0.0 -retrieve \"jars/[module]-[artifact](-[revision]).[ext]\"`\r\n\r\nwhich yields\r\n\r\n```\r\n\t---------------------------------------------------------------------\r\n\t| | modules || artifacts |\r\n\t| conf | number| search|dwnlded|evicted|| number|dwnlded|\r\n\t---------------------------------------------------------------------\r\n\t| default | 29 | 29 | 29 | 0 || 12 | 12 |\r\n\t---------------------------------------------------------------------\r\n```\r\n\r\nOnly 12 artifacts instead of the 29 that we expect.\r\n\r\n```\r\n$ ls jars\r\nHdrHistogram-HdrHistogram-2.1.6.jar jackson-core-jackson-core-2.8.1.jar joda-time-joda-time-2.9.4.jar securesm-securesm-1.1.jar\r\nelasticsearch-elasticsearch-5.0.0.jar jna-jna-4.2.2.jar jopt-simple-jopt-simple-5.0.2.jar snakeyaml-snakeyaml-1.15.jar\r\nhppc-hppc-0.7.1.jar joda-convert-joda-convert-1.2.jar lucene-core-lucene-core-6.2.0.jar t-digest-t-digest-3.0.jar\r\n```\r\n\r\nLet's repeat the same for 5.0.1-SNAPSHOT (but first `rm jars/*`):\r\n\r\n`ivy -settings ivysettings.xml -confs default -dependency org.elasticsearch elasticsearch 5.0.1-SNAPSHOT -retrieve \"jars/[module]-[artifact](-[revision]).[ext]\"`\r\n\r\nwhich yields:\r\n\r\n```\r\n\t---------------------------------------------------------------------\r\n\t| | modules || artifacts |\r\n\t| conf | number| search|dwnlded|evicted|| number|dwnlded|\r\n\t---------------------------------------------------------------------\r\n\t| default | 29 | 16 | 16 | 0 || 29 | 19 |\r\n\t---------------------------------------------------------------------\r\n```\r\n\r\nAll 29 artifacts are accounted for.\r\n\r\n```\r\n$ ls jars\r\nHdrHistogram-HdrHistogram-2.1.6.jar 
lucene-highlighter-lucene-highlighter-6.2.1.jar\r\nelasticsearch-elasticsearch-5.0.1-SNAPSHOT.jar lucene-join-lucene-join-6.2.1.jar\r\nhppc-hppc-0.7.1.jar lucene-memory-lucene-memory-6.2.1.jar\r\njackson-core-jackson-core-2.8.1.jar lucene-misc-lucene-misc-6.2.1.jar\r\njackson-dataformat-cbor-jackson-dataformat-cbor-2.8.1.jar lucene-queries-lucene-queries-6.2.1.jar\r\njackson-dataformat-smile-jackson-dataformat-smile-2.8.1.jar lucene-queryparser-lucene-queryparser-6.2.1.jar\r\njackson-dataformat-yaml-jackson-dataformat-yaml-2.8.1.jar lucene-sandbox-lucene-sandbox-6.2.1.jar\r\njna-jna-4.2.2.jar lucene-spatial-extras-lucene-spatial-extras-6.2.1.jar\r\njoda-convert-joda-convert-1.2.jar lucene-spatial-lucene-spatial-6.2.1.jar\r\njoda-time-joda-time-2.9.4.jar lucene-spatial3d-lucene-spatial3d-6.2.1.jar\r\njopt-simple-jopt-simple-5.0.2.jar lucene-suggest-lucene-suggest-6.2.1.jar\r\nlucene-analyzers-common-lucene-analyzers-common-6.2.1.jar securesm-securesm-1.1.jar\r\nlucene-backward-codecs-lucene-backward-codecs-6.2.1.jar snakeyaml-snakeyaml-1.15.jar\r\nlucene-core-lucene-core-6.2.1.jar t-digest-t-digest-3.0.jar\r\nlucene-grouping-lucene-grouping-6.2.1.jar\r\n```\r\n\r\n**SBT** (version 0.13.12)\r\n\r\nLet's create a simple build file (`build.sbt`):\r\n\r\n```\r\nname := \"test\"\r\n\r\nresolvers += Resolver.mavenLocal\r\n\r\nlibraryDependencies ++= Seq(\r\n \"org.elasticsearch\" % \"elasticsearch\" % \"5.0.0\"\r\n)\r\n\r\n```\r\n\r\nand run `sbt 'show runtime:fullClasspath'`\r\n\r\n```\r\nList(Attributed(/Users/ywelsch/dev/sbt-example/target/scala-2.10/classes), Attributed(/Users/ywelsch/.sbt/boot/scala-2.10.6/lib/scala-library.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.elasticsearch/elasticsearch/jars/elasticsearch-5.0.0.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-core/jars/lucene-core-6.2.0.jar), Attributed(/Users/ywelsch/.m2/repository/org/elasticsearch/securesm/1.1/securesm-1.1.jar), Attributed(/Users/ywelsch/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.2/jopt-simple-5.0.2.jar), Attributed(/Users/ywelsch/.m2/repository/com/carrotsearch/hppc/0.7.1/hppc-0.7.1.jar), Attributed(/Users/ywelsch/.m2/repository/joda-time/joda-time/2.9.4/joda-time-2.9.4.jar), Attributed(/Users/ywelsch/.m2/repository/org/joda/joda-convert/1.2/joda-convert-1.2.jar), Attributed(/Users/ywelsch/.m2/repository/org/yaml/snakeyaml/1.15/snakeyaml-1.15.jar), Attributed(/Users/ywelsch/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.8.1/jackson-core-2.8.1.jar), Attributed(/Users/ywelsch/.m2/repository/com/tdunning/t-digest/3.0/t-digest-3.0.jar), Attributed(/Users/ywelsch/.m2/repository/org/hdrhistogram/HdrHistogram/2.1.6/HdrHistogram-2.1.6.jar), Attributed(/Users/ywelsch/.m2/repository/net/java/dev/jna/jna/4.2.2/jna-4.2.2.jar))\r\n```\r\n\r\nSame as for Ivy, only 12 artifacts instead of the 29 that we expect.\r\n\r\nReplace the 5.0.0 dependency by 5.0.1-SNAPSHOT in the build file and run same command again:\r\n\r\n```\r\nList(Attributed(/Users/ywelsch/dev/sbt-example/target/scala-2.10/classes), Attributed(/Users/ywelsch/.sbt/boot/scala-2.10.6/lib/scala-library.jar), Attributed(/Users/ywelsch/.m2/repository/org/elasticsearch/elasticsearch/5.0.1-SNAPSHOT/elasticsearch-5.0.1-SNAPSHOT.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-core/jars/lucene-core-6.2.1.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-analyzers-common/jars/lucene-analyzers-common-6.2.1.jar), 
Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-backward-codecs/jars/lucene-backward-codecs-6.2.1.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-grouping/jars/lucene-grouping-6.2.1.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-highlighter/jars/lucene-highlighter-6.2.1.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-join/jars/lucene-join-6.2.1.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-memory/jars/lucene-memory-6.2.1.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-misc/jars/lucene-misc-6.2.1.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-queries/jars/lucene-queries-6.2.1.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-queryparser/jars/lucene-queryparser-6.2.1.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-sandbox/jars/lucene-sandbox-6.2.1.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-spatial/jars/lucene-spatial-6.2.1.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-spatial-extras/jars/lucene-spatial-extras-6.2.1.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-spatial3d/jars/lucene-spatial3d-6.2.1.jar), Attributed(/Users/ywelsch/.ivy2/cache/org.apache.lucene/lucene-suggest/jars/lucene-suggest-6.2.1.jar), Attributed(/Users/ywelsch/.m2/repository/org/elasticsearch/securesm/1.1/securesm-1.1.jar), Attributed(/Users/ywelsch/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.2/jopt-simple-5.0.2.jar), Attributed(/Users/ywelsch/.m2/repository/com/carrotsearch/hppc/0.7.1/hppc-0.7.1.jar), Attributed(/Users/ywelsch/.m2/repository/joda-time/joda-time/2.9.4/joda-time-2.9.4.jar), Attributed(/Users/ywelsch/.m2/repository/org/joda/joda-convert/1.2/joda-convert-1.2.jar), Attributed(/Users/ywelsch/.m2/repository/org/yaml/snakeyaml/1.15/snakeyaml-1.15.jar), Attributed(/Users/ywelsch/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.8.1/jackson-core-2.8.1.jar), Attributed(/Users/ywelsch/.m2/repository/com/fasterxml/jackson/dataformat/jackson-dataformat-smile/2.8.1/jackson-dataformat-smile-2.8.1.jar), Attributed(/Users/ywelsch/.m2/repository/com/fasterxml/jackson/dataformat/jackson-dataformat-yaml/2.8.1/jackson-dataformat-yaml-2.8.1.jar), Attributed(/Users/ywelsch/.m2/repository/com/fasterxml/jackson/dataformat/jackson-dataformat-cbor/2.8.1/jackson-dataformat-cbor-2.8.1.jar), Attributed(/Users/ywelsch/.m2/repository/com/tdunning/t-digest/3.0/t-digest-3.0.jar), Attributed(/Users/ywelsch/.m2/repository/org/hdrhistogram/HdrHistogram/2.1.6/HdrHistogram-2.1.6.jar), Attributed(/Users/ywelsch/.m2/repository/net/java/dev/jna/jna/4.2.2/jna-4.2.2.jar))\r\n```\r\n\r\nAll 29 artifacts are there.", "number": 21234, "review_comments": [], "title": "Generate POM files with non-wildcard excludes" }
{ "commits": [ { "message": "Generate POM files with non-wildcard excludes\n\nDependencies are currently marked as non-transitive in generated POM files by adding a wildcard (*) exclusion.\nThis does not play with the Apache Ivy dependency manager as it interprets these wildcards to exclude the main artifact of the dependency as well.\nThis commit uses explicit excludes for each transitive artifact instead to ensure that the main artifact is not excluded." }, { "message": "Added link to Ivy issue and extended Javadoc comment" } ], "files": [ { "diff": "@@ -28,6 +28,7 @@ import org.gradle.api.Task\n import org.gradle.api.XmlProvider\n import org.gradle.api.artifacts.Configuration\n import org.gradle.api.artifacts.ModuleDependency\n+import org.gradle.api.artifacts.ModuleVersionIdentifier\n import org.gradle.api.artifacts.ProjectDependency\n import org.gradle.api.artifacts.ResolvedArtifact\n import org.gradle.api.artifacts.dsl.RepositoryHandler\n@@ -294,12 +295,15 @@ class BuildPlugin implements Plugin<Project> {\n * Returns a closure which can be used with a MavenPom for fixing problems with gradle generated poms.\n *\n * <ul>\n- * <li>Remove transitive dependencies (using wildcard exclusions, fixed in gradle 2.14)</li>\n- * <li>Set compile time deps back to compile from runtime (known issue with maven-publish plugin)\n+ * <li>Remove transitive dependencies. We currently exclude all artifacts explicitly instead of using wildcards\n+ * as Ivy incorrectly translates POMs with * excludes to Ivy XML with * excludes which results in the main artifact\n+ * being excluded as well (see https://issues.apache.org/jira/browse/IVY-1531). Note that Gradle 2.14+ automatically\n+ * translates non-transitive dependencies to * excludes. We should revisit this when upgrading Gradle.</li>\n+ * <li>Set compile time deps back to compile from runtime (known issue with maven-publish plugin)</li>\n * </ul>\n */\n private static Closure fixupDependencies(Project project) {\n- // TODO: remove this when enforcing gradle 2.14+, it now properly handles exclusions\n+ // TODO: revisit this when upgrading to Gradle 2.14+, see Javadoc comment above\n return { XmlProvider xml ->\n // first find if we have dependencies at all, and grab the node\n NodeList depsNodes = xml.asNode().get('dependencies')\n@@ -334,10 +338,19 @@ class BuildPlugin implements Plugin<Project> {\n continue\n }\n \n- // we now know we have something to exclude, so add a wildcard exclusion element\n- Node exclusion = depNode.appendNode('exclusions').appendNode('exclusion')\n- exclusion.appendNode('groupId', '*')\n- exclusion.appendNode('artifactId', '*')\n+ // we now know we have something to exclude, so add exclusions for all artifacts except the main one\n+ Node exclusions = depNode.appendNode('exclusions')\n+ for (ResolvedArtifact artifact : artifacts) {\n+ ModuleVersionIdentifier moduleVersionIdentifier = artifact.moduleVersion.id;\n+ String depGroupId = moduleVersionIdentifier.group\n+ String depArtifactId = moduleVersionIdentifier.name\n+ // add exclusions for all artifacts except the main one\n+ if (depGroupId != groupId || depArtifactId != artifactId) {\n+ Node exclusion = exclusions.appendNode('exclusion')\n+ exclusion.appendNode('groupId', depGroupId)\n+ exclusion.appendNode('artifactId', depArtifactId)\n+ }\n+ }\n }\n }\n }", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy", "status": "modified" } ] }
{ "body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue. Note that whether you're filing a bug report or a\nfeature request, ensure that your submission is for an\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\nBug reports on an OS that we do not support or feature requests\nspecific to an OS that we do not support will be closed.\n-->\n\n<!--\nIf you are filing a bug report, please remove the below feature\nrequest block and provide responses for all of the below items.\n-->\n\n**Elasticsearch version**: 5.0\n\n**Describe the feature**:\n\nSeen a lot of error `entity content is too long [10800427] for the configured buffer limit [10485760]` when reindexing from remote, please make buffer limit configurable, currently it's hardcoded to 10MB.\n", "comments": [ { "body": "I'll add this to my list of things to investigate on Monday. You can work around it by using a smaller batch size which looks like:\n\n```\nPOST _reindex\n{\n \"source\": {\n \"index\": \"source\",\n \"size\": 100\n },\n \"dest\": {\n \"index\": \"dest\"\n }\n}\n```\n\nMy instinct is that a buffer limit of 100mb is more appropriate than 10mb for reindex-from-remote and if you want to go beyond that you should use a smaller batch size. Buffers larger than 100mb are likely to send similarly sized bulks and we know bulk performance suffers with batches that large.\n", "created_at": "2016-10-30T02:54:01Z" }, { "body": "[reproduction](https://gist.github.com/nik9000/526a45ea9a231d01598d8dafa0704ca5)\n", "created_at": "2016-10-31T19:50:45Z" }, { "body": "@nik9000 , [docs](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html) are showing that the buffer size is 200mb in 5.x, but i'm hitting a 100mb limit in 5.1.1:\r\n\r\n```\r\n{\"error\":{\"root_cause\":[{\"type\":\"illegal_argument_exception\",\"reason\":\"Remote responded with a chunk that was too large. Use a smaller batch size.\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"Remote responded with a chunk that was too large. Use a smaller batch size.\",\"caused_by\":{\"type\":\"content_too_long_exception\",\"reason\":\"entity content is too long [245498346] for the configured buffer limit [104857600]\"}},\"status\":400}Done with <removed>, moving to the next one...\r\n```", "created_at": "2017-01-04T18:17:57Z" }, { "body": "> @nik9000 , docs are showing that the buffer size is 200mb in 5.x, but i'm hitting a 100mb limit in 5.1.1:\r\n\r\nYeah, I remember fixing that but I it looks like I fixed it in a PR that went only to 5.2+.", "created_at": "2017-01-04T18:46:36Z" }, { "body": "@PhaedrusTheGreek, I pushed 1294035aa7b4549beeeaff22b5403c3479c1bc5d to 5.1 to fix the docs in 5.1. It looks right in 5.x.", "created_at": "2017-01-05T18:28:28Z" }, { "body": "This memory limit **really** needs to be configurable. The limit that's currently in place makes remote reindexing a nightmare. I have one of two options:\r\n\r\nOption 1:\r\nReindex all the indexes with a `size` of 1 to ensure I don't hit this limit. This will take an immense amount of time because of how slow it will be.\r\n\r\nOption 2:\r\nRun the reindex with size of 3000. If it fails, try again with 1000. If that fails, try again with 500. If that fails, try again with 100. If that fails, try again with 50. 
If that fails, try again with 25......... (the numbers here don't matter but the point is -- with an index dozens of gigabytes, I can't be sure what the maximum size that I can use is without actually running the reindex).\r\n\r\nIf the limit can't be changed due to other issues that it would cause, then is it possible to add the ability to resume a reindex instead of having to start over every time? Thank you!", "created_at": "2017-03-11T04:54:59Z" }, { "body": "> If the limit can't be changed due to other issues that it would cause, then is it possible to add the ability to resume a reindex instead of having to start over every time? Thank you!\r\n\r\nNot really.... Not with the tools we have now. With tools we're building, maybe.\r\n\r\nI wonder how expensive it'd be to have a script that filtered out documents that are huge. Then you can do them in two passes. That'd be a thing you could do right now. It'd be better for the nodes then allowing the buffer to balloon up uncontrollably. Not great for usability, obviously, but better than nothing.", "created_at": "2017-03-16T16:32:42Z" }, { "body": "can change the configured buffer limit with configure file or curl?", "created_at": "2017-08-31T08:53:11Z" }, { "body": "Big :+1: from me on configurable buffer size. Have been doing a reindex-from-remote of 21million odd documents, where most are tiny, but there are a few big ones peppered in to make my life fun. Have to run a small batch size -- like egyptianbman, constantly trying again with smaller and smaller until I find one that doesn't eventually die -- as well as babysit it to make sure it doesn't stop without me noticing. Less than ideal.", "created_at": "2017-09-10T06:30:06Z" }, { "body": "Would really love it if this was looked into again. The buffer limit of 100mb just seems to be way too low. When reindexing from remote this seems to be my bottleneck for indexing rate. I even increased by node count 3x and still received the same performance because I'm having to use a size of 10 just to get docs indexed. ", "created_at": "2018-07-07T16:01:55Z" }, { "body": "was anything done for this?", "created_at": "2020-01-31T15:55:09Z" }, { "body": "Is there any way to configure this yet? It appears to be the bottleneck for us doing remote reindex as well and thus we are forced to use a very low batch size considering our cluster size.", "created_at": "2020-03-31T07:22:43Z" }, { "body": "Piling on here... I ran into this issue during a migration from ES 2.x to 6.x, and was unable to complete the reindexing process even with a batch size of `1`. I know it's ridiculous to have a document that large in the index, but that's another discussion.\r\n\r\nIf nothing else, I'd suggest adjusting this bit of the docs:\r\n> Reindexing from a remote server uses an on-heap buffer that defaults to a maximum size of 100mb.\r\n\r\nThe word \"defaults\" there is misleading, since there's no way to configure/override that limit. In other words, it is effectively a hard limit, and the documentation should reflect that.", "created_at": "2020-07-10T15:32:01Z" } ], "number": 21185, "title": "Reindex http entity content buffer limit should be configurable" }
{ "body": "It was 10mb and that was causing trouble when folks reindex-from-remoted\r\nwith large documents.\r\n\r\nWe also improve the error reporting so it tells folks to use a smaller\r\nbatch size if they hit a buffer size exception. Finally, adds some docs\r\nto reindex-from-remote mentioning the buffer and giving an example of\r\nlowering the size.\r\n\r\nCloses #21185", "number": 21222, "review_comments": [ { "body": "Exposed for testing.\n", "created_at": "2016-10-31T21:07:22Z" }, { "body": "I'll push a commit with a comment explaining the buffer size in minue.\n", "created_at": "2016-10-31T21:07:54Z" }, { "body": "This was just an oversight that I fixed while I was here.\n", "created_at": "2016-10-31T21:08:10Z" }, { "body": "I'd really like to see either the response type changed to `ByteSizeValue`, or the name changed to `getBufferLimitBytes()` (preferably the first one)\n", "created_at": "2016-10-31T21:49:13Z" }, { "body": "I also think it'd be good to have `HeapBufferedAsyncResponseConsumer` take a `ByteSizeValue` instead of an `int` of bytes, much less chance of misinterpretation.\n", "created_at": "2016-10-31T21:50:02Z" }, { "body": "I really don't think we should capture assertions and rethrow, it leaves us really open to typoing and swallowing test failures because we accidentally forgot to rethrow. I assume your intent here was to give more info if one of them failed? In that case, I think ordering them by most to least information would be better (though optional, I mostly care about removing anything catching asserts):\n\n``` java\nassertThat(e.getMessage(), containsString(\"Remote responded with a chunk that was too large. Use a smaller batch size.\"));\nassertSame(tooLong, e.getCause());\nassertFalse(called.get());\n```\n\nAnd then removing the try/catch altogether.\n", "created_at": "2016-10-31T21:53:19Z" }, { "body": "I think changing it to a `ByteSizeValue` would be a fairly big project because `ByteSizeValue` is in elasticsearch and this is in the client and they don't share code at this point. They _should_ be able to at some point, but they don't at this point. So `getBufferLimitBytes` it is, I think.\n", "created_at": "2016-11-01T12:39:39Z" }, { "body": "I usually don't capture and rethrow but this mechanism lets us get access to the failure stack trace in the suppressed exception. It is a bit hacky but it works. Maybe I'd be better off writing actual `Matcher` subclasses _designed_ for `Exception` subclasses so I can add the toString. I'll remove the rethrow bit and have a think about better debugging for exception matching as a followup.\n", "created_at": "2016-11-01T12:42:18Z" }, { "body": "Ahh I totally missed that this was client only, `getBufferLimitBytes` makes sense\n", "created_at": "2016-11-01T15:29:43Z" } ], "title": "Bump reindex-from-remote's buffer to 200mb" }
{ "commits": [ { "message": "Bump reindex-from-remote's buffer to 200mb\n\nIt was 10mb and that was causing trouble when folks reindex-from-remoted\nwith large documents.\n\nWe also improve the error reporting so it tells folks to use a smaller\nbatch size if they hit a buffer size exception. Finally, adds some docs\nto reindex-from-remote mentioning the buffer and giving an example of\nlowering the size.\n\nCloses #21185" } ], "files": [ { "diff": "@@ -46,15 +46,15 @@ public class HeapBufferedAsyncResponseConsumer extends AbstractAsyncResponseCons\n //default buffer limit is 10MB\n public static final int DEFAULT_BUFFER_LIMIT = 10 * 1024 * 1024;\n \n- private final int bufferLimit;\n+ private final int bufferLimitBytes;\n private volatile HttpResponse response;\n private volatile SimpleInputBuffer buf;\n \n /**\n * Creates a new instance of this consumer with a buffer limit of {@link #DEFAULT_BUFFER_LIMIT}\n */\n public HeapBufferedAsyncResponseConsumer() {\n- this.bufferLimit = DEFAULT_BUFFER_LIMIT;\n+ this.bufferLimitBytes = DEFAULT_BUFFER_LIMIT;\n }\n \n /**\n@@ -64,7 +64,14 @@ public HeapBufferedAsyncResponseConsumer(int bufferLimit) {\n if (bufferLimit <= 0) {\n throw new IllegalArgumentException(\"bufferLimit must be greater than 0\");\n }\n- this.bufferLimit = bufferLimit;\n+ this.bufferLimitBytes = bufferLimit;\n+ }\n+\n+ /**\n+ * Get the limit of the buffer.\n+ */\n+ public int getBufferLimit() {\n+ return bufferLimitBytes;\n }\n \n @Override\n@@ -75,9 +82,9 @@ protected void onResponseReceived(HttpResponse response) throws HttpException, I\n @Override\n protected void onEntityEnclosed(HttpEntity entity, ContentType contentType) throws IOException {\n long len = entity.getContentLength();\n- if (len > bufferLimit) {\n+ if (len > bufferLimitBytes) {\n throw new ContentTooLongException(\"entity content is too long [\" + len +\n- \"] for the configured buffer limit [\" + bufferLimit + \"]\");\n+ \"] for the configured buffer limit [\" + bufferLimitBytes + \"]\");\n }\n if (len < 0) {\n len = 4096;", "filename": "client/rest/src/main/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumer.java", "status": "modified" }, { "diff": "@@ -421,6 +421,42 @@ version.\n To enable queries sent to older versions of Elasticsearch the `query` parameter\n is sent directly to the remote host without validation or modification.\n \n+Reindexing from a remote server uses an on-heap buffer that defaults to a\n+maximum size of 200mb. If the remote index includes very large documents you'll\n+need to use a smaller batch size. 
The example below sets the batch size `10`\n+which is very, very small.\n+\n+[source,js]\n+--------------------------------------------------\n+POST _reindex\n+{\n+ \"source\": {\n+ \"remote\": {\n+ \"host\": \"http://otherhost:9200\",\n+ \"username\": \"user\",\n+ \"password\": \"pass\"\n+ },\n+ \"index\": \"source\",\n+ \"size\": 10,\n+ \"query\": {\n+ \"match\": {\n+ \"test\": \"data\"\n+ }\n+ }\n+ },\n+ \"dest\": {\n+ \"index\": \"dest\"\n+ }\n+}\n+--------------------------------------------------\n+// CONSOLE\n+// TEST[setup:host]\n+// TEST[s/^/PUT source\\n/]\n+// TEST[s/otherhost:9200\",/\\${host}\"/]\n+// TEST[s/\"username\": \"user\",//]\n+// TEST[s/\"password\": \"pass\"//]\n+\n+\n [float]\n === URL Parameters\n ", "filename": "docs/reference/docs/reindex.asciidoc", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.index.reindex.remote;\n \n+import org.apache.http.ContentTooLongException;\n import org.apache.http.HttpEntity;\n import org.apache.http.util.EntityUtils;\n import org.apache.logging.log4j.Logger;\n@@ -29,6 +30,7 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.action.bulk.BackoffPolicy;\n import org.elasticsearch.action.search.SearchRequest;\n+import org.elasticsearch.client.HeapBufferedAsyncResponseConsumer;\n import org.elasticsearch.client.ResponseException;\n import org.elasticsearch.client.ResponseListener;\n import org.elasticsearch.client.RestClient;\n@@ -37,6 +39,8 @@\n import org.elasticsearch.common.ParseFieldMatcherSupplier;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.unit.ByteSizeUnit;\n+import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.common.util.concurrent.ThreadContext;\n@@ -67,6 +71,10 @@\n import static org.elasticsearch.index.reindex.remote.RemoteResponseParsers.RESPONSE_PARSER;\n \n public class RemoteScrollableHitSource extends ScrollableHitSource {\n+ /**\n+ * The maximum size of the remote response to buffer. 200mb because bulks beyond 40mb tend to be slow anyway but 200mb is simply huge.\n+ */\n+ private static final ByteSizeValue BUFFER_LIMIT = new ByteSizeValue(200, ByteSizeUnit.MB);\n private final RestClient client;\n private final BytesReference query;\n private final SearchRequest searchRequest;\n@@ -142,7 +150,8 @@ class RetryHelper extends AbstractRunnable {\n \n @Override\n protected void doRun() throws Exception {\n- client.performRequestAsync(method, uri, params, entity, new ResponseListener() {\n+ HeapBufferedAsyncResponseConsumer consumer = new HeapBufferedAsyncResponseConsumer(BUFFER_LIMIT.bytesAsInt());\n+ client.performRequestAsync(method, uri, params, entity, consumer, new ResponseListener() {\n @Override\n public void onSuccess(org.elasticsearch.client.Response response) {\n // Restore the thread context to get the precious headers\n@@ -184,6 +193,9 @@ public void onFailure(Exception e) {\n }\n e = wrapExceptionToPreserveStatus(re.getResponse().getStatusLine().getStatusCode(),\n re.getResponse().getEntity(), re);\n+ } else if (e instanceof ContentTooLongException) {\n+ e = new IllegalArgumentException(\n+ \"Remote responded with a chunk that was too large. 
Use a smaller batch size.\", e);\n }\n fail.accept(e);\n }", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/remote/RemoteScrollableHitSource.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.index.reindex.remote;\n \n+import org.apache.http.ContentTooLongException;\n import org.apache.http.HttpEntity;\n import org.apache.http.HttpEntityEnclosingRequest;\n import org.apache.http.HttpHost;\n@@ -39,10 +40,13 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.action.bulk.BackoffPolicy;\n import org.elasticsearch.action.search.SearchRequest;\n+import org.elasticsearch.client.HeapBufferedAsyncResponseConsumer;\n import org.elasticsearch.client.RestClient;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.io.Streams;\n+import org.elasticsearch.common.unit.ByteSizeUnit;\n+import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.index.reindex.ScrollableHitSource.Response;\n@@ -76,7 +80,7 @@\n import static org.mockito.Mockito.when;\n \n public class RemoteScrollableHitSourceTests extends ESTestCase {\n- private final String FAKE_SCROLL_ID = \"DnF1ZXJ5VGhlbkZldGNoBQAAAfakescroll\";\n+ private static final String FAKE_SCROLL_ID = \"DnF1ZXJ5VGhlbkZldGNoBQAAAfakescroll\";\n private int retries;\n private ThreadPool threadPool;\n private SearchRequest searchRequest;\n@@ -429,6 +433,39 @@ public void testWrapExceptionToPreserveStatus() throws IOException {\n assertEquals(badEntityException, wrapped.getSuppressed()[0]);\n }\n \n+ @SuppressWarnings({ \"unchecked\", \"rawtypes\" })\n+ public void testTooLargeResponse() throws Exception {\n+ ContentTooLongException tooLong = new ContentTooLongException(\"too long!\");\n+ CloseableHttpAsyncClient httpClient = mock(CloseableHttpAsyncClient.class);\n+ when(httpClient.<HttpResponse>execute(any(HttpAsyncRequestProducer.class), any(HttpAsyncResponseConsumer.class),\n+ any(FutureCallback.class))).then(new Answer<Future<HttpResponse>>() {\n+ @Override\n+ public Future<HttpResponse> answer(InvocationOnMock invocationOnMock) throws Throwable {\n+ HeapBufferedAsyncResponseConsumer consumer = (HeapBufferedAsyncResponseConsumer) invocationOnMock.getArguments()[1];\n+ FutureCallback callback = (FutureCallback) invocationOnMock.getArguments()[2];\n+\n+ assertEquals(new ByteSizeValue(200, ByteSizeUnit.MB).bytesAsInt(), consumer.getBufferLimit());\n+ callback.failed(tooLong);\n+ return null;\n+ }\n+ });\n+ RemoteScrollableHitSource source = sourceWithMockedClient(true, httpClient);\n+\n+ AtomicBoolean called = new AtomicBoolean();\n+ Consumer<Response> checkResponse = r -> called.set(true);\n+ Throwable e = expectThrows(RuntimeException.class,\n+ () -> source.doStartNextScroll(FAKE_SCROLL_ID, timeValueMillis(0), checkResponse));\n+ // Unwrap the some artifacts from the test\n+ while (e.getMessage().equals(\"failed\")) {\n+ e = e.getCause();\n+ }\n+ // This next exception is what the user sees\n+ assertEquals(\"Remote responded with a chunk that was too large. Use a smaller batch size.\", e.getMessage());\n+ // And that exception is reported as being caused by the underlying exception returned by the client\n+ assertSame(tooLong, e.getCause());\n+ assertFalse(called.get());\n+ }\n+\n private RemoteScrollableHitSource sourceWithMockedRemoteCall(String... 
paths) throws Exception {\n return sourceWithMockedRemoteCall(true, paths);\n }\n@@ -482,7 +519,11 @@ public Future<HttpResponse> answer(InvocationOnMock invocationOnMock) throws Thr\n return null;\n }\n });\n+ return sourceWithMockedClient(mockRemoteVersion, httpClient);\n+ }\n \n+ private RemoteScrollableHitSource sourceWithMockedClient(boolean mockRemoteVersion, CloseableHttpAsyncClient httpClient)\n+ throws Exception {\n HttpAsyncClientBuilder clientBuilder = mock(HttpAsyncClientBuilder.class);\n when(clientBuilder.build()).thenReturn(httpClient);\n ", "filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/remote/RemoteScrollableHitSourceTests.java", "status": "modified" } ] }
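
The diff above hard-wires a 200mb `HeapBufferedAsyncResponseConsumer` into reindex-from-remote and passes it to `performRequestAsync`. As a hedged illustration of that same low-level REST client pattern outside of reindex, the sketch below builds a consumer with an explicit byte limit instead of relying on the client's 10mb default and hands it to an async request. The host, endpoint, request body, and limit are made-up values for the example, and the method arity mirrors the client version used in the diff; it is a sketch, not the reindex implementation itself.

```java
import java.util.Collections;
import java.util.Map;

import org.apache.http.HttpEntity;
import org.apache.http.HttpHost;
import org.apache.http.entity.ContentType;
import org.apache.http.nio.entity.NStringEntity;
import org.elasticsearch.client.HeapBufferedAsyncResponseConsumer;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseListener;
import org.elasticsearch.client.RestClient;

public class BufferedAsyncRequestExample {
    public static void main(String[] args) {
        // Hypothetical host, chosen only for illustration.
        RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build();

        // Same pattern as the diff above: cap the on-heap response buffer explicitly.
        int bufferLimitBytes = 200 * 1024 * 1024;
        HeapBufferedAsyncResponseConsumer consumer = new HeapBufferedAsyncResponseConsumer(bufferLimitBytes);

        // Hypothetical search request with a small batch size, as suggested in the issue comments.
        HttpEntity body = new NStringEntity("{\"size\": 100}", ContentType.APPLICATION_JSON);
        Map<String, String> params = Collections.singletonMap("scroll", "5m");

        client.performRequestAsync("POST", "/source/_search", params, body, consumer, new ResponseListener() {
            @Override
            public void onSuccess(Response response) {
                // The entire entity is already buffered on heap at this point.
                System.out.println(response.getStatusLine());
            }

            @Override
            public void onFailure(Exception exception) {
                // A ContentTooLongException here means the response exceeded bufferLimitBytes.
                exception.printStackTrace();
            }
        });
        // client.close() omitted for brevity; real code should close the client after the response arrives.
    }
}
```

The design choice the diff makes is to surface the too-large case as a clear `IllegalArgumentException` telling the user to lower the batch size, rather than letting the raw `ContentTooLongException` escape.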
{ "body": "Closes #20992 \n\nMove the delete to `finally` block to ensure the file will be deleted when there is a left over temp file from a crash. \n", "comments": [ { "body": "test this please\n", "created_at": "2016-10-19T01:41:16Z" }, { "body": "The Jenkins build seems contains many other commits?\n", "created_at": "2016-10-19T06:03:00Z" }, { "body": "test this please\n", "created_at": "2016-10-23T19:06:29Z" }, { "body": "The tests are successful on CI - https://elasticsearch-ci.elastic.co/job/elastic-elasticsearch-pull-request/582/\n\nSent from my iPhone\n\n> On 24 Oct 2016, at 3:07 AM, Boaz Leskes notifications@github.com wrote:\n> \n> test this please\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "created_at": "2016-10-25T18:18:02Z" }, { "body": "@bleskes this change is doing exactly the same thing it was doing before. It only deletes the file if the creation was successful. I think we should revert it and make it try to delete the file even if we run into an exception this can possibly prevent the issue but if the system crashes in between there is nothing we can do about it. Either way this change is only confusing.\n", "created_at": "2016-10-26T15:30:48Z" }, { "body": "@s1monw bleh. I have no idea what I was thinking. What I was going for is not hide the original exception from the file creation. I'll fix.\n", "created_at": "2016-10-26T15:43:02Z" }, { "body": "I suggest if we detect the file already exists, try to delete it first then create it again, then delete it in the finally block.\n", "created_at": "2016-10-27T15:34:23Z" } ], "number": 21007, "title": ".es_temp_file remains after system crash, causing it not to start again" }
{ "body": "When ES starts up we verify we can write to all data files by creating and deleting a temp file called `.es_temp_file`. If for some reason the file was successfully created but not successfully deleted, we still shut down correctly but subsequent start attempts will fail with a file already exists exception.\r\n\r\nThis PR makes sure to first clean any existing `.es_temp_file`\r\n\r\nSuperseeds #21007", "number": 21210, "review_comments": [ { "body": "maybe don't change the name if you want to delete it first\n", "created_at": "2016-11-01T10:18:39Z" }, { "body": "maybe move all of the non-write operations out of the try block then the exception makes sense again\n", "created_at": "2016-11-01T10:20:05Z" }, { "body": "if we do this, then we can also just do `if (Files.exists(src))` here?\n", "created_at": "2016-11-01T10:23:52Z" } ], "title": "Node should start up despite of a lingering `.es_temp_file`" }
{ "commits": [ { "message": "Node should start up despite of a lingering `.es_temp_file`\n\nWhen ES starts up we verify we can write to all data files by creating and deleting a temp file called `.es_temp_file`. If for some reason the file was successfully created but not successfully deleted, we still shut down correctly but subsequent start attempts will fail with a file already exists exception.\n\n This commit makes sure to first clean any existing `.es_temp_file`" }, { "message": "also clean up existing temp files for atomic move support checks" }, { "message": "Merge remote-tracking branch 'upstream/master' into node_cleanup_temp" }, { "message": "fix compilation" }, { "message": "fix NodeEnvironmentEvilTests" } ], "files": [ { "diff": "@@ -901,11 +901,12 @@ public void ensureAtomicMoveSupported() throws IOException {\n final NodePath[] nodePaths = nodePaths();\n for (NodePath nodePath : nodePaths) {\n assert Files.isDirectory(nodePath.path) : nodePath.path + \" is not a directory\";\n- final Path src = nodePath.path.resolve(\"__es__.tmp\");\n- final Path target = nodePath.path.resolve(\"__es__.final\");\n+ final Path src = nodePath.path.resolve(TEMP_FILE_NAME + \".tmp\");\n+ final Path target = nodePath.path.resolve(TEMP_FILE_NAME + \".final\");\n try {\n+ Files.deleteIfExists(src);\n Files.createFile(src);\n- Files.move(src, target, StandardCopyOption.ATOMIC_MOVE);\n+ Files.move(src, target, StandardCopyOption.ATOMIC_MOVE, StandardCopyOption.REPLACE_EXISTING);\n } catch (AtomicMoveNotSupportedException ex) {\n throw new IllegalStateException(\"atomic_move is not supported by the filesystem on path [\"\n + nodePath.path\n@@ -1005,19 +1006,19 @@ private void assertCanWrite() throws IOException {\n }\n }\n \n+ // package private for testing\n+ static final String TEMP_FILE_NAME = \".es_temp_file\";\n+\n private static void tryWriteTempFile(Path path) throws IOException {\n if (Files.exists(path)) {\n- Path resolve = path.resolve(\".es_temp_file\");\n- boolean tempFileCreated = false;\n+ Path resolve = path.resolve(TEMP_FILE_NAME);\n try {\n+ // delete any lingering file from a previous failure\n+ Files.deleteIfExists(resolve);\n Files.createFile(resolve);\n- tempFileCreated = true;\n+ Files.delete(resolve);\n } catch (IOException ex) {\n- throw new IOException(\"failed to write in data directory [\" + path + \"] write permission is required\", ex);\n- } finally {\n- if (tempFileCreated) {\n- Files.deleteIfExists(resolve);\n- }\n+ throw new IOException(\"failed to test writes in data directory [\" + path + \"] write permission is required\", ex);\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/env/NodeEnvironment.java", "status": "modified" }, { "diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.test.IndexSettingsModule;\n \n import java.io.IOException;\n+import java.nio.file.AtomicMoveNotSupportedException;\n import java.nio.file.Files;\n import java.nio.file.Path;\n import java.util.ArrayList;\n@@ -416,6 +417,39 @@ public void testPersistentNodeId() throws IOException {\n env.close();\n }\n \n+ public void testExistingTempFiles() throws IOException {\n+ String[] paths = tmpPaths();\n+ // simulate some previous left over temp files\n+ for (String path : randomSubsetOf(randomIntBetween(1, paths.length), paths)) {\n+ final Path nodePath = NodeEnvironment.resolveNodePath(PathUtils.get(path), 0);\n+ Files.createDirectories(nodePath);\n+ Files.createFile(nodePath.resolve(NodeEnvironment.TEMP_FILE_NAME));\n+ if (randomBoolean()) {\n+ 
Files.createFile(nodePath.resolve(NodeEnvironment.TEMP_FILE_NAME + \".tmp\"));\n+ }\n+ if (randomBoolean()) {\n+ Files.createFile(nodePath.resolve(NodeEnvironment.TEMP_FILE_NAME + \".final\"));\n+ }\n+ }\n+ NodeEnvironment env = newNodeEnvironment(paths, Settings.EMPTY);\n+ try {\n+ env.ensureAtomicMoveSupported();\n+ } catch (AtomicMoveNotSupportedException e) {\n+ // that's OK :)\n+ }\n+ env.close();\n+ // check we clean up\n+ for (String path: paths) {\n+ final Path nodePath = NodeEnvironment.resolveNodePath(PathUtils.get(path), 0);\n+ final Path tempFile = nodePath.resolve(NodeEnvironment.TEMP_FILE_NAME);\n+ assertFalse(tempFile + \" should have been cleaned\", Files.exists(tempFile));\n+ final Path srcTempFile = nodePath.resolve(NodeEnvironment.TEMP_FILE_NAME + \".src\");\n+ assertFalse(srcTempFile + \" should have been cleaned\", Files.exists(srcTempFile));\n+ final Path targetTempFile = nodePath.resolve(NodeEnvironment.TEMP_FILE_NAME + \".target\");\n+ assertFalse(targetTempFile + \" should have been cleaned\", Files.exists(targetTempFile));\n+ }\n+ }\n+\n /** Converts an array of Strings to an array of Paths, adding an additional child if specified */\n private Path[] stringsToPaths(String[] strings, String additional) {\n Path[] locations = new Path[strings.length];", "filename": "core/src/test/java/org/elasticsearch/env/NodeEnvironmentTests.java", "status": "modified" }, { "diff": "@@ -30,7 +30,6 @@\n import java.nio.file.attribute.PosixFileAttributeView;\n import java.nio.file.attribute.PosixFilePermission;\n import java.util.Arrays;\n-import java.util.Collections;\n import java.util.HashSet;\n \n public class NodeEnvironmentEvilTests extends ESTestCase {\n@@ -75,7 +74,7 @@ public void testMissingWritePermissionOnIndex() throws IOException {\n IOException ioException = expectThrows(IOException.class, () -> {\n new NodeEnvironment(build, new Environment(build));\n });\n- assertTrue(ioException.getMessage(), ioException.getMessage().startsWith(\"failed to write in data directory\"));\n+ assertTrue(ioException.getMessage(), ioException.getMessage().startsWith(\"failed to test writes in data directory\"));\n }\n }\n \n@@ -100,7 +99,7 @@ public void testMissingWritePermissionOnShard() throws IOException {\n IOException ioException = expectThrows(IOException.class, () -> {\n new NodeEnvironment(build, new Environment(build));\n });\n- assertTrue(ioException.getMessage(), ioException.getMessage().startsWith(\"failed to write in data directory\"));\n+ assertTrue(ioException.getMessage(), ioException.getMessage().startsWith(\"failed to test writes in data directory\"));\n }\n }\n }", "filename": "qa/evil-tests/src/test/java/org/elasticsearch/env/NodeEnvironmentEvilTests.java", "status": "modified" } ] }
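
The heart of the change above is "delete any leftover marker first, then prove the directory is writable by creating and deleting it". The standalone sketch below reproduces that pattern with plain `java.nio.file` calls so it can be read outside the `NodeEnvironment` class; the class name, the example directory, and the `main` method are placeholders for the illustration, while the marker name and the delete-create-delete sequence follow the diff.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WriteCheckExample {

    // Same marker name as NodeEnvironment.TEMP_FILE_NAME in the diff above.
    private static final String TEMP_FILE_NAME = ".es_temp_file";

    /**
     * Verifies that {@code path} is writable by creating and deleting a marker file.
     * Any marker left behind by a previous crash is removed first, so a stale file
     * can no longer block startup.
     */
    static void tryWriteTempFile(Path path) throws IOException {
        if (Files.exists(path)) {
            Path marker = path.resolve(TEMP_FILE_NAME);
            try {
                Files.deleteIfExists(marker); // clean up a lingering file from a previous failure
                Files.createFile(marker);
                Files.delete(marker);
            } catch (IOException ex) {
                throw new IOException("failed to test writes in data directory [" + path + "] write permission is required", ex);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Placeholder directory used only for the example.
        Path dataDir = Paths.get(System.getProperty("java.io.tmpdir"));
        tryWriteTempFile(dataDir);
        System.out.println("write check passed for " + dataDir);
    }
}
```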
{ "body": "**Elasticsearch version**: latest master (2b43c6db3a5bdd822e14b091925bae609b4e286e)\n\n**Plugins installed**: []\n\n**Description of the problem including expected versus actual behavior**: When I run \n\n`elsticsearch-plugin list` \n\nwith no plugin installed I would expect the list returned is empty. Instead I get:\n\n```\n$ ./elasticsearch-5.0.0-alpha6-SNAPSHOT/bin/elasticsearch-plugin list\nException in thread \"main\" java.io.IOException: Plugins directory missing: /home/britta/es-token-plugin/src/test/resources/r-demo/elasticsearch-5.0.0-alpha6-SNAPSHOT/plugins\n at org.elasticsearch.plugins.ListPluginsCommand.execute(ListPluginsCommand.java:51)\n at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:88)\n at org.elasticsearch.cli.MultiCommand.execute(MultiCommand.java:69)\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:88)\n at org.elasticsearch.cli.Command.main(Command.java:54)\n at org.elasticsearch.plugins.PluginCli.main(PluginCli.java:57)\n\n```\n", "comments": [ { "body": "That's indeed happening with a new installation of elasticsearch when elasticsearch has never been launched.\nOnce elasticsearch is launched, it creates the missing `plugins` dir.\n", "created_at": "2016-09-06T15:14:03Z" }, { "body": "Looking at the code it was intentional to fail instead of reporting an empty list and eventually log the fact that the dir is missing.\n@rjernst Do you remember this?\n", "created_at": "2016-09-06T15:18:02Z" }, { "body": "I remember there is leniency for the dir not existing in the plugin manager because, IIRC, of tests not explicitly creating the plugins dir. I think we should add that and keep this a hard failure? All the packages should already have a plugins dir created right?\n", "created_at": "2016-09-06T18:45:45Z" }, { "body": "Perhaps it is also the zip/tar.gz that don't have an empty plugins dir (seems that might be the case here). I think we can add that so all the packages don't get this failure.\n", "created_at": "2016-09-06T18:47:04Z" }, { "body": "Indeed. If doable, I'm +1 to have plugins dir in our archives. \n", "created_at": "2016-09-06T20:37:43Z" }, { "body": "+1 to always having a plugins dir.\n", "created_at": "2016-10-31T13:59:53Z" } ], "number": 20342, "title": "elsticsearch-plugin list results in error message if there are no plugins installed" }
{ "body": "Today when installing Elasticsearch from an archive distribution (tar.gz\r\nor zip), an empty plugins folder is not included. This means that if you\r\ninstall Elasticsearch and immediately run elasticsearch-plugin list, you\r\nwill receive an error message about the plugins directory missing. While\r\nthe plugins directory would be created when starting Elasticsearch for\r\nthe first time, it would be better to just include an empty plugins\r\ndirectory in the archive distributions. This commit makes this the\r\ncase. Note that the package distributions already include an empty\r\nplugins folder.\r\n\r\nCloses #20342", "number": 21204, "review_comments": [ { "body": "s/runs/runs without any plugins installed/\n", "created_at": "2016-11-01T18:31:48Z" } ], "title": "Add empty plugins dir for archive distributions" }
{ "commits": [ { "message": "Add empty plugins dir for archive distributions\n\nToday when installing Elasticsearch from an archive distribution (tar.gz\nor zip), an empty plugins folder is not included. This means that if you\ninstall Elasticsearch and immediately run elasticsearch-plugin list, you\nwill receive an error message about the plugins directory missing. While\nthe plugins directory would be created when starting Elasticsearch for\nthe first time, it would be better to just include an empty plugins\ndirectory in the archive distributions. This commit makes this the\ncase. Note that the package distributions already include an empty\nplugins folder." }, { "message": "Add test that listing plugins works after install\n\nThis commit adds a test that listing plugins does not fail with an\nexception after installing Elasticsearch but before running\nElasticsearch for the first time." }, { "message": "Change name of plugin list test\n\nThis commit changes the name of the plugin list packaging test so that\nthe intent is obvious." } ], "files": [ { "diff": "@@ -214,6 +214,17 @@ configure(subprojects.findAll { ['zip', 'tar', 'integ-test-zip'].contains(it.nam\n MavenFilteringHack.filter(it, expansions)\n }\n }\n+ into('') {\n+ // CopySpec does not make it easy to create an empty directory\n+ // so we create the directory that we want, and then point\n+ // CopySpec to its parent to copy to the root of the\n+ // distribution\n+ File plugins = new File(buildDir, 'plugins-hack/plugins')\n+ plugins.mkdirs()\n+ from {\n+ plugins.getParent()\n+ }\n+ }\n with commonFiles\n from('../src/main/resources') {\n include 'bin/*.exe'", "filename": "distribution/build.gradle", "status": "modified" }, { "diff": "@@ -73,6 +73,13 @@ setup() {\n verify_archive_installation\n }\n \n+@test \"[TAR] verify elasticsearch-plugin list runs without any plugins installed\" {\n+ # previously this would fail because the archive installations did\n+ # not create an empty plugins directory\n+ local plugins_list=`$ESHOME/bin/elasticsearch-plugin list`\n+ [[ -z $plugins_list ]]\n+}\n+\n @test \"[TAR] elasticsearch fails if java executable is not found\" {\n local JAVA=$(which java)\n ", "filename": "qa/vagrant/src/test/resources/packaging/scripts/20_tar_package.bats", "status": "modified" }, { "diff": "@@ -74,6 +74,11 @@ setup() {\n verify_package_installation\n }\n \n+@test \"[DEB] verify elasticsearch-plugin list runs without any plugins installed\" {\n+ local plugins_list=`$ESHOME/bin/elasticsearch-plugin list`\n+ [[ -z $plugins_list ]]\n+}\n+\n @test \"[DEB] elasticsearch isn't started by package install\" {\n # Wait a second to give Elasticsearch a change to start if it is going to.\n # This isn't perfect by any means but its something.", "filename": "qa/vagrant/src/test/resources/packaging/scripts/30_deb_package.bats", "status": "modified" }, { "diff": "@@ -73,6 +73,11 @@ setup() {\n verify_package_installation\n }\n \n+@test \"[RPM] verify elasticsearch-plugin list runs without any plugins installed\" {\n+ local plugins_list=`$ESHOME/bin/elasticsearch-plugin list`\n+ [[ -z $plugins_list ]]\n+}\n+\n @test \"[RPM] elasticsearch isn't started by package install\" {\n # Wait a second to give Elasticsearch a change to start if it is going to.\n # This isn't perfect by any means but its something.", "filename": "qa/vagrant/src/test/resources/packaging/scripts/40_rpm_package.bats", "status": "modified" }, { "diff": "@@ -89,6 +89,7 @@ verify_archive_installation() {\n assert_file \"$ESCONFIG/elasticsearch.yml\" f 
elasticsearch elasticsearch 660\n assert_file \"$ESCONFIG/jvm.options\" f elasticsearch elasticsearch 660\n assert_file \"$ESCONFIG/log4j2.properties\" f elasticsearch elasticsearch 660\n+ assert_file \"$ESPLUGINS\" d elasticsearch elasticsearch 755\n assert_file \"$ESHOME/lib\" d elasticsearch elasticsearch 755\n assert_file \"$ESHOME/NOTICE.txt\" f elasticsearch elasticsearch 644\n assert_file \"$ESHOME/LICENSE.txt\" f elasticsearch elasticsearch 644", "filename": "qa/vagrant/src/test/resources/packaging/scripts/tar.bash", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 5.0.0-rc1\n\n**Plugins installed**: [none]\n\n**JVM version**:\n\n```\njava version \"1.8.0_111\"\nJava(TM) SE Runtime Environment (build 1.8.0_111-b14)\nJava HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)\n```\n\n**OS version**: Debian 8\n\n**Description of the problem including expected versus actual behavior**:\n\nCreating an example index with its mapping:\n\n```\nPUT twitter \n{\n \"mappings\": {\n \"tweet\": {\n \"properties\": {\n \"message\": {\n \"type\": \"text\",\n \"index\": \"not_analyzed\"\n }\n }\n }\n }\n}\n```\n\nEverything went fine:\n\n```\n{\n \"acknowledged\": true\n}\n```\n\nBut now, let's get the actual mapping with `GET twitter/_mapping/tweet`:\n\n```\n{\n \"twitter\": {\n \"mappings\": {\n \"tweet\": {\n \"properties\": {\n \"message\": {\n \"type\": \"text\"\n }\n }\n }\n }\n }\n}\n```\n\nThe `message` field is still analyzed. Why?\n", "comments": [ { "body": "Hi @olivierlambert it is unfortunate that we accept this mapping without barfing. `text` means analyzed text in elasticsearch 5.0. `index` became a [boolean property](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-index.html), that is why setting it to `not_analyzed` doesn't have any effect. If you want to have a non analyzed field, you need to use [`\"type\": \"keyword\"`](https://www.elastic.co/guide/en/elasticsearch/reference/current/keyword.html).\n", "created_at": "2016-10-26T20:28:27Z" }, { "body": "This looks like a side-effect of the fact that we wanted to make the migration easier by interpreting `analyzed` and `not_analyzed` as `index: true`, and `no` as `index: false`. For the new `text` and `keyword` types, this looks more error-prone than helpful however.\n", "created_at": "2016-10-27T07:07:05Z" }, { "body": "Thanks lads for your input :) I let you decide what to do with this issue :+1: \n", "created_at": "2016-10-27T07:24:37Z" }, { "body": "I digged this with @johtani and templates are relying on this behaviour so that templates created on 2.x keep working in 5.0, even if they generate mappings for string fields. Making mappings strict while keeping the feature in templates seems hard so we are leaning towards keeping things as they are today.\n", "created_at": "2016-10-31T13:32:01Z" }, { "body": "I am also facing the same issue. My index is already created. so how to change that field from text type to keyword type.", "created_at": "2018-05-11T10:43:54Z" }, { "body": "I am also facing the same issue . @prashuMishra Do you have a solution ?", "created_at": "2019-06-04T09:15:53Z" }, { "body": "This can't be changed on an existing index, you would need to create a new index with correct mappings, and reindex to it, possibly via the reindex API.", "created_at": "2019-06-05T12:57:23Z" } ], "number": 21134, "title": "Mapping with Index not_analyzed is not working" }
{ "body": "Closes #21134\n", "number": 21175, "review_comments": [], "title": "Reject legacy `index` values for `text` and `keyword`." }
{ "commits": [ { "message": "Reject legacy `index` values for `text` and `keyword`.\n\nCloses #21134" } ], "files": [ { "diff": "@@ -138,6 +138,23 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n return new StringFieldMapper.TypeParser().parse(name, node, parserContext);\n }\n KeywordFieldMapper.Builder builder = new KeywordFieldMapper.Builder(name);\n+\n+ // parse the index property explicitly, otherwise we fall back to the default impl that still accepts\n+ // analyzed and not_analyzed, which does not make sense for keyword fields\n+ Object index = node.remove(\"index\");\n+ if (index != null) {\n+ switch (index.toString()) {\n+ case \"true\":\n+ builder.index(true);\n+ break;\n+ case \"false\":\n+ builder.index(false);\n+ break;\n+ default:\n+ throw new IllegalArgumentException(\"Can't parse [index] value [\" + index + \"] for field [\" + name + \"], expected [true] or [false]\");\n+ }\n+ }\n+\n parseField(builder, name, node, parserContext);\n for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {\n Map.Entry<String, Object> entry = iterator.next();", "filename": "core/src/main/java/org/elasticsearch/index/mapper/KeywordFieldMapper.java", "status": "modified" }, { "diff": "@@ -181,6 +181,23 @@ public Mapper.Builder parse(String fieldName, Map<String, Object> node, ParserCo\n builder.fieldType().setIndexAnalyzer(parserContext.getIndexAnalyzers().getDefaultIndexAnalyzer());\n builder.fieldType().setSearchAnalyzer(parserContext.getIndexAnalyzers().getDefaultSearchAnalyzer());\n builder.fieldType().setSearchQuoteAnalyzer(parserContext.getIndexAnalyzers().getDefaultSearchQuoteAnalyzer());\n+\n+ // parse the index property explicitly, otherwise we fall back to the default impl that still accepts\n+ // analyzed and not_analyzed, which does not make sense for text fields\n+ Object index = node.remove(\"index\");\n+ if (index != null) {\n+ switch (index.toString()) {\n+ case \"true\":\n+ builder.index(true);\n+ break;\n+ case \"false\":\n+ builder.index(false);\n+ break;\n+ default:\n+ throw new IllegalArgumentException(\"Can't parse [index] value [\" + index + \"] for field [\" + fieldName + \"], expected [true] or [false]\");\n+ }\n+ }\n+\n parseTextField(builder, fieldName, node, parserContext);\n for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {\n Map.Entry<String, Object> entry = iterator.next();", "filename": "core/src/main/java/org/elasticsearch/index/mapper/TextFieldMapper.java", "status": "modified" }, { "diff": "@@ -354,4 +354,21 @@ public void testEmptyName() throws IOException {\n .endObject().endObject().string();\n assertEquals(downgradedMapping, defaultMapper.mappingSource().string());\n }\n+\n+ public void testRejectLegacyIndexValues() throws IOException {\n+ for (String index : new String[] {\"no\", \"not_analyzed\", \"analyzed\"}) {\n+ String mapping = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"keyword\")\n+ .field(\"index\", index)\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n+ () -> parser.parse(\"type\", new CompressedXContent(mapping)));\n+ assertThat(e.getMessage(), containsString(\"Can't parse [index] value [\" + index + \"] for field [foo], expected [true] or [false]\"));\n+ }\n+ }\n }", "filename": 
"core/src/test/java/org/elasticsearch/index/mapper/KeywordFieldMapperTests.java", "status": "modified" }, { "diff": "@@ -592,4 +592,21 @@ public void testEmptyName() throws IOException {\n .endObject().endObject().string();\n assertEquals(downgradedMapping, defaultMapper.mappingSource().string());\n }\n+\n+ public void testRejectLegacyIndexValues() throws IOException {\n+ for (String index : new String[] {\"no\", \"not_analyzed\", \"analyzed\"}) {\n+ String mapping = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"text\")\n+ .field(\"index\", index)\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n+ () -> parser.parse(\"type\", new CompressedXContent(mapping)));\n+ assertThat(e.getMessage(), containsString(\"Can't parse [index] value [\" + index + \"] for field [foo], expected [true] or [false]\"));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/TextFieldMapperTests.java", "status": "modified" } ] }
{ "body": "The integration test in qa/backwards-5.0 tries to start a node on the current version and BWC version. When setting this to 5.0.0, the test suite does not correctly start a node on 5.0.0 and a node on 5.0.1-SNAPSHOT; instead it starts two nodes on 5.0.0.\n", "comments": [ { "body": "Closed by #21145\n", "created_at": "2016-10-27T11:45:11Z" } ], "number": 21142, "title": "Backwards tests do not start BWC nodes with the right version" }
{ "body": "This fixes our cluster formation task to run REST tests against a mixed version cluster.\nYet, due to some limitations in our test framework `indices.rollover` tests are currently\ndisabled for the BWC case since they select the current master as the merge node which\nhappens to be a BWC node and we can't relocate all shards to it since the primaries are on\na higher version node. This will be fixed in a followup.\n\nCloses #21142\n", "number": 21145, "review_comments": [ { "body": "Aha, there it is!\n", "created_at": "2016-10-27T11:33:14Z" }, { "body": ";)\n", "created_at": "2016-10-27T11:38:34Z" } ], "title": "Fix bwc cluster formation in order to run BWC tests against a mixed version cluster" }
{ "commits": [ { "message": "Fix bwc cluster formation in order to run BWC tests against a mixed version cluster\n\nThis fixes our cluster formation task to run REST tests against a mixed version cluster.\nYet, due to some limitations in our test framework `indices.rollover` tests are currently\ndisabled for the BWC case since they select the current master as the merge node which\nhappens to be a BWC node and we can't relocate all shards to it since the primaries are on\na higher version node. This will be fixed in a followup.\n\nCloses #21142" } ], "files": [ { "diff": "@@ -73,8 +73,8 @@ class ClusterFormationTasks {\n }\n // this is our current version distribution configuration we use for all kinds of REST tests etc.\n String distroConfigName = \"${task.name}_elasticsearchDistro\"\n- Configuration distro = project.configurations.create(distroConfigName)\n- configureDistributionDependency(project, config.distribution, distro, VersionProperties.elasticsearch)\n+ Configuration currentDistro = project.configurations.create(distroConfigName)\n+ configureDistributionDependency(project, config.distribution, currentDistro, VersionProperties.elasticsearch)\n if (config.bwcVersion != null && config.numBwcNodes > 0) {\n // if we have a cluster that has a BWC cluster we also need to configure a dependency on the BWC version\n // this version uses the same distribution etc. and only differs in the version we depend on.\n@@ -85,11 +85,11 @@ class ClusterFormationTasks {\n }\n configureDistributionDependency(project, config.distribution, project.configurations.elasticsearchBwcDistro, config.bwcVersion)\n }\n-\n- for (int i = 0; i < config.numNodes; ++i) {\n+ for (int i = 0; i < config.numNodes; i++) {\n // we start N nodes and out of these N nodes there might be M bwc nodes.\n // for each of those nodes we might have a different configuratioon\n String elasticsearchVersion = VersionProperties.elasticsearch\n+ Configuration distro = currentDistro\n if (i < config.numBwcNodes) {\n elasticsearchVersion = config.bwcVersion\n distro = project.configurations.elasticsearchBwcDistro\n@@ -252,7 +252,7 @@ class ClusterFormationTasks {\n 'path.repo' : \"${node.sharedDir}/repo\",\n 'path.shared_data' : \"${node.sharedDir}/\",\n // Define a node attribute so we can test that it exists\n- 'node.attr.testattr' : 'test',\n+ 'node.attr.testattr' : 'test',\n 'repositories.url.allowed_urls': 'http://snapshot.test*'\n ]\n esConfig['node.max_local_storage_nodes'] = node.config.numNodes", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy", "status": "modified" }, { "diff": "@@ -18,6 +18,6 @@ integTest {\n cluster {\n numNodes = 2\n numBwcNodes = 1\n- bwcVersion = \"5.0.1-SNAPSHOT\"\n+ bwcVersion = \"5.0.0\"\n }\n }", "filename": "qa/backwards-5.0/build.gradle", "status": "modified" }, { "diff": "@@ -1,5 +1,9 @@\n ---\n-\"Force version\":\n+\"Force Version\":\n+\n+ - skip:\n+ version: \" - 5.0.0\"\n+ reason: headers were introduced in 5.0.1\n \n - do:\n warnings:", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/delete/27_force_version.yaml", "status": "modified" }, { "diff": "@@ -87,35 +87,3 @@\n version: 1\n version_type: external_gte\n \n- - do:\n- warnings:\n- - version type FORCE is deprecated and will be removed in the next major version\n- get:\n- index: test_1\n- type: test\n- id: 1\n- version: 2\n- version_type: force\n- - match: { _id: \"1\" }\n-\n- - do:\n- warnings:\n- - version type FORCE is deprecated and will be removed in the next major 
version\n- get:\n- index: test_1\n- type: test\n- id: 1\n- version: 10\n- version_type: force\n- - match: { _id: \"1\" }\n-\n- - do:\n- warnings:\n- - version type FORCE is deprecated and will be removed in the next major version\n- get:\n- index: test_1\n- type: test\n- id: 1\n- version: 1\n- version_type: force\n- - match: { _id: \"1\" }", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/get/90_versions.yaml", "status": "modified" }, { "diff": "@@ -0,0 +1,53 @@\n+---\n+\"Force Versions\":\n+ - skip:\n+ version: \" - 5.0.0\"\n+ reason: headers were introduced in 5.0.1\n+ - do:\n+ index:\n+ index: test_1\n+ type: test\n+ id: 1\n+ body: { foo: bar }\n+ - match: { _version: 1}\n+\n+ - do:\n+ index:\n+ index: test_1\n+ type: test\n+ id: 1\n+ body: { foo: bar }\n+ - match: { _version: 2}\n+\n+ - do:\n+ warnings:\n+ - version type FORCE is deprecated and will be removed in the next major version\n+ get:\n+ index: test_1\n+ type: test\n+ id: 1\n+ version: 2\n+ version_type: force\n+ - match: { _id: \"1\" }\n+\n+ - do:\n+ warnings:\n+ - version type FORCE is deprecated and will be removed in the next major version\n+ get:\n+ index: test_1\n+ type: test\n+ id: 1\n+ version: 10\n+ version_type: force\n+ - match: { _id: \"1\" }\n+\n+ - do:\n+ warnings:\n+ - version type FORCE is deprecated and will be removed in the next major version\n+ get:\n+ index: test_1\n+ type: test\n+ id: 1\n+ version: 1\n+ version_type: force\n+ - match: { _id: \"1\" }", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/get/95_force_versions.yaml", "status": "added" }, { "diff": "@@ -1,5 +1,9 @@\n ---\n-\"Force version\":\n+\"Force Version\":\n+\n+ - skip:\n+ version: \" - 5.0.0\"\n+ reason: headers were introduced in 5.0.1\n \n - do:\n warnings:", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/index/37_force_version.yaml", "status": "modified" }, { "diff": "@@ -71,18 +71,34 @@\n - match: { hits.total: 1 }\n - match: { hits.hits.0._index: \"logs-000002\"}\n \n+---\n+\"Rollover no condition matched\":\n+ - skip:\n+ version: \" - 5.0.0\"\n+ reason: bug fixed in 5.0.1\n+\n+ # create index with alias\n+ - do:\n+ indices.create:\n+ index: logs-1\n+ wait_for_active_shards: 1\n+ body:\n+ aliases:\n+ logs_index: {}\n+ logs_search: {}\n+\n # run again and verify results without rolling over\n - do:\n indices.rollover:\n alias: \"logs_search\"\n wait_for_active_shards: 1\n body:\n conditions:\n- max_docs: 100\n+ max_docs: 1\n \n- - match: { old_index: logs-000002 }\n- - match: { new_index: logs-000003 }\n+ - match: { old_index: logs-1 }\n+ - match: { new_index: logs-000002 }\n - match: { rolled_over: false }\n - match: { dry_run: false }\n- - match: { conditions: { \"[max_docs: 100]\": false } }\n+ - match: { conditions: { \"[max_docs: 1]\": false } }\n ", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.rollover/10_basic.yaml", "status": "modified" }, { "diff": "@@ -1,5 +1,11 @@\n ---\n \"Shrink index via API\":\n+ - skip:\n+ version: \" - 5.0.0\"\n+ reason: this doesn't work yet with BWC tests since the master is from the old verion\n+ # TODO we need to fix this for BWC tests to make sure we get a node that all shards can allocate on\n+ # today if we run BWC tests and we select the master as a _shrink node but since primaries are allocated\n+ # on the newer version nodes this fails...\n # creates an index with one document.\n # relocates all it's shards to one node\n # shrinks it into a new index with a single shard", "filename": 
"rest-api-spec/src/main/resources/rest-api-spec/test/indices.shrink/10_basic.yaml", "status": "modified" }, { "diff": "@@ -86,36 +86,3 @@\n id: 1\n version: 1\n version_type: external_gte\n-\n- - do:\n- warnings:\n- - version type FORCE is deprecated and will be removed in the next major version\n- get:\n- index: test_1\n- type: test\n- id: 1\n- version: 2\n- version_type: force\n- - match: { _id: \"1\" }\n-\n- - do:\n- warnings:\n- - version type FORCE is deprecated and will be removed in the next major version\n- get:\n- index: test_1\n- type: test\n- id: 1\n- version: 10\n- version_type: force\n- - match: { _id: \"1\" }\n-\n- - do:\n- warnings:\n- - version type FORCE is deprecated and will be removed in the next major version\n- get:\n- index: test_1\n- type: test\n- id: 1\n- version: 1\n- version_type: force\n- - match: { _id: \"1\" }", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/termvectors/40_versions.yaml", "status": "modified" }, { "diff": "@@ -0,0 +1,54 @@\n+---\n+\"Force Version\":\n+ - skip:\n+ version: \" - 5.0.0\"\n+ reason: headers were introduced in 5.0.1\n+\n+ - do:\n+ index:\n+ index: test_1\n+ type: test\n+ id: 1\n+ body: { foo: bar }\n+ - match: { _version: 1}\n+\n+ - do:\n+ index:\n+ index: test_1\n+ type: test\n+ id: 1\n+ body: { foo: bar }\n+ - match: { _version: 2}\n+\n+ - do:\n+ warnings:\n+ - version type FORCE is deprecated and will be removed in the next major version\n+ get:\n+ index: test_1\n+ type: test\n+ id: 1\n+ version: 2\n+ version_type: force\n+ - match: { _id: \"1\" }\n+\n+ - do:\n+ warnings:\n+ - version type FORCE is deprecated and will be removed in the next major version\n+ get:\n+ index: test_1\n+ type: test\n+ id: 1\n+ version: 10\n+ version_type: force\n+ - match: { _id: \"1\" }\n+\n+ - do:\n+ warnings:\n+ - version type FORCE is deprecated and will be removed in the next major version\n+ get:\n+ index: test_1\n+ type: test\n+ id: 1\n+ version: 1\n+ version_type: force\n+ - match: { _id: \"1\" }", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/termvectors/45_force_versions.yaml", "status": "added" }, { "diff": "@@ -36,16 +36,17 @@ public ClientYamlTestSection parse(ClientYamlTestSuiteParseContext parseContext)\n try {\n parser.nextToken();\n testSection.setSkipSection(parseContext.parseSkipSection());\n- \n+\n while ( parser.currentToken() != XContentParser.Token.END_ARRAY) {\n parseContext.advanceToFieldName();\n testSection.addExecutableSection(parseContext.parseExecutableSection());\n }\n- \n+\n parser.nextToken();\n- assert parser.currentToken() == XContentParser.Token.END_OBJECT;\n+ assert parser.currentToken() == XContentParser.Token.END_OBJECT : \"malformed section [\" + testSection.getName() + \"] expected \"\n+ + XContentParser.Token.END_OBJECT + \" but was \" + parser.currentToken();\n parser.nextToken();\n- \n+\n return testSection;\n } catch (Exception e) {\n throw new ClientYamlTestParseException(\"Error parsing test named [\" + testSection.getName() + \"]\", e);", "filename": "test/framework/src/main/java/org/elasticsearch/test/rest/yaml/parser/ClientYamlTestSectionParser.java", "status": "modified" } ] }
{ "body": "The snapshot restore state tracks information about shards being restored from a snapshot in the cluster state. For example it records if a shard has been successfully restored or if restoring it was not possible due to a corruption of the snapshot. Recording these events is usually based on changes to the shard routing table, i.e., when a shard is started after a successful restore or failed after an unsuccessful one. As of now, there were two communication channels to transmit recovery failure / success to update the routing table and the restore state. This lead to issues where a shard was failed but the restore state was not updated due to connection issues between data and master node. In some rare situations, this lead to an issue where the restore state could not be properly cleaned up anymore by the master, making it impossible to start new restore operations. The following change updates routing table and restore state in the same cluster state update so that both always stay in sync. It also eliminates the extra communication channel for restore operations and uses the standard cluster state listener mechanism to update restore listener upon successful completion of a snapshot restore.\n\nCloses #19774\n", "comments": [ { "body": "thanks for the review @imotov \n", "created_at": "2016-10-12T07:09:07Z" }, { "body": "Is there any hope of this being backported to 1.7 version?", "created_at": "2017-08-31T03:12:27Z" }, { "body": "No, sorry, 1.7 is end-of-life.", "created_at": "2017-08-31T03:13:45Z" } ], "number": 20836, "title": "Keep snapshot restore state and routing table in sync" }
{ "body": "Backport of #20836 to 5.x.\n\nCommit b47ff92 implements the BWC layer for 5.0 / 5.x clusters.\n\nI have marked this PR as stalled as I'm unable to run the BWC tests.\n", "number": 21131, "review_comments": [ { "body": "10000000L seems arbitrary, maybe `Long.MAX_VALUE` would better reflect what you are trying to achieve here? Or maybe we can make `-1` to work as `unlimited` or check for `unlimited` string constant. \n", "created_at": "2016-11-03T23:16:23Z" }, { "body": "Hmm, I wonder if we can come up with a higher level description here. Can you think of any simple scenario when this could happen (besides somebody deleting an index while snapshot was running and creating a new one)?\n", "created_at": "2016-11-03T23:24:36Z" }, { "body": "💯 \n", "created_at": "2016-11-04T17:09:51Z" }, { "body": "The specific scenario I had in mind was when an empty primary is forced (using reroute commands). I will add a comment to the code.\n", "created_at": "2016-11-04T17:12:42Z" } ], "title": "Keep snapshot restore state and routing table in sync (5.x backport)" }
{ "commits": [ { "message": "Keep snapshot restore state and routing table in sync (#20836)\n\nThe snapshot restore state tracks information about shards being restored from a snapshot in the cluster state. For example it records if a shard has been successfully restored or if restoring it was not possible due to a corruption of the snapshot. Recording these events is usually based on changes to the shard routing table, i.e., when a shard is started after a successful restore or failed after an unsuccessful one. As of now, there were two communication channels to transmit recovery failure / success to update the routing table and the restore state. This lead to issues where a shard was failed but the restore state was not updated due to connection issues between data and master node. In some rare situations, this lead to an issue where the restore state could not be properly cleaned up anymore by the master, making it impossible to start new restore operations. The following change updates routing table and restore state in the same cluster state update so that both always stay in sync. It also eliminates the extra communication channel for restore operations and uses standard cluster state listener mechanism to update restore listener upon successful\ncompletion of a snapshot." }, { "message": "Increase number of allowed failures in MockRepository for snapshot restore test\n\nThe test testDataFileCorruptionDuringRestore expects failures to happen when accessing snapshot data. It would sometimes\nfail however as MockRepository (by default) only simulates 100 failures." }, { "message": "Add BWC layer for mixed 5.0 / 5.1+ version clusters" }, { "message": "Resync restore state with routing table upon master failover" } ], "files": [ { "diff": "@@ -22,19 +22,27 @@\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.master.TransportMasterNodeAction;\n+import org.elasticsearch.cluster.ClusterChangedEvent;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.ClusterStateListener;\n+import org.elasticsearch.cluster.RestoreInProgress;\n import org.elasticsearch.cluster.block.ClusterBlockException;\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.service.ClusterService;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.snapshots.RestoreInfo;\n import org.elasticsearch.snapshots.RestoreService;\n+import org.elasticsearch.snapshots.RestoreService.RestoreCompletionResponse;\n import org.elasticsearch.snapshots.Snapshot;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n+import static org.elasticsearch.snapshots.RestoreService.restoreInProgress;\n+\n /**\n * Transport action for restore snapshot operation\n */\n@@ -78,28 +86,44 @@ protected void masterOperation(final RestoreSnapshotRequest request, final Clust\n request.settings(), request.masterNodeTimeout(), request.includeGlobalState(), request.partial(), request.includeAliases(),\n request.indexSettings(), request.ignoreIndexSettings(), \"restore_snapshot[\" + request.snapshot() + \"]\");\n \n- restoreService.restoreSnapshot(restoreRequest, new 
ActionListener<RestoreInfo>() {\n+ restoreService.restoreSnapshot(restoreRequest, new ActionListener<RestoreCompletionResponse>() {\n @Override\n- public void onResponse(RestoreInfo restoreInfo) {\n- if (restoreInfo == null && request.waitForCompletion()) {\n- restoreService.addListener(new ActionListener<RestoreService.RestoreCompletionResponse>() {\n+ public void onResponse(RestoreCompletionResponse restoreCompletionResponse) {\n+ if (restoreCompletionResponse.getRestoreInfo() == null && request.waitForCompletion()) {\n+ final Snapshot snapshot = restoreCompletionResponse.getSnapshot();\n+\n+ ClusterStateListener clusterStateListener = new ClusterStateListener() {\n @Override\n- public void onResponse(RestoreService.RestoreCompletionResponse restoreCompletionResponse) {\n- final Snapshot snapshot = restoreCompletionResponse.getSnapshot();\n- if (snapshot.getRepository().equals(request.repository()) &&\n- snapshot.getSnapshotId().getName().equals(request.snapshot())) {\n- listener.onResponse(new RestoreSnapshotResponse(restoreCompletionResponse.getRestoreInfo()));\n- restoreService.removeListener(this);\n+ public void clusterChanged(ClusterChangedEvent changedEvent) {\n+ final RestoreInProgress.Entry prevEntry = restoreInProgress(changedEvent.previousState(), snapshot);\n+ final RestoreInProgress.Entry newEntry = restoreInProgress(changedEvent.state(), snapshot);\n+ if (prevEntry == null) {\n+ // When there is a master failure after a restore has been started, this listener might not be registered\n+ // on the current master and as such it might miss some intermediary cluster states due to batching.\n+ // Clean up listener in that case and acknowledge completion of restore operation to client.\n+ clusterService.remove(this);\n+ listener.onResponse(new RestoreSnapshotResponse(null));\n+ } else if (newEntry == null) {\n+ clusterService.remove(this);\n+ ImmutableOpenMap<ShardId, RestoreInProgress.ShardRestoreStatus> shards = prevEntry.shards();\n+ assert prevEntry.state().completed() : \"expected completed snapshot state but was \" + prevEntry.state();\n+ assert RestoreService.completed(shards) : \"expected all restore entries to be completed\";\n+ RestoreInfo ri = new RestoreInfo(prevEntry.snapshot().getSnapshotId().getName(),\n+ prevEntry.indices(),\n+ shards.size(),\n+ shards.size() - RestoreService.failedShards(shards));\n+ RestoreSnapshotResponse response = new RestoreSnapshotResponse(ri);\n+ logger.debug(\"restore of [{}] completed\", snapshot);\n+ listener.onResponse(response);\n+ } else {\n+ // restore not completed yet, wait for next cluster state update\n }\n }\n+ };\n \n- @Override\n- public void onFailure(Exception e) {\n- listener.onFailure(e);\n- }\n- });\n+ clusterService.addLast(clusterStateListener);\n } else {\n- listener.onResponse(new RestoreSnapshotResponse(restoreInfo));\n+ listener.onResponse(new RestoreSnapshotResponse(restoreCompletionResponse.getRestoreInfo()));\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java", "status": "modified" }, { "diff": "@@ -23,20 +23,23 @@\n import org.elasticsearch.action.admin.indices.delete.DeleteIndexClusterStateUpdateRequest;\n import org.elasticsearch.cluster.AckedClusterStateUpdateTask;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.RestoreInProgress;\n import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;\n import org.elasticsearch.cluster.block.ClusterBlocks;\n import 
org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Priority;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.index.Index;\n+import org.elasticsearch.snapshots.RestoreService;\n import org.elasticsearch.snapshots.SnapshotsService;\n \n import java.util.Arrays;\n-import java.util.Collection;\n import java.util.Set;\n \n import static java.util.stream.Collectors.toSet;\n@@ -73,15 +76,15 @@ protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {\n \n @Override\n public ClusterState execute(final ClusterState currentState) {\n- return deleteIndices(currentState, Arrays.asList(request.indices()));\n+ return deleteIndices(currentState, Sets.newHashSet(request.indices()));\n }\n });\n }\n \n /**\n * Delete some indices from the cluster state.\n */\n- public ClusterState deleteIndices(ClusterState currentState, Collection<Index> indices) {\n+ public ClusterState deleteIndices(ClusterState currentState, Set<Index> indices) {\n final MetaData meta = currentState.metaData();\n final Set<IndexMetaData> metaDatas = indices.stream().map(i -> meta.getIndexSafe(i)).collect(toSet());\n // Check if index deletion conflicts with any running snapshots\n@@ -107,11 +110,25 @@ public ClusterState deleteIndices(ClusterState currentState, Collection<Index> i\n \n MetaData newMetaData = metaDataBuilder.build();\n ClusterBlocks blocks = clusterBlocksBuilder.build();\n+\n+ // update snapshot restore entries\n+ ImmutableOpenMap<String, ClusterState.Custom> customs = currentState.getCustoms();\n+ final RestoreInProgress restoreInProgress = currentState.custom(RestoreInProgress.TYPE);\n+ if (restoreInProgress != null) {\n+ RestoreInProgress updatedRestoreInProgress = RestoreService.updateRestoreStateWithDeletedIndices(restoreInProgress, indices);\n+ if (updatedRestoreInProgress != restoreInProgress) {\n+ ImmutableOpenMap.Builder<String, ClusterState.Custom> builder = ImmutableOpenMap.builder(customs);\n+ builder.put(RestoreInProgress.TYPE, updatedRestoreInProgress);\n+ customs = builder.build();\n+ }\n+ }\n+\n return allocationService.reroute(\n ClusterState.builder(currentState)\n .routingTable(routingTableBuilder.build())\n .metaData(newMetaData)\n .blocks(blocks)\n+ .customs(customs)\n .build(),\n \"deleted indices [\" + indices + \"]\");\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataDeleteIndexService.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.cluster.ClusterInfoService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.RestoreInProgress;\n import org.elasticsearch.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.cluster.health.ClusterStateHealth;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -34,6 +35,7 @@\n import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocator;\n import org.elasticsearch.cluster.routing.allocation.command.AllocationCommands;\n import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n import 
org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -94,17 +96,24 @@ public ClusterState applyStartedShards(ClusterState clusterState, List<ShardRout\n }\n \n protected ClusterState buildResultAndLogHealthChange(ClusterState oldState, RoutingAllocation allocation, String reason) {\n- return buildResultAndLogHealthChange(oldState, allocation, reason, new RoutingExplanations());\n- }\n-\n- protected ClusterState buildResultAndLogHealthChange(ClusterState oldState, RoutingAllocation allocation, String reason,\n- RoutingExplanations explanations) {\n RoutingTable oldRoutingTable = oldState.routingTable();\n RoutingNodes newRoutingNodes = allocation.routingNodes();\n final RoutingTable newRoutingTable = new RoutingTable.Builder().updateNodes(oldRoutingTable.version(), newRoutingNodes).build();\n MetaData newMetaData = allocation.updateMetaDataWithRoutingChanges(newRoutingTable);\n assert newRoutingTable.validate(newMetaData); // validates the routing table is coherent with the cluster state metadata\n- final ClusterState newState = ClusterState.builder(oldState).routingTable(newRoutingTable).metaData(newMetaData).build();\n+ final ClusterState.Builder newStateBuilder = ClusterState.builder(oldState)\n+ .routingTable(newRoutingTable)\n+ .metaData(newMetaData);\n+ final RestoreInProgress restoreInProgress = allocation.custom(RestoreInProgress.TYPE);\n+ if (restoreInProgress != null) {\n+ RestoreInProgress updatedRestoreInProgress = allocation.updateRestoreInfoWithRoutingChanges(restoreInProgress);\n+ if (updatedRestoreInProgress != restoreInProgress) {\n+ ImmutableOpenMap.Builder<String, ClusterState.Custom> customsBuilder = ImmutableOpenMap.builder(allocation.getCustoms());\n+ customsBuilder.put(RestoreInProgress.TYPE, updatedRestoreInProgress);\n+ newStateBuilder.customs(customsBuilder.build());\n+ }\n+ }\n+ final ClusterState newState = newStateBuilder.build();\n logClusterHealthStateChange(\n new ClusterStateHealth(oldState),\n new ClusterStateHealth(newState),", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.cluster.ClusterInfo;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.RestoreInProgress;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.RoutingChangesObserver;\n@@ -30,6 +31,8 @@\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.snapshots.RestoreService;\n+import org.elasticsearch.snapshots.RestoreService.RestoreInProgressUpdater;\n \n import java.util.HashMap;\n import java.util.HashSet;\n@@ -76,8 +79,9 @@ public class RoutingAllocation {\n \n private final IndexMetaDataUpdater indexMetaDataUpdater = new IndexMetaDataUpdater();\n private final RoutingNodesChangedObserver nodesChangedObserver = new RoutingNodesChangedObserver();\n+ private final RestoreInProgressUpdater restoreInProgressUpdater = new RestoreInProgressUpdater();\n private final RoutingChangesObserver routingChangesObserver = new RoutingChangesObserver.DelegatingRoutingChangesObserver(\n- nodesChangedObserver, indexMetaDataUpdater\n+ nodesChangedObserver, indexMetaDataUpdater, 
restoreInProgressUpdater\n );\n \n \n@@ -154,6 +158,10 @@ public <T extends ClusterState.Custom> T custom(String key) {\n return (T)customs.get(key);\n }\n \n+ public ImmutableOpenMap<String, ClusterState.Custom> getCustoms() {\n+ return customs;\n+ }\n+\n /**\n * Get explanations of current routing\n * @return explanation of routing\n@@ -234,6 +242,13 @@ public MetaData updateMetaDataWithRoutingChanges(RoutingTable newRoutingTable) {\n return indexMetaDataUpdater.applyChanges(metaData, newRoutingTable);\n }\n \n+ /**\n+ * Returns updated {@link RestoreInProgress} based on the changes that were made to the routing nodes\n+ */\n+ public RestoreInProgress updateRestoreInfoWithRoutingChanges(RestoreInProgress restoreInProgress) {\n+ return restoreInProgressUpdater.applyChanges(restoreInProgress);\n+ }\n+\n /**\n * Returns true iff changes were made to the routing nodes\n */", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingAllocation.java", "status": "modified" }, { "diff": "@@ -573,8 +573,7 @@ private void updateShard(DiscoveryNodes nodes, ShardRouting shardRouting, Shard\n \n /**\n * Finds the routing source node for peer recovery, return null if its not found. Note, this method expects the shard\n- * routing to *require* peer recovery, use {@link ShardRouting#recoverySource()} to\n- * check if its needed or not.\n+ * routing to *require* peer recovery, use {@link ShardRouting#recoverySource()} to check if its needed or not.\n */\n private static DiscoveryNode findSourceNodeForPeerRecovery(Logger logger, RoutingTable routingTable, DiscoveryNodes nodes,\n ShardRouting shardRouting) {", "filename": "core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import com.carrotsearch.hppc.IntSet;\n import com.carrotsearch.hppc.cursors.ObjectCursor;\n import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n+import org.apache.logging.log4j.Logger;\n import org.apache.logging.log4j.message.ParameterizedMessage;\n import org.apache.logging.log4j.util.Supplier;\n import org.elasticsearch.Version;\n@@ -30,6 +31,9 @@\n import org.elasticsearch.cluster.ClusterChangedEvent;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.ClusterStateListener;\n+import org.elasticsearch.cluster.ClusterStateTaskConfig;\n+import org.elasticsearch.cluster.ClusterStateTaskExecutor;\n+import org.elasticsearch.cluster.ClusterStateTaskListener;\n import org.elasticsearch.cluster.ClusterStateUpdateTask;\n import org.elasticsearch.cluster.RestoreInProgress;\n import org.elasticsearch.cluster.RestoreInProgress.ShardRestoreStatus;\n@@ -41,26 +45,29 @@\n import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService;\n import org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService;\n import org.elasticsearch.cluster.metadata.RepositoriesMetaData;\n-import org.elasticsearch.cluster.routing.IndexRoutingTable;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n+import org.elasticsearch.cluster.routing.RecoverySource;\n import org.elasticsearch.cluster.routing.RecoverySource.SnapshotRecoverySource;\n+import org.elasticsearch.cluster.routing.RoutingChangesObserver;\n import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.cluster.routing.UnassignedInfo;\n import 
org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.cluster.service.ClusterService;\n-import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n-import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.ClusterSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n+import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.ShardId;\n@@ -83,12 +90,10 @@\n import java.util.Iterator;\n import java.util.List;\n import java.util.Map;\n-import java.util.Map.Entry;\n import java.util.Objects;\n import java.util.Optional;\n import java.util.Set;\n-import java.util.concurrent.BlockingQueue;\n-import java.util.concurrent.CopyOnWriteArrayList;\n+import java.util.stream.Collectors;\n \n import static java.util.Collections.unmodifiableSet;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS;\n@@ -117,8 +122,8 @@\n * method, which detects that shard should be restored from snapshot rather than recovered from gateway by looking\n * at the {@link ShardRouting#recoverySource()} property.\n * <p>\n- * At the end of the successful restore process {@code IndexShardSnapshotAndRestoreService} calls {@link #indexShardRestoreCompleted(Snapshot, ShardId)},\n- * which updates {@link RestoreInProgress} in cluster state or removes it when all shards are completed. In case of\n+ * At the end of the successful restore process {@code RestoreService} calls {@link #cleanupRestoreState(ClusterChangedEvent)},\n+ * which removes {@link RestoreInProgress} when all shards are completed. 
In case of\n * restore failure a normal recovery fail-over process kicks in.\n */\n public class RestoreService extends AbstractComponent implements ClusterStateListener {\n@@ -156,11 +161,10 @@ public class RestoreService extends AbstractComponent implements ClusterStateLis\n \n private final MetaDataIndexUpgradeService metaDataIndexUpgradeService;\n \n- private final CopyOnWriteArrayList<ActionListener<RestoreCompletionResponse>> listeners = new CopyOnWriteArrayList<>();\n-\n- private final BlockingQueue<UpdateIndexShardRestoreStatusRequest> updatedSnapshotStateQueue = ConcurrentCollections.newBlockingQueue();\n private final ClusterSettings clusterSettings;\n \n+ private final CleanRestoreStateTaskExecutor cleanRestoreStateTaskExecutor;\n+\n @Inject\n public RestoreService(Settings settings, ClusterService clusterService, RepositoriesService repositoriesService, TransportService transportService,\n AllocationService allocationService, MetaDataCreateIndexService createIndexService,\n@@ -175,6 +179,7 @@ public RestoreService(Settings settings, ClusterService clusterService, Reposito\n transportService.registerRequestHandler(UPDATE_RESTORE_ACTION_NAME, UpdateIndexShardRestoreStatusRequest::new, ThreadPool.Names.SAME, new UpdateRestoreStateRequestHandler());\n clusterService.add(this);\n this.clusterSettings = clusterSettings;\n+ this.cleanRestoreStateTaskExecutor = new CleanRestoreStateTaskExecutor(logger);\n }\n \n /**\n@@ -183,7 +188,7 @@ public RestoreService(Settings settings, ClusterService clusterService, Reposito\n * @param request restore request\n * @param listener restore listener\n */\n- public void restoreSnapshot(final RestoreRequest request, final ActionListener<RestoreInfo> listener) {\n+ public void restoreSnapshot(final RestoreRequest request, final ActionListener<RestoreCompletionResponse> listener) {\n try {\n // Read snapshot info and metadata from the repository\n Repository repository = repositoriesService.repository(request.repositoryName);\n@@ -314,7 +319,7 @@ public ClusterState execute(ClusterState currentState) {\n }\n \n shards = shardsBuilder.build();\n- RestoreInProgress.Entry restoreEntry = new RestoreInProgress.Entry(snapshot, RestoreInProgress.State.INIT, Collections.unmodifiableList(new ArrayList<>(renamedIndices.keySet())), shards);\n+ RestoreInProgress.Entry restoreEntry = new RestoreInProgress.Entry(snapshot, overallState(RestoreInProgress.State.INIT, shards), Collections.unmodifiableList(new ArrayList<>(renamedIndices.keySet())), shards);\n builder.putCustom(RestoreInProgress.TYPE, new RestoreInProgress(restoreEntry));\n } else {\n shards = ImmutableOpenMap.of();\n@@ -469,7 +474,7 @@ public TimeValue timeout() {\n \n @Override\n public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {\n- listener.onResponse(restoreInfo);\n+ listener.onResponse(new RestoreCompletionResponse(snapshot, restoreInfo));\n }\n });\n \n@@ -480,19 +485,33 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n }\n }\n \n- /**\n- * This method is used by {@link IndexShard} to notify\n- * {@code RestoreService} about shard restore completion.\n- *\n- * @param snapshot snapshot\n- * @param shardId shard id\n- */\n- public void indexShardRestoreCompleted(Snapshot snapshot, ShardId shardId) {\n- logger.trace(\"[{}] successfully restored shard [{}]\", snapshot, shardId);\n- UpdateIndexShardRestoreStatusRequest request = new UpdateIndexShardRestoreStatusRequest(snapshot, shardId,\n- new 
ShardRestoreStatus(clusterService.state().nodes().getLocalNodeId(), RestoreInProgress.State.SUCCESS));\n- transportService.sendRequest(clusterService.state().nodes().getMasterNode(),\n- UPDATE_RESTORE_ACTION_NAME, request, EmptyTransportResponseHandler.INSTANCE_SAME);\n+ public static RestoreInProgress updateRestoreStateWithDeletedIndices(RestoreInProgress oldRestore, Set<Index> deletedIndices) {\n+ boolean changesMade = false;\n+ final List<RestoreInProgress.Entry> entries = new ArrayList<>();\n+ for (RestoreInProgress.Entry entry : oldRestore.entries()) {\n+ ImmutableOpenMap.Builder<ShardId, ShardRestoreStatus> shardsBuilder = null;\n+ for (ObjectObjectCursor<ShardId, ShardRestoreStatus> cursor : entry.shards()) {\n+ ShardId shardId = cursor.key;\n+ if (deletedIndices.contains(shardId.getIndex())) {\n+ changesMade = true;\n+ if (shardsBuilder == null) {\n+ shardsBuilder = ImmutableOpenMap.builder(entry.shards());\n+ }\n+ shardsBuilder.put(shardId, new ShardRestoreStatus(null, RestoreInProgress.State.FAILURE, \"index was deleted\"));\n+ }\n+ }\n+ if (shardsBuilder != null) {\n+ ImmutableOpenMap<ShardId, ShardRestoreStatus> shards = shardsBuilder.build();\n+ entries.add(new RestoreInProgress.Entry(entry.snapshot(), overallState(RestoreInProgress.State.STARTED, shards), entry.indices(), shards));\n+ } else {\n+ entries.add(entry);\n+ }\n+ }\n+ if (changesMade) {\n+ return new RestoreInProgress(entries.toArray(new RestoreInProgress.Entry[entries.size()]));\n+ } else {\n+ return oldRestore;\n+ }\n }\n \n public static final class RestoreCompletionResponse {\n@@ -513,168 +532,269 @@ public RestoreInfo getRestoreInfo() {\n }\n }\n \n- /**\n- * Updates shard restore record in the cluster state.\n- *\n- * @param request update shard status request\n- */\n- private void updateRestoreStateOnMaster(final UpdateIndexShardRestoreStatusRequest request) {\n- logger.trace(\"received updated snapshot restore state [{}]\", request);\n- updatedSnapshotStateQueue.add(request);\n-\n- clusterService.submitStateUpdateTask(\"update snapshot state\", new ClusterStateUpdateTask() {\n- private final List<UpdateIndexShardRestoreStatusRequest> drainedRequests = new ArrayList<>();\n- private Map<Snapshot, Tuple<RestoreInfo, ImmutableOpenMap<ShardId, ShardRestoreStatus>>> batchedRestoreInfo = null;\n+ public static class RestoreInProgressUpdater extends RoutingChangesObserver.AbstractRoutingChangesObserver {\n+ private final Map<Snapshot, Updates> shardChanges = new HashMap<>();\n \n- @Override\n- public ClusterState execute(ClusterState currentState) {\n+ @Override\n+ public void shardStarted(ShardRouting initializingShard, ShardRouting startedShard) {\n+ // mark snapshot as completed\n+ if (initializingShard.primary()) {\n+ RecoverySource recoverySource = initializingShard.recoverySource();\n+ if (recoverySource.getType() == RecoverySource.Type.SNAPSHOT) {\n+ Snapshot snapshot = ((SnapshotRecoverySource) recoverySource).snapshot();\n+ changes(snapshot).startedShards.put(initializingShard.shardId(),\n+ new ShardRestoreStatus(initializingShard.currentNodeId(), RestoreInProgress.State.SUCCESS));\n+ }\n+ }\n+ }\n \n- if (request.processed) {\n- return currentState;\n+ @Override\n+ public void shardFailed(ShardRouting failedShard, UnassignedInfo unassignedInfo) {\n+ if (failedShard.primary() && failedShard.initializing()) {\n+ RecoverySource recoverySource = failedShard.recoverySource();\n+ if (recoverySource.getType() == RecoverySource.Type.SNAPSHOT) {\n+ Snapshot snapshot = ((SnapshotRecoverySource) 
recoverySource).snapshot();\n+ // mark restore entry for this shard as failed when it's due to a file corruption. There is no need wait on retries\n+ // to restore this shard on another node if the snapshot files are corrupt. In case where a node just left or crashed,\n+ // however, we only want to acknowledge the restore operation once it has been successfully restored on another node.\n+ if (unassignedInfo.getFailure() != null && Lucene.isCorruptionException(unassignedInfo.getFailure().getCause())) {\n+ changes(snapshot).failedShards.put(failedShard.shardId(), new ShardRestoreStatus(failedShard.currentNodeId(),\n+ RestoreInProgress.State.FAILURE, unassignedInfo.getFailure().getCause().getMessage()));\n+ }\n }\n+ }\n+ }\n \n- updatedSnapshotStateQueue.drainTo(drainedRequests);\n+ @Override\n+ public void shardInitialized(ShardRouting unassignedShard, ShardRouting initializedShard) {\n+ // if we force an empty primary, we should also fail the restore entry\n+ if (unassignedShard.recoverySource().getType() == RecoverySource.Type.SNAPSHOT &&\n+ initializedShard.recoverySource().getType() != RecoverySource.Type.SNAPSHOT) {\n+ Snapshot snapshot = ((SnapshotRecoverySource) unassignedShard.recoverySource()).snapshot();\n+ changes(snapshot).failedShards.put(unassignedShard.shardId(), new ShardRestoreStatus(null,\n+ RestoreInProgress.State.FAILURE, \"recovery source type changed from snapshot to \" + initializedShard.recoverySource()));\n+ }\n+ }\n \n- final int batchSize = drainedRequests.size();\n+ /**\n+ * Helper method that creates update entry for the given shard id if such an entry does not exist yet.\n+ */\n+ private Updates changes(Snapshot snapshot) {\n+ return shardChanges.computeIfAbsent(snapshot, k -> new Updates());\n+ }\n \n- // nothing to process (a previous event has processed it already)\n- if (batchSize == 0) {\n- return currentState;\n- }\n+ private static class Updates {\n+ private Map<ShardId, ShardRestoreStatus> failedShards = new HashMap<>();\n+ private Map<ShardId, ShardRestoreStatus> startedShards = new HashMap<>();\n+ }\n \n- final RestoreInProgress restore = currentState.custom(RestoreInProgress.TYPE);\n- if (restore != null) {\n- int changedCount = 0;\n- final List<RestoreInProgress.Entry> entries = new ArrayList<>();\n- for (RestoreInProgress.Entry entry : restore.entries()) {\n- ImmutableOpenMap.Builder<ShardId, ShardRestoreStatus> shardsBuilder = null;\n-\n- for (int i = 0; i < batchSize; i++) {\n- final UpdateIndexShardRestoreStatusRequest updateSnapshotState = drainedRequests.get(i);\n- updateSnapshotState.processed = true;\n-\n- if (entry.snapshot().equals(updateSnapshotState.snapshot())) {\n- logger.trace(\"[{}] Updating shard [{}] with status [{}]\", updateSnapshotState.snapshot(), updateSnapshotState.shardId(), updateSnapshotState.status().state());\n- if (shardsBuilder == null) {\n- shardsBuilder = ImmutableOpenMap.builder(entry.shards());\n- }\n- shardsBuilder.put(updateSnapshotState.shardId(), updateSnapshotState.status());\n- changedCount++;\n- }\n+ public RestoreInProgress applyChanges(RestoreInProgress oldRestore) {\n+ if (shardChanges.isEmpty() == false) {\n+ final List<RestoreInProgress.Entry> entries = new ArrayList<>();\n+ for (RestoreInProgress.Entry entry : oldRestore.entries()) {\n+ Snapshot snapshot = entry.snapshot();\n+ Updates updates = shardChanges.get(snapshot);\n+ assert Sets.haveEmptyIntersection(updates.startedShards.keySet(), updates.failedShards.keySet());\n+ if (updates.startedShards.isEmpty() == false || updates.failedShards.isEmpty() 
== false) {\n+ ImmutableOpenMap.Builder<ShardId, ShardRestoreStatus> shardsBuilder = ImmutableOpenMap.builder(entry.shards());\n+ for (Map.Entry<ShardId, ShardRestoreStatus> startedShardEntry : updates.startedShards.entrySet()) {\n+ shardsBuilder.put(startedShardEntry.getKey(), startedShardEntry.getValue());\n }\n-\n- if (shardsBuilder != null) {\n- ImmutableOpenMap<ShardId, ShardRestoreStatus> shards = shardsBuilder.build();\n- if (!completed(shards)) {\n- entries.add(new RestoreInProgress.Entry(entry.snapshot(), RestoreInProgress.State.STARTED, entry.indices(), shards));\n- } else {\n- logger.info(\"restore [{}] is done\", entry.snapshot());\n- if (batchedRestoreInfo == null) {\n- batchedRestoreInfo = new HashMap<>();\n- }\n- assert !batchedRestoreInfo.containsKey(entry.snapshot());\n- batchedRestoreInfo.put(entry.snapshot(),\n- new Tuple<>(\n- new RestoreInfo(entry.snapshot().getSnapshotId().getName(),\n- entry.indices(),\n- shards.size(),\n- shards.size() - failedShards(shards)),\n- shards));\n- }\n- } else {\n- entries.add(entry);\n+ for (Map.Entry<ShardId, ShardRestoreStatus> failedShardEntry : updates.failedShards.entrySet()) {\n+ shardsBuilder.put(failedShardEntry.getKey(), failedShardEntry.getValue());\n }\n+ ImmutableOpenMap<ShardId, ShardRestoreStatus> shards = shardsBuilder.build();\n+ RestoreInProgress.State newState = overallState(RestoreInProgress.State.STARTED, shards);\n+ entries.add(new RestoreInProgress.Entry(entry.snapshot(), newState, entry.indices(), shards));\n+ } else {\n+ entries.add(entry);\n }\n+ }\n+ return new RestoreInProgress(entries.toArray(new RestoreInProgress.Entry[entries.size()]));\n+ } else {\n+ return oldRestore;\n+ }\n+ }\n \n- if (changedCount > 0) {\n- logger.trace(\"changed cluster state triggered by {} snapshot restore state updates\", changedCount);\n+ }\n \n- final RestoreInProgress updatedRestore = new RestoreInProgress(entries.toArray(new RestoreInProgress.Entry[entries.size()]));\n- return ClusterState.builder(currentState).putCustom(RestoreInProgress.TYPE, updatedRestore).build();\n- }\n+ public static RestoreInProgress.Entry restoreInProgress(ClusterState state, Snapshot snapshot) {\n+ final RestoreInProgress restoreInProgress = state.custom(RestoreInProgress.TYPE);\n+ if (restoreInProgress != null) {\n+ for (RestoreInProgress.Entry e : restoreInProgress.entries()) {\n+ if (e.snapshot().equals(snapshot)) {\n+ return e;\n }\n- return currentState;\n }\n+ }\n+ return null;\n+ }\n \n- @Override\n- public void onFailure(String source, @Nullable Exception e) {\n- for (UpdateIndexShardRestoreStatusRequest request : drainedRequests) {\n- logger.warn((Supplier<?>) () -> new ParameterizedMessage(\"[{}][{}] failed to update snapshot status to [{}]\", request.snapshot(), request.shardId(), request.status()), e);\n- }\n+ static class CleanRestoreStateTaskExecutor implements ClusterStateTaskExecutor<CleanRestoreStateTaskExecutor.Task>, ClusterStateTaskListener {\n+\n+ static class Task {\n+ final Snapshot snapshot;\n+\n+ Task(Snapshot snapshot) {\n+ this.snapshot = snapshot;\n }\n \n @Override\n- public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {\n- if (batchedRestoreInfo != null) {\n- for (final Entry<Snapshot, Tuple<RestoreInfo, ImmutableOpenMap<ShardId, ShardRestoreStatus>>> entry : batchedRestoreInfo.entrySet()) {\n- final Snapshot snapshot = entry.getKey();\n- final RestoreInfo restoreInfo = entry.getValue().v1();\n- final ImmutableOpenMap<ShardId, ShardRestoreStatus> shards = entry.getValue().v2();\n- 
RoutingTable routingTable = newState.getRoutingTable();\n- final List<ShardId> waitForStarted = new ArrayList<>();\n- for (ObjectObjectCursor<ShardId, ShardRestoreStatus> shard : shards) {\n- if (shard.value.state() == RestoreInProgress.State.SUCCESS ) {\n- ShardId shardId = shard.key;\n- ShardRouting shardRouting = findPrimaryShard(routingTable, shardId);\n- if (shardRouting != null && !shardRouting.active()) {\n- logger.trace(\"[{}][{}] waiting for the shard to start\", snapshot, shardId);\n- waitForStarted.add(shardId);\n- }\n- }\n- }\n- if (waitForStarted.isEmpty()) {\n- notifyListeners(snapshot, restoreInfo);\n- } else {\n- clusterService.addLast(new ClusterStateListener() {\n- @Override\n- public void clusterChanged(ClusterChangedEvent event) {\n- if (event.routingTableChanged()) {\n- RoutingTable routingTable = event.state().getRoutingTable();\n- for (Iterator<ShardId> iterator = waitForStarted.iterator(); iterator.hasNext();) {\n- ShardId shardId = iterator.next();\n- ShardRouting shardRouting = findPrimaryShard(routingTable, shardId);\n- // Shard disappeared (index deleted) or became active\n- if (shardRouting == null || shardRouting.active()) {\n- iterator.remove();\n- logger.trace(\"[{}][{}] shard disappeared or started - removing\", snapshot, shardId);\n- }\n- }\n- }\n- if (waitForStarted.isEmpty()) {\n- notifyListeners(snapshot, restoreInfo);\n- clusterService.remove(this);\n- }\n- }\n- });\n- }\n+ public String toString() {\n+ return \"clean restore state for restoring snapshot \" + snapshot;\n+ }\n+ }\n+\n+ private final Logger logger;\n+\n+ public CleanRestoreStateTaskExecutor(Logger logger) {\n+ this.logger = logger;\n+ }\n+\n+ @Override\n+ public BatchResult<Task> execute(final ClusterState currentState, final List<Task> tasks) throws Exception {\n+ final BatchResult.Builder<Task> resultBuilder = BatchResult.<Task>builder().successes(tasks);\n+ Set<Snapshot> completedSnapshots = tasks.stream().map(e -> e.snapshot).collect(Collectors.toSet());\n+ final List<RestoreInProgress.Entry> entries = new ArrayList<>();\n+ final RestoreInProgress restoreInProgress = currentState.custom(RestoreInProgress.TYPE);\n+ boolean changed = false;\n+ if (restoreInProgress != null) {\n+ for (RestoreInProgress.Entry entry : restoreInProgress.entries()) {\n+ if (completedSnapshots.contains(entry.snapshot()) == false) {\n+ entries.add(entry);\n+ } else {\n+ changed = true;\n }\n }\n }\n+ if (changed == false) {\n+ return resultBuilder.build(currentState);\n+ }\n+ RestoreInProgress updatedRestoreInProgress = new RestoreInProgress(entries.toArray(new RestoreInProgress.Entry[entries.size()]));\n+ ImmutableOpenMap.Builder<String, ClusterState.Custom> builder = ImmutableOpenMap.builder(currentState.getCustoms());\n+ builder.put(RestoreInProgress.TYPE, updatedRestoreInProgress);\n+ ImmutableOpenMap<String, ClusterState.Custom> customs = builder.build();\n+ return resultBuilder.build(ClusterState.builder(currentState).customs(customs).build());\n+ }\n \n- private ShardRouting findPrimaryShard(RoutingTable routingTable, ShardId shardId) {\n- IndexRoutingTable indexRoutingTable = routingTable.index(shardId.getIndex());\n- if (indexRoutingTable != null) {\n- IndexShardRoutingTable indexShardRoutingTable = indexRoutingTable.shard(shardId.id());\n- if (indexShardRoutingTable != null) {\n- return indexShardRoutingTable.primaryShard();\n- }\n+ @Override\n+ public void onFailure(final String source, final Exception e) {\n+ logger.error((Supplier<?>) () -> new ParameterizedMessage(\"unexpected failure during 
[{}]\", source), e);\n+ }\n+\n+ @Override\n+ public void onNoLongerMaster(String source) {\n+ logger.debug(\"no longer master while processing restore state update [{}]\", source);\n+ }\n+\n+ }\n+\n+ private void cleanupRestoreState(ClusterChangedEvent event) {\n+ ClusterState state = event.state();\n+\n+ RestoreInProgress restoreInProgress = state.custom(RestoreInProgress.TYPE);\n+ if (restoreInProgress != null) {\n+ for (RestoreInProgress.Entry entry : restoreInProgress.entries()) {\n+ if (entry.state().completed()) {\n+ assert completed(entry.shards()) : \"state says completed but restore entries are not\";\n+ clusterService.submitStateUpdateTask(\n+ \"clean up snapshot restore state\",\n+ new CleanRestoreStateTaskExecutor.Task(entry.snapshot()),\n+ ClusterStateTaskConfig.build(Priority.URGENT),\n+ cleanRestoreStateTaskExecutor,\n+ cleanRestoreStateTaskExecutor);\n }\n- return null;\n }\n \n- private void notifyListeners(Snapshot snapshot, RestoreInfo restoreInfo) {\n- for (ActionListener<RestoreCompletionResponse> listener : listeners) {\n- try {\n- listener.onResponse(new RestoreCompletionResponse(snapshot, restoreInfo));\n- } catch (Exception e) {\n- logger.warn((Supplier<?>) () -> new ParameterizedMessage(\"failed to update snapshot status for [{}]\", listener), e);\n+ if (event.localNodeMaster() && !event.previousState().nodes().isLocalNodeElectedMaster()) {\n+ // old master (before 5.1.0) might have failed to update RestoreInProgress after updating the routing table\n+ // try to reconcile routing table and RestoreInProgress here.\n+ clusterService.submitStateUpdateTask(\"update restore state after master switch\", new ClusterStateUpdateTask() {\n+ @Override\n+ public ClusterState execute(ClusterState currentState) throws Exception {\n+ return resyncRestoreInProgressWithRoutingTable(currentState);\n+ }\n+\n+ @Override\n+ public void onFailure(String source, Exception e) {\n+ logger.warn(\"failed to sync restore state after master switch\", e);\n+ }\n+ });\n+ }\n+ }\n+ }\n+\n+ private ClusterState resyncRestoreInProgressWithRoutingTable(ClusterState currentState) {\n+ RoutingTable routingTable = currentState.routingTable();\n+ RestoreInProgress restoreInProgress = currentState.custom(RestoreInProgress.TYPE);\n+ if (restoreInProgress != null) {\n+ final List<RestoreInProgress.Entry> entries = new ArrayList<>();\n+ for (RestoreInProgress.Entry entry : restoreInProgress.entries()) {\n+ Snapshot snapshot = entry.snapshot();\n+ ImmutableOpenMap.Builder<ShardId, ShardRestoreStatus> shardsBuilder = ImmutableOpenMap.builder(entry.shards());\n+ for (ObjectObjectCursor<ShardId, ShardRestoreStatus> restoreEntry : entry.shards()) {\n+ ShardId shardId = restoreEntry.key;\n+ IndexShardRoutingTable indexShardRoutingTable = routingTable.shardRoutingTableOrNull(shardId);\n+ if (indexShardRoutingTable == null) {\n+ shardsBuilder.put(shardId, new ShardRestoreStatus(null, RestoreInProgress.State.FAILURE, \"index was deleted\"));\n+ } else {\n+ if (restoreEntry.value.state().completed() == false) {\n+ // must be INIT (restore state for ShardRestoreStatus is never STARTED)\n+ assert restoreEntry.value.state() == RestoreInProgress.State.INIT;\n+ ShardRouting primaryShard = indexShardRoutingTable.primaryShard();\n+ if (primaryShard.active()) {\n+ // assume shard was started after successful restore from snapshot\n+ shardsBuilder.put(shardId,\n+ new ShardRestoreStatus(primaryShard.currentNodeId(), RestoreInProgress.State.SUCCESS));\n+ } else {\n+ assert primaryShard.unassigned() || 
primaryShard.initializing();\n+ if (primaryShard.recoverySource().getType() == RecoverySource.Type.SNAPSHOT &&\n+ ((SnapshotRecoverySource) primaryShard.recoverySource()).snapshot().equals(snapshot)) {\n+ // if the primary is unassigned we assume that the restore wasn't started yet and try the restore again\n+ // once the primary is assigned to a node.\n+ // if the primary is initializing we assume that the restore is in progress.\n+ } else {\n+ // for example if an empty primary was forced (see AllocateEmptyPrimaryAllocationCommand)\n+ shardsBuilder.put(shardId, new ShardRestoreStatus(null, RestoreInProgress.State.FAILURE,\n+ \"recovery source type changed from snapshot to \" + primaryShard.recoverySource()));\n+ }\n+ }\n+ }\n }\n }\n+\n+ ImmutableOpenMap<ShardId, ShardRestoreStatus> shards = shardsBuilder.build();\n+ RestoreInProgress.State newState = overallState(RestoreInProgress.State.STARTED, shards);\n+ entries.add(new RestoreInProgress.Entry(entry.snapshot(), newState, entry.indices(), shards));\n+ }\n+ RestoreInProgress updatedRestoreInProgress = new RestoreInProgress(entries.toArray(new RestoreInProgress.Entry[entries.size()]));\n+ ImmutableOpenMap.Builder<String, ClusterState.Custom> builder = ImmutableOpenMap.builder(currentState.getCustoms());\n+ builder.put(RestoreInProgress.TYPE, updatedRestoreInProgress);\n+ ImmutableOpenMap<String, ClusterState.Custom> customs = builder.build();\n+ return ClusterState.builder(currentState).customs(customs).build();\n+ }\n+ return currentState;\n+ }\n+\n+ public static RestoreInProgress.State overallState(RestoreInProgress.State nonCompletedState,\n+ ImmutableOpenMap<ShardId, RestoreInProgress.ShardRestoreStatus> shards) {\n+ boolean hasFailed = false;\n+ for (ObjectCursor<RestoreInProgress.ShardRestoreStatus> status : shards.values()) {\n+ if (!status.value.state().completed()) {\n+ return nonCompletedState;\n }\n- });\n+ if (status.value.state() == RestoreInProgress.State.FAILURE) {\n+ hasFailed = true;\n+ }\n+ }\n+ if (hasFailed) {\n+ return RestoreInProgress.State.FAILURE;\n+ } else {\n+ return RestoreInProgress.State.SUCCESS;\n+ }\n }\n \n- private boolean completed(ImmutableOpenMap<ShardId, RestoreInProgress.ShardRestoreStatus> shards) {\n+ public static boolean completed(ImmutableOpenMap<ShardId, RestoreInProgress.ShardRestoreStatus> shards) {\n for (ObjectCursor<RestoreInProgress.ShardRestoreStatus> status : shards.values()) {\n if (!status.value.state().completed()) {\n return false;\n@@ -683,7 +803,7 @@ private boolean completed(ImmutableOpenMap<ShardId, RestoreInProgress.ShardResto\n return true;\n }\n \n- private int failedShards(ImmutableOpenMap<ShardId, RestoreInProgress.ShardRestoreStatus> shards) {\n+ public static int failedShards(ImmutableOpenMap<ShardId, RestoreInProgress.ShardRestoreStatus> shards) {\n int failedShards = 0;\n for (ObjectCursor<RestoreInProgress.ShardRestoreStatus> status : shards.values()) {\n if (status.value.state() == RestoreInProgress.State.FAILURE) {\n@@ -728,38 +848,20 @@ private void validateSnapshotRestorable(final String repository, final SnapshotI\n }\n \n /**\n- * Checks if any of the deleted indices are still recovering and fails recovery on the shards of these indices\n+ * This method is used by {@link IndexShard} to notify\n+ * {@code RestoreService} about shard restore completion.\n *\n- * @param event cluster changed event\n+ * @param snapshot snapshot\n+ * @param shardId shard id\n */\n- private void processDeletedIndices(ClusterChangedEvent event) {\n- RestoreInProgress restore = 
event.state().custom(RestoreInProgress.TYPE);\n- if (restore == null) {\n- // Not restoring - nothing to do\n- return;\n- }\n-\n- if (!event.indicesDeleted().isEmpty()) {\n- // Some indices were deleted, let's make sure all indices that we are restoring still exist\n- for (RestoreInProgress.Entry entry : restore.entries()) {\n- List<ShardId> shardsToFail = null;\n- for (ObjectObjectCursor<ShardId, ShardRestoreStatus> shard : entry.shards()) {\n- if (!shard.value.state().completed()) {\n- if (!event.state().metaData().hasIndex(shard.key.getIndex().getName())) {\n- if (shardsToFail == null) {\n- shardsToFail = new ArrayList<>();\n- }\n- shardsToFail.add(shard.key);\n- }\n- }\n- }\n- if (shardsToFail != null) {\n- for (ShardId shardId : shardsToFail) {\n- logger.trace(\"[{}] failing running shard restore [{}]\", entry.snapshot(), shardId);\n- updateRestoreStateOnMaster(new UpdateIndexShardRestoreStatusRequest(entry.snapshot(), shardId, new ShardRestoreStatus(null, RestoreInProgress.State.FAILURE, \"index was deleted\")));\n- }\n- }\n- }\n+ public void indexShardRestoreCompleted(Snapshot snapshot, ShardId shardId) {\n+ logger.trace(\"[{}] successfully restored shard [{}]\", snapshot, shardId);\n+ DiscoveryNode masterNode = clusterService.state().nodes().getMasterNode();\n+ if (masterNode != null && masterNode.getVersion().before(Version.V_5_1_0)) {\n+ // just here for backward compatibility with versions before 5.1.0\n+ UpdateIndexShardRestoreStatusRequest request = new UpdateIndexShardRestoreStatusRequest(snapshot, shardId,\n+ new ShardRestoreStatus(clusterService.state().nodes().getLocalNodeId(), RestoreInProgress.State.SUCCESS));\n+ transportService.sendRequest(masterNode, UPDATE_RESTORE_ACTION_NAME, request, EmptyTransportResponseHandler.INSTANCE_SAME);\n }\n }\n \n@@ -768,10 +870,13 @@ private void processDeletedIndices(ClusterChangedEvent event) {\n */\n public void failRestore(Snapshot snapshot, ShardId shardId) {\n logger.debug(\"[{}] failed to restore shard [{}]\", snapshot, shardId);\n- UpdateIndexShardRestoreStatusRequest request = new UpdateIndexShardRestoreStatusRequest(snapshot, shardId,\n+ DiscoveryNode masterNode = clusterService.state().nodes().getMasterNode();\n+ if (masterNode != null && masterNode.getVersion().before(Version.V_5_1_0)) {\n+ // just here for backward compatibility with versions before 5.1.0\n+ UpdateIndexShardRestoreStatusRequest request = new UpdateIndexShardRestoreStatusRequest(snapshot, shardId,\n new ShardRestoreStatus(clusterService.state().nodes().getLocalNodeId(), RestoreInProgress.State.FAILURE));\n- transportService.sendRequest(clusterService.state().nodes().getMasterNode(),\n- UPDATE_RESTORE_ACTION_NAME, request, EmptyTransportResponseHandler.INSTANCE_SAME);\n+ transportService.sendRequest(masterNode, UPDATE_RESTORE_ACTION_NAME, request, EmptyTransportResponseHandler.INSTANCE_SAME);\n+ }\n }\n \n private boolean failed(SnapshotInfo snapshot, String index) {\n@@ -810,34 +915,11 @@ public static void checkIndexClosing(ClusterState currentState, Set<IndexMetaDat\n }\n }\n \n- /**\n- * Adds restore completion listener\n- * <p>\n- * This listener is called for each snapshot that finishes restore operation in the cluster. 
It's responsibility of\n- * the listener to decide if it's called for the appropriate snapshot or not.\n- *\n- * @param listener restore completion listener\n- */\n- public void addListener(ActionListener<RestoreCompletionResponse> listener) {\n- this.listeners.add(listener);\n- }\n-\n- /**\n- * Removes restore completion listener\n- * <p>\n- * This listener is called for each snapshot that finishes restore operation in the cluster.\n- *\n- * @param listener restore completion listener\n- */\n- public void removeListener(ActionListener<RestoreCompletionResponse> listener) {\n- this.listeners.remove(listener);\n- }\n-\n @Override\n public void clusterChanged(ClusterChangedEvent event) {\n try {\n if (event.localNodeMaster()) {\n- processDeletedIndices(event);\n+ cleanupRestoreState(event);\n }\n } catch (Exception t) {\n logger.warn(\"Failed to update restore state \", t);\n@@ -1122,7 +1204,8 @@ public String toString() {\n class UpdateRestoreStateRequestHandler implements TransportRequestHandler<UpdateIndexShardRestoreStatusRequest> {\n @Override\n public void messageReceived(UpdateIndexShardRestoreStatusRequest request, final TransportChannel channel) throws Exception {\n- updateRestoreStateOnMaster(request);\n+ // just here for backward compatibility, no need to do anything, there is already a parallel shard started / failed request\n+ // that contains all relevant information needed.\n channel.sendResponse(TransportResponse.Empty.INSTANCE);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/snapshots/RestoreService.java", "status": "modified" }, { "diff": "@@ -33,7 +33,7 @@\n import static java.util.Collections.singletonList;\n import static org.hamcrest.Matchers.contains;\n import static org.mockito.Matchers.any;\n-import static org.mockito.Matchers.anyCollectionOf;\n+import static org.mockito.Matchers.anySetOf;\n import static org.mockito.Mockito.mock;\n import static org.mockito.Mockito.when;\n \n@@ -45,7 +45,7 @@ public class MetaDataIndexAliasesServiceTests extends ESTestCase {\n \n public MetaDataIndexAliasesServiceTests() {\n // Mock any deletes so we don't need to worry about how MetaDataDeleteIndexService does its job\n- when(deleteIndexService.deleteIndices(any(ClusterState.class), anyCollectionOf(Index.class))).then(i -> {\n+ when(deleteIndexService.deleteIndices(any(ClusterState.class), anySetOf(Index.class))).then(i -> {\n ClusterState state = (ClusterState) i.getArguments()[0];\n @SuppressWarnings(\"unchecked\")\n Collection<Index> indices = (Collection<Index>) i.getArguments()[1];", "filename": "core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesServiceTests.java", "status": "modified" }, { "diff": "@@ -689,6 +689,47 @@ public void testDataFileFailureDuringRestore() throws Exception {\n logger.info(\"--> total number of simulated failures during restore: [{}]\", getFailureCount(\"test-repo\"));\n }\n \n+ public void testDataFileCorruptionDuringRestore() throws Exception {\n+ Path repositoryLocation = randomRepoPath();\n+ Client client = client();\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(Settings.builder().put(\"location\", repositoryLocation)));\n+\n+ prepareCreate(\"test-idx\").setSettings(Settings.builder().put(\"index.allocation.max_retries\", Integer.MAX_VALUE)).get();\n+ ensureGreen();\n+\n+ logger.info(\"--> indexing some data\");\n+ for (int i = 0; i < 100; i++) {\n+ index(\"test-idx\", \"doc\", Integer.toString(i), 
\"foo\", \"bar\" + i);\n+ }\n+ refresh();\n+ assertThat(client.prepareSearch(\"test-idx\").setSize(0).get().getHits().totalHits(), equalTo(100L));\n+\n+ logger.info(\"--> snapshot\");\n+ CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true).setIndices(\"test-idx\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().state(), equalTo(SnapshotState.SUCCESS));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().totalShards(), equalTo(createSnapshotResponse.getSnapshotInfo().successfulShards()));\n+\n+ logger.info(\"--> update repository with mock version\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"mock\").setSettings(\n+ Settings.builder()\n+ .put(\"location\", repositoryLocation)\n+ .put(\"random\", randomAsciiOfLength(10))\n+ .put(\"use_lucene_corruption\", true)\n+ .put(\"max_failure_number\", Long.MAX_VALUE)\n+ .put(\"random_data_file_io_exception_rate\", 1.0)));\n+\n+ // Test restore after index deletion\n+ logger.info(\"--> delete index\");\n+ cluster().wipeIndices(\"test-idx\");\n+ logger.info(\"--> restore corrupt index\");\n+ RestoreSnapshotResponse restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true).execute().actionGet();\n+ assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n+ assertThat(restoreSnapshotResponse.getRestoreInfo().failedShards(), equalTo(restoreSnapshotResponse.getRestoreInfo().totalShards()));\n+ }\n+\n public void testDeletionOfFailingToRecoverIndexShouldStopRestore() throws Exception {\n Path repositoryLocation = randomRepoPath();\n Client client = client();\n@@ -2202,32 +2243,6 @@ public void testBatchingShardUpdateTask() throws Exception {\n assertFalse(snapshotListener.timedOut());\n // Check that cluster state update task was called only once\n assertEquals(1, snapshotListener.count());\n-\n- logger.info(\"--> close indices\");\n- client.admin().indices().prepareClose(\"test-idx\").get();\n-\n- BlockingClusterStateListener restoreListener = new BlockingClusterStateListener(clusterService, \"restore_snapshot[\", \"update snapshot state\", Priority.HIGH);\n-\n- try {\n- clusterService.addFirst(restoreListener);\n- logger.info(\"--> restore snapshot\");\n- ListenableActionFuture<RestoreSnapshotResponse> futureRestore = client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true).execute();\n-\n- // Await until shard updates are in pending state.\n- assertBusyPendingTasks(\"update snapshot state\", numberOfShards);\n- restoreListener.unblock();\n-\n- RestoreSnapshotResponse restoreSnapshotResponse = futureRestore.actionGet();\n- assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), equalTo(numberOfShards));\n-\n- } finally {\n- clusterService.remove(restoreListener);\n- }\n-\n- // Check that we didn't timeout\n- assertFalse(restoreListener.timedOut());\n- // Check that cluster state update task was called only once\n- assertEquals(1, restoreListener.count());\n }\n \n public void testSnapshotName() throws Exception {", "filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import java.util.concurrent.ConcurrentMap;\n import java.util.concurrent.atomic.AtomicLong;\n \n+import org.apache.lucene.index.CorruptIndexException;\n import 
org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.metadata.RepositoryMetaData;\n@@ -81,6 +82,8 @@ public long getFailureCount() {\n \n private final double randomDataFileIOExceptionRate;\n \n+ private final boolean useLuceneCorruptionException;\n+\n private final long maximumNumberOfFailures;\n \n private final long waitAfterUnblock;\n@@ -101,6 +104,7 @@ public MockRepository(RepositoryMetaData metadata, Environment environment) thro\n super(overrideSettings(metadata, environment), environment);\n randomControlIOExceptionRate = metadata.settings().getAsDouble(\"random_control_io_exception_rate\", 0.0);\n randomDataFileIOExceptionRate = metadata.settings().getAsDouble(\"random_data_file_io_exception_rate\", 0.0);\n+ useLuceneCorruptionException = metadata.settings().getAsBoolean(\"use_lucene_corruption\", false);\n maximumNumberOfFailures = metadata.settings().getAsLong(\"max_failure_number\", 100L);\n blockOnControlFiles = metadata.settings().getAsBoolean(\"block_on_control\", false);\n blockOnDataFiles = metadata.settings().getAsBoolean(\"block_on_data\", false);\n@@ -245,7 +249,11 @@ private void maybeIOExceptionOrBlock(String blobName) throws IOException {\n if (blobName.startsWith(\"__\")) {\n if (shouldFail(blobName, randomDataFileIOExceptionRate) && (incrementAndGetFailureCount() < maximumNumberOfFailures)) {\n logger.info(\"throwing random IOException for file [{}] at path [{}]\", blobName, path());\n- throw new IOException(\"Random IOException\");\n+ if (useLuceneCorruptionException) {\n+ throw new CorruptIndexException(\"Random corruption\", \"random file\");\n+ } else {\n+ throw new IOException(\"Random IOException\");\n+ }\n } else if (blockOnDataFiles) {\n logger.info(\"blocking I/O operation for file [{}] at path [{}]\", blobName, path());\n if (blockExecution() && waitAfterUnblock > 0) {", "filename": "core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepository.java", "status": "modified" } ] }
{ "body": "Before this commit `curl -XHEAD localhost:9200?pretty` would return\n`Content-Length: 1` and a body which is fairly upsetting to standards\ncompliant tools. Now it'll return `Content-Length: 0` with an empty\nbody like every other `HEAD` request.\n\nRelates to #21075\n", "comments": [ { "body": "Thanks for reviewing @jasontedor ! Merged to:\n\nmaster: 8cc22eb960d53905cab3697f4de88c8cd8278601\n5.x: 2a18d2b6318011653ef7f467465b6014291da87b\n5.0: 5f7ce3cfd88710e09b8e2f6a1f236b50e5abe322\n", "created_at": "2016-10-21T20:48:54Z" }, { "body": "Thank you so much @jasontedor and @nik9000, amazing turn around time!\n", "created_at": "2016-10-22T11:47:33Z" }, { "body": "> LGTM. Can you also open a blocker issue for 6.0.0 to address the content-length header?\n\nThis was partially addressed in #21123, at least the correct content-length can be passed through now. We still have some handlers that handle head requests on their own and that needs to be removed.\n", "created_at": "2016-10-26T04:35:01Z" } ], "number": 21077, "title": "Make sure HEAD / has 0 Content-Length" }
{ "body": "This commit fixes responses to HEAD requests so that the value of the\nContent-Length is correct per the HTTP spec. Namely, the value of this\nheader should be equal to the Content-Length if the request were not a\nHEAD request.\n\nThis commit also fixes a memory leak on HEAD requests to the main action\nthat arose from the bytes on a builder not being released due to them\nbeing dropped on the floor to ensure that the response to the main\naction did not have a body.\n\nRelates #21077\n", "number": 21123, "review_comments": [ { "body": "These aren't so much right as what we do now.\n", "created_at": "2016-10-26T01:52:52Z" }, { "body": "This comment is out of date now.\n", "created_at": "2016-10-26T01:52:59Z" }, { "body": "The `0` in this is wrong now.\n", "created_at": "2016-10-26T01:53:09Z" }, { "body": "Right, and we aren't fixing that right now.\n", "created_at": "2016-10-26T02:01:48Z" }, { "body": "extra `+` in the message... (\"+ Content-Length\" -> \" Content-Length\")\n", "created_at": "2016-10-26T02:26:04Z" } ], "title": "Add correct Content-Length on HEAD requests" }
{ "commits": [ { "message": "Add correct Content-Length on HEAD requests\n\nThis commit fixes responses to HEAD requests so that the value of the\nContent-Length is correct per the HTTP spec. Namely, the value of this\nheader should be equal to the Content-Length if the request were not a\nHEAD request.\n\nThis commit also fixes a memory leak on HEAD requests to the main action\nthat arose from the bytes on a builder not being released due to them\nbeing dropped on the floor to ensure that the response to the main\naction did not have a body." }, { "message": "Fix comment and string in head test case assertion\n\nThis commit fixes some stale comments and strings in the head test case\nassertion." }, { "message": "Allow HEAD body is empty to run against Netty 3/4\n\nThis commit allows the HEAD body is empty test class to run against\nNetty 3 and Netty 4." }, { "message": "Merge branch 'master' into head-content-length\n\n* master:\n Makes search action cancelable by task management API" }, { "message": "Fix failing REST main action test\n\nWe no longer handle head requests in the main action, but instead on the\nREST layer." }, { "message": "Avoid NPEs in Netty HTTP channels" } ], "files": [ { "diff": "@@ -60,9 +60,6 @@ public RestResponse buildResponse(MainResponse mainResponse, XContentBuilder bui\n \n static BytesRestResponse convertMainResponse(MainResponse response, RestRequest request, XContentBuilder builder) throws IOException {\n RestStatus status = response.isAvailable() ? RestStatus.OK : RestStatus.SERVICE_UNAVAILABLE;\n- if (request.method() == RestRequest.Method.HEAD) {\n- return new BytesRestResponse(status, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY);\n- }\n \n // Default to pretty printing, but allow ?pretty=false to disable\n if (request.hasParam(\"pretty\") == false) {", "filename": "core/src/main/java/org/elasticsearch/rest/action/RestMainAction.java", "status": "modified" }, { "diff": "@@ -61,9 +61,9 @@ public Method method() {\n BytesRestResponse response = RestMainAction.convertMainResponse(mainResponse, restRequest, builder);\n assertNotNull(response);\n assertEquals(expectedStatus, response.status());\n- assertEquals(0, response.content().length());\n \n- assertEquals(0, builder.bytes().length());\n+ // the empty responses are handled in the HTTP layer so we do\n+ // not assert on them here\n }\n \n public void testGetResponse() throws Exception {", "filename": "core/src/test/java/org/elasticsearch/rest/action/RestMainActionTests.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.rest.RestResponse;\n import org.elasticsearch.rest.RestStatus;\n import org.jboss.netty.buffer.ChannelBuffer;\n+import org.jboss.netty.buffer.ChannelBuffers;\n import org.jboss.netty.channel.Channel;\n import org.jboss.netty.channel.ChannelFuture;\n import org.jboss.netty.channel.ChannelFutureListener;\n@@ -41,6 +42,7 @@\n import org.jboss.netty.handler.codec.http.CookieEncoder;\n import org.jboss.netty.handler.codec.http.DefaultHttpResponse;\n import org.jboss.netty.handler.codec.http.HttpHeaders;\n+import org.jboss.netty.handler.codec.http.HttpMethod;\n import org.jboss.netty.handler.codec.http.HttpResponse;\n import org.jboss.netty.handler.codec.http.HttpResponseStatus;\n import org.jboss.netty.handler.codec.http.HttpVersion;\n@@ -109,7 +111,11 @@ public void sendResponse(RestResponse response) {\n boolean addedReleaseListener = false;\n try {\n buffer = Netty3Utils.toChannelBuffer(content);\n- resp.setContent(buffer);\n+ if 
(HttpMethod.HEAD.equals(nettyRequest.getMethod())) {\n+ resp.setContent(ChannelBuffers.EMPTY_BUFFER);\n+ } else {\n+ resp.setContent(buffer);\n+ }\n \n // If our response doesn't specify a content-type header, set one\n setHeaderField(resp, HttpHeaders.Names.CONTENT_TYPE, response.contentType(), false);", "filename": "modules/transport-netty3/src/main/java/org/elasticsearch/http/netty3/Netty3HttpChannel.java", "status": "modified" }, { "diff": "@@ -0,0 +1,23 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.rest;\n+\n+public class Netty3HeadBodyIsEmptyIT extends HeadBodyIsEmptyIntegTestCase {\n+}", "filename": "modules/transport-netty3/src/test/java/org/elasticsearch/rest/Netty3HeadBodyIsEmptyIT.java", "status": "added" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.http.netty4;\n \n import io.netty.buffer.ByteBuf;\n+import io.netty.buffer.Unpooled;\n import io.netty.channel.Channel;\n import io.netty.channel.ChannelFutureListener;\n import io.netty.channel.ChannelPromise;\n@@ -29,6 +30,7 @@\n import io.netty.handler.codec.http.HttpHeaderNames;\n import io.netty.handler.codec.http.HttpHeaderValues;\n import io.netty.handler.codec.http.HttpHeaders;\n+import io.netty.handler.codec.http.HttpMethod;\n import io.netty.handler.codec.http.HttpResponse;\n import io.netty.handler.codec.http.HttpResponseStatus;\n import io.netty.handler.codec.http.HttpVersion;\n@@ -87,13 +89,17 @@ public BytesStreamOutput newBytesOutput() {\n return new ReleasableBytesStreamOutput(transport.bigArrays);\n }\n \n-\n @Override\n public void sendResponse(RestResponse response) {\n // if the response object was created upstream, then use it;\n // otherwise, create a new one\n ByteBuf buffer = Netty4Utils.toByteBuf(response.content());\n- FullHttpResponse resp = newResponse(buffer);\n+ final FullHttpResponse resp;\n+ if (HttpMethod.HEAD.equals(nettyRequest.method())) {\n+ resp = newResponse(Unpooled.EMPTY_BUFFER);\n+ } else {\n+ resp = newResponse(buffer);\n+ }\n resp.setStatus(getStatus(response.status()));\n \n Netty4CorsHandler.setCorsResponseHeaders(nettyRequest, resp, transport.getCorsConfig());", "filename": "modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpChannel.java", "status": "modified" }, { "diff": "@@ -0,0 +1,23 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.rest;\n+\n+public class Netty4HeadBodyIsEmptyIT extends HeadBodyIsEmptyIntegTestCase {\n+}", "filename": "modules/transport-netty4/src/test/java/org/elasticsearch/rest/Netty4HeadBodyIsEmptyIT.java", "status": "added" } ] }
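The Netty 3 and Netty 4 channel changes above substitute an empty buffer for HEAD responses while the headers are still built from the full payload. The following plain-Java sketch (illustrative names, no Netty dependency) captures the contract the PR title describes: the Content-Length of a HEAD response equals what the GET body would have been, but no body bytes are written.

```java
import java.nio.charset.StandardCharsets;

/**
 * Illustrative only: a HEAD response advertises the Content-Length the GET body
 * would have had, but omits the body bytes themselves.
 */
public class HeadResponseSketch {

    static byte[] buildResponse(String method, byte[] body) {
        String head = "HTTP/1.1 200 OK\r\n"
            + "Content-Type: application/json\r\n"
            + "Content-Length: " + body.length + "\r\n" // length of the full body, even for HEAD
            + "\r\n";
        byte[] headBytes = head.getBytes(StandardCharsets.UTF_8);
        if ("HEAD".equals(method)) {
            return headBytes; // headers only, no body bytes
        }
        byte[] response = new byte[headBytes.length + body.length];
        System.arraycopy(headBytes, 0, response, 0, headBytes.length);
        System.arraycopy(body, 0, response, headBytes.length, body.length);
        return response;
    }

    public static void main(String[] args) {
        byte[] body = "{\"tagline\":\"You Know, for Search\"}".getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(buildResponse("HEAD", body), StandardCharsets.UTF_8));
        System.out.println(new String(buildResponse("GET", body), StandardCharsets.UTF_8));
    }
}
```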
{ "body": "Versions before 2.0 needed to be told to return interesting fields\nlike `_parent`, `_routing`, and `_ttl`. And they come\nback inside a `fields` block which we need to parse.\n\nCloses #21044\n", "comments": [ { "body": "This is also a bug in 5.0.0 but I don't think it is going to make it....\n", "created_at": "2016-10-21T15:49:28Z" }, { "body": "@dakrone, would you mind having a look? I think you reviewed the reindex-from-remote work before....\n", "created_at": "2016-10-21T15:49:51Z" }, { "body": "I've been kicking the can with regard to _ttl and _timestamp for a while now. I'll keep doing _something_ with this so long as we keep supporting them. And it is probably always going to be a good idea to parse them anyway so people can use reindex with a script to migrate from _timestamp to something else.\n", "created_at": "2016-10-21T17:12:53Z" }, { "body": "Thanks again @dakrone!\n\nmaster: 18393a06f373a2132972d5551bc9ed3ae28022e6\n5.x: 2abdd90118c0e5a68f870215d6494dab1a06b433\n5.0: <coming> (later)\n", "created_at": "2016-10-21T18:16:00Z" }, { "body": "I just backported this to the 5.0 branch so it'll go out with 5.0.1.\n", "created_at": "2016-10-28T17:23:19Z" } ], "number": 21070, "title": "Fix reindex-from-remote for parent/child from <2.0" }
{ "body": "Reindex-from-remote in 5.0.0 will fail to reindex parent-child docs\nfrom a 1.7 cluster. This adds a warning to the 5.0.0 docs. See #21070 and #21044.\n", "number": 21120, "review_comments": [], "title": "[DOCS] warn of using 5.0.0 reindex-remote for 1.7 parent-child docs" }
{ "commits": [ { "message": "[DOCS] warn of using 5.0.0 reindex-remote for 1.7 parent-child docs\n\nReindex-from-remote in 5.0.0 will fail to reindex parent-child docs\nfrom a 1.7 cluster." } ], "files": [ { "diff": "@@ -418,6 +418,17 @@ you are likely to find. This should allow you to upgrade from any version of\n Elasticsearch to the current version by reindexing from a cluster of the old\n version.\n \n+[WARNING]\n+=============================================\n+\n+Reindex-from-remote in Elasticsearch 5.0.0 will not preserve `_parent`,\n+`_routing`, or `_ttl` fields from a 1.x cluster. Re-indexing documents\n+whose mappings require any of these fields, such as child documents, will\n+fail. To re-index documents requiring these fields, use Elasticsearch 5.0.1\n+or later.\n+\n+=============================================\n+\n To enable queries sent to older versions of Elasticsearch the `query` parameter\n is sent directly to the remote host without validation or modification.\n ", "filename": "docs/reference/docs/reindex.asciidoc", "status": "modified" }, { "diff": "@@ -81,6 +81,12 @@ the previous major version to be upgraded to the current major version. By\n moving directly from Elasticsearch 1.x to 5.x, you will have to solve any\n backwards compatibility issues yourself.\n \n+Reindex-from-remote in Elasticsearch 5.0.0 will not preserve `_parent`,\n+`_routing`, or `_ttl` fields from a 1.x cluster. Re-indexing documents\n+whose mappings require any of these fields, such as child documents, will\n+fail. To re-index documents requiring these fields, use Elasticsearch 5.0.1\n+or later.\n+\n =============================================\n \n You will need to set up a 5.x cluster alongside your existing 1.x cluster.", "filename": "docs/reference/setup/reindex_upgrade.asciidoc", "status": "modified" } ] }
{ "body": "Unable to replicate, but I do not have a Windows machine for testing.\n\nREPRODUCE WITH: gradle :core:integTest -Dtests.seed=4DA46F7D8DF8CF05 -Dtests.class=org.elasticsearch.search.aggregations.bucket.SignificantTermsSignificanceScoreIT -Dtests.method=\"testScriptScore\" -Des.logger.level=DEBUG -Dtests.security.manager=true -Dtests.nightly=false -Dtests.heap.size=1024m -Dtests.jvm.argline=\"-server -XX:+UseConcMarkSweepGC -XX:-UseCompressedOops -XX:+AggressiveOpts\" -Dtests.locale=sr-Latn-RS -Dtests.timezone=Asia/Chongqing\n\nBuild Failure: (http://build-us-00.elastic.co/job/es_core_master_window-2008/3699/testReport/junit/org.elasticsearch.search.aggregations.bucket/SignificantTermsSignificanceScoreIT/testScriptScore/)\n", "comments": [ { "body": "Ditto - wouldn't replicate on OSX here. Any chance of a quick Windows test @costin ?\n", "created_at": "2016-05-04T18:18:51Z" }, { "body": "This fails around 3/4 times a month in our CI. @jpountz pushed https://github.com/elastic/elasticsearch/commit/cad959b to ease debugging which fields are null.\n\nIt does not only fail on Windows. \n\nAs far as I can see the null field is not always the same: I've seen `_superset_size` being null, and also `_subset_freq`. Here is the latest failure: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+periodic/1601/console .\n\n@markharwood would you mind taking another look? maybe @colings86 would like to check as well?\n", "created_at": "2016-07-22T07:26:32Z" }, { "body": "@markharwood is this still a problem?\n", "created_at": "2016-10-18T07:49:15Z" }, { "body": "I've not seen any examples of failures in the available history:\nhttps://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+periodic/311/testReport/junit/org.elasticsearch.search.aggregations.bucket/SignificantTermsSignificanceScoreIT/history/?start=25 \n", "created_at": "2016-10-19T08:08:32Z" }, { "body": "This is still a problem, last failure was two days ago: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+g1gc/328/console . Still happens a few times a month.\n", "created_at": "2016-10-19T08:46:55Z" }, { "body": "OK, thanks. Will take a look\n", "created_at": "2016-10-19T08:58:31Z" }, { "body": "Could not reproduce with that seed over many iterations.\n\nThe debug code that Adrien added shows that the very first parameter it tries to retrieve (subset_freq) is null.\nThis is also the first of many non-null parameters set in ScriptHeuristic.initialize():\n\n```\npublic void initialize(ExecutableScript executableScript) {\n this.executableScript = executableScript;\n this.executableScript.setNextVar(\"_subset_freq\", subsetDfHolder);\n this.executableScript.setNextVar(\"_subset_size\", subsetSizeHolder);\n this.executableScript.setNextVar(\"_superset_freq\", supersetDfHolder);\n this.executableScript.setNextVar(\"_superset_size\", supersetSizeHolder);\n}\n```\n\nI notice elsewhere that there is a reference to the fact that the test framework can sometimes invoke script.run() before initialization is complete:\n\n```\n@Override\npublic double getScore(long subsetFreq, long subsetSize, long supersetFreq, long supersetSize) {\n if (executableScript == null) {\n //In tests, wehn calling assertSearchResponse(..) 
the response is streamed one additional time with an arbitrary version, see assertVersionSerializable(..).\n // Now, for version before 1.5.0 the score is computed after streaming the response but for scripts the script does not exists yet.\n // assertSearchResponse() might therefore fail although there is no problem.\n // This should be replaced by an exception in 2.0.\n ESLoggerFactory.getLogger(\"script heuristic\").warn(\"cannot compute score - script has not been initialized yet.\");\n return 0;\n }\n subsetSizeHolder.value = subsetSize;\n supersetSizeHolder.value = supersetSize;\n subsetDfHolder.value = subsetFreq;\n supersetDfHolder.value = supersetFreq;\n return ((Number) executableScript.run()).doubleValue();\n}\n```\n\nIf this is possible/acceptable then maybe we should reverse the order of the operations in the initialize method - set the script variables up _and only then_ set this.executableScript to the fully initialized object. Any failure to set the variables properly will result in a null script which when getScore() is called will log the appropriate error message \"script has not been initialized yet\". \n", "created_at": "2016-10-19T11:20:16Z" }, { "body": "I remember talking to @brwe about that warning when she created the script score but I don't remember exactly why we needed it. @brwe do you remember?\n", "created_at": "2016-10-19T12:11:35Z" }, { "body": "Here's another one: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+5.0+java9-periodic/367/console\n", "created_at": "2016-10-20T12:09:31Z" }, { "body": "Added what could be a fix in master: https://github.com/elastic/elasticsearch/commit/4a815bf6654b5e271942a62d3f2b0dc04c408de9\n\nWill push to other branches if we conclude that this makes the difference\n", "created_at": "2016-10-20T13:33:08Z" }, { "body": "@colings86 I do not remember all of it but believe this was an < 2.0 problem and we should really replace the log message for script = null with an assertion. \n@markharwood I do not think that changing the order is a fix here but also do not fully understand what is going on yet. I will dig.\n", "created_at": "2016-10-20T16:47:41Z" }, { "body": "I think the problem is that the ScriptHeuristic object is reused and hence accessed from different threads concurrently. How exactly this happens and if it is a problem of our tests only I cannot say yet because I am struggling to navigate aggregations. Will continue Monday. \n", "created_at": "2016-10-21T15:42:41Z" }, { "body": "If a coordinating node routes search requests to local shards they will not be streamed but the same aggregations builder object will be used for each individual shards that the request is executed on. When the aggregation is build significant terms heuristic is reused and hence several shards can end up with the same ScriptHeuristic object and also modify it concurrently. Instead the builders should make a copy and pass these to the aggregators. Hope I got the nomenclature right. I can work on a fix. \n", "created_at": "2016-10-24T12:40:16Z" }, { "body": "@brwe that makes sense, we have to do something similar for the Scripted Metric Aggregation\n", "created_at": "2016-10-24T13:08:29Z" }, { "body": "One correction to what I wrote above: The script must be be reused either because we set variables in it. We need one instance per shard.\n", "created_at": "2016-10-24T13:35:08Z" } ], "number": 18120, "title": "Build Failure: org.elasticsearch.search.aggregations.bucket.SignificantTermsSignificanceScoreIT.testScriptScore" }
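The diagnosis in the final comments, a single heuristic object with mutable holder fields shared by several shard-level search threads, boils down to a classic staged-input race. Below is a self-contained sketch of that hazard; the classes are illustrative, not the Elasticsearch ones.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Minimal sketch of the hazard described above: a scorer that stages its inputs
 * in shared mutable fields before computing is not safe to share across threads.
 */
public class SharedScorerSketch {

    static class UnsafeScorer {
        long subsetFreq; // staged input, shared by every caller
        long subsetSize;

        double score(long freq, long size) {
            subsetFreq = freq; // thread A writes here ...
            subsetSize = size; // ... and thread B may overwrite before A reads below
            return (double) subsetFreq / subsetSize;
        }
    }

    public static void main(String[] args) throws Exception {
        UnsafeScorer shared = new UnsafeScorer();
        AtomicBoolean sawWrongResult = new AtomicBoolean();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int t = 0; t < 4; t++) {
            final long freq = t + 1;
            pool.execute(() -> {
                for (int i = 0; i < 1_000_000; i++) {
                    double expected = (double) freq / 100;
                    if (shared.score(freq, 100) != expected) {
                        sawWrongResult.set(true); // another thread's staged value leaked in
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("interleaved writes observed: " + sawWrongResult.get());
    }
}
```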
{ "body": "The ScriptedHeuristic objects used in significant terms aggs were not thread safe when running local to the coordinating node. The new code spawns an object for each shard search execution rather than sharing a common ScriptedHeuristic instance which is not thread safe.\n\nCloses #18120\n", "number": 21113, "review_comments": [ { "body": "nit: space before {\n", "created_at": "2016-10-25T16:34:05Z" } ], "title": "Thread safety for scripted significance heuristics" }
{ "commits": [ { "message": "Aggregations fix: scripted heuristics for scoring significant_terms aggs were not thread safe when running local to the coordinating node. New code spawns an object for each shard search execution rather than sharing a common instance which is not thread safe.\nCloses #18120" }, { "message": "Added comments and docs as per review comments" }, { "message": "Javadoc fix" } ], "files": [ { "diff": "@@ -197,13 +197,13 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n }\n }\n \n- getSignificanceHeuristic().initialize(reduceContext);\n+ SignificanceHeuristic heuristic = getSignificanceHeuristic().rewrite(reduceContext);\n final int size = Math.min(requiredSize, buckets.size());\n BucketSignificancePriorityQueue<B> ordered = new BucketSignificancePriorityQueue<>(size);\n for (Map.Entry<String, List<B>> entry : buckets.entrySet()) {\n List<B> sameTermBuckets = entry.getValue();\n final B b = sameTermBuckets.get(0).reduce(sameTermBuckets, reduceContext);\n- b.updateScore(getSignificanceHeuristic());\n+ b.updateScore(heuristic);\n if ((b.score > 0) && (b.subsetDf >= minDocCount)) {\n ordered.insertWithOverflow(b);\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/InternalSignificantTerms.java", "status": "modified" }, { "diff": "@@ -217,9 +217,9 @@ public SignificanceHeuristic significanceHeuristic() {\n @Override\n protected ValuesSourceAggregatorFactory<ValuesSource, ?> innerBuild(AggregationContext context, ValuesSourceConfig<ValuesSource> config,\n AggregatorFactory<?> parent, Builder subFactoriesBuilder) throws IOException {\n- this.significanceHeuristic.initialize(context.searchContext());\n+ SignificanceHeuristic executionHeuristic = this.significanceHeuristic.rewrite(context.searchContext());\n return new SignificantTermsAggregatorFactory(name, type, config, includeExclude, executionHint, filterBuilder,\n- bucketCountThresholds, significanceHeuristic, context, parent, subFactoriesBuilder, metaData);\n+ bucketCountThresholds, executionHeuristic, context, parent, subFactoriesBuilder, metaData);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -24,7 +24,6 @@\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n-import org.elasticsearch.common.logging.ESLoggerFactory;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.QueryShardException;\n@@ -43,18 +42,41 @@\n public class ScriptHeuristic extends SignificanceHeuristic {\n public static final String NAME = \"script_heuristic\";\n \n- private final LongAccessor subsetSizeHolder;\n- private final LongAccessor supersetSizeHolder;\n- private final LongAccessor subsetDfHolder;\n- private final LongAccessor supersetDfHolder;\n private final Script script;\n- ExecutableScript executableScript = null;\n+ \n+ // This class holds an executable form of the script with private variables ready for execution\n+ // on a single search thread.\n+ static class ExecutableScriptHeuristic extends ScriptHeuristic {\n+ private final LongAccessor subsetSizeHolder;\n+ private final LongAccessor supersetSizeHolder;\n+ private final LongAccessor subsetDfHolder;\n+ private final LongAccessor 
supersetDfHolder;\n+ private final ExecutableScript executableScript;\n+\n+ ExecutableScriptHeuristic(Script script, ExecutableScript executableScript){\n+ super(script);\n+ subsetSizeHolder = new LongAccessor();\n+ supersetSizeHolder = new LongAccessor();\n+ subsetDfHolder = new LongAccessor();\n+ supersetDfHolder = new LongAccessor();\n+ this.executableScript = executableScript;\n+ executableScript.setNextVar(\"_subset_freq\", subsetDfHolder);\n+ executableScript.setNextVar(\"_subset_size\", subsetSizeHolder);\n+ executableScript.setNextVar(\"_superset_freq\", supersetDfHolder);\n+ executableScript.setNextVar(\"_superset_size\", supersetSizeHolder);\n+ }\n+\n+ @Override\n+ public double getScore(long subsetFreq, long subsetSize, long supersetFreq, long supersetSize) {\n+ subsetSizeHolder.value = subsetSize;\n+ supersetSizeHolder.value = supersetSize;\n+ subsetDfHolder.value = subsetFreq;\n+ supersetDfHolder.value = supersetFreq;\n+ return ((Number) executableScript.run()).doubleValue(); \n+ }\n+ }\n \n public ScriptHeuristic(Script script) {\n- subsetSizeHolder = new LongAccessor();\n- supersetSizeHolder = new LongAccessor();\n- subsetDfHolder = new LongAccessor();\n- supersetDfHolder = new LongAccessor();\n this.script = script;\n }\n \n@@ -71,22 +93,15 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public void initialize(InternalAggregation.ReduceContext context) {\n- initialize(context.scriptService().executable(script, ScriptContext.Standard.AGGS, Collections.emptyMap()));\n+ public SignificanceHeuristic rewrite(InternalAggregation.ReduceContext context) {\n+ return new ExecutableScriptHeuristic(script, context.scriptService().executable(script, ScriptContext.Standard.AGGS, Collections.emptyMap()));\n }\n \n @Override\n- public void initialize(SearchContext context) {\n- initialize(context.getQueryShardContext().getExecutableScript(script, ScriptContext.Standard.AGGS, Collections.emptyMap()));\n+ public SignificanceHeuristic rewrite(SearchContext context) {\n+ return new ExecutableScriptHeuristic(script, context.getQueryShardContext().getExecutableScript(script, ScriptContext.Standard.AGGS, Collections.emptyMap()));\n }\n \n- public void initialize(ExecutableScript executableScript) {\n- executableScript.setNextVar(\"_subset_freq\", subsetDfHolder);\n- executableScript.setNextVar(\"_subset_size\", subsetSizeHolder);\n- executableScript.setNextVar(\"_superset_freq\", supersetDfHolder);\n- executableScript.setNextVar(\"_superset_size\", supersetSizeHolder);\n- this.executableScript = executableScript;\n- }\n \n /**\n * Calculates score with a script\n@@ -99,19 +114,7 @@ public void initialize(ExecutableScript executableScript) {\n */\n @Override\n public double getScore(long subsetFreq, long subsetSize, long supersetFreq, long supersetSize) {\n- if (executableScript == null) {\n- //In tests, wehn calling assertSearchResponse(..) 
the response is streamed one additional time with an arbitrary version, see assertVersionSerializable(..).\n- // Now, for version before 1.5.0 the score is computed after streaming the response but for scripts the script does not exists yet.\n- // assertSearchResponse() might therefore fail although there is no problem.\n- // This should be replaced by an exception in 2.0.\n- ESLoggerFactory.getLogger(\"script heuristic\").warn(\"cannot compute score - script has not been initialized yet.\");\n- return 0;\n- }\n- subsetSizeHolder.value = subsetSize;\n- supersetSizeHolder.value = supersetSize;\n- subsetDfHolder.value = subsetFreq;\n- supersetDfHolder.value = supersetFreq;\n- return ((Number) executableScript.run()).doubleValue();\n+ throw new UnsupportedOperationException(\"This scoring heuristic must have 'rewrite' called on it to provide a version ready for use\");\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/ScriptHeuristic.java", "status": "modified" }, { "diff": "@@ -50,11 +50,23 @@ protected void checkFrequencyValidity(long subsetFreq, long subsetSize, long sup\n }\n }\n \n- public void initialize(InternalAggregation.ReduceContext reduceContext) {\n-\n+ /**\n+ * Provides a hook for subclasses to provide a version of the heuristic\n+ * prepared for execution on data on the coordinating node.\n+ * @param reduceContext the reduce context on the coordinating node\n+ * @return a version of this heuristic suitable for execution\n+ */\n+ public SignificanceHeuristic rewrite(InternalAggregation.ReduceContext reduceContext) {\n+ return this;\n }\n \n- public void initialize(SearchContext context) {\n-\n+ /**\n+ * Provides a hook for subclasses to provide a version of the heuristic\n+ * prepared for execution on data on a shard. \n+ * @param context the search context on the data node\n+ * @return a version of this heuristic suitable for execution\n+ */\n+ public SignificanceHeuristic rewrite(SearchContext context) {\n+ return this;\n }\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristic.java", "status": "modified" } ] }
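The diff above replaces `initialize()` with `rewrite()`, so the shared heuristic acts as a factory and each shard or reduce phase gets its own `ExecutableScriptHeuristic` holding the mutable accessors. A minimal sketch of the same pattern, with illustrative names rather than the Elasticsearch classes:

```java
/**
 * Minimal sketch of the fix's pattern: the shared, stateless object is a
 * factory, and each execution gets its own instance that owns the mutable state.
 */
public class PerExecutionScorerSketch {

    interface Scorer {
        double score(long subsetFreq, long subsetSize);
    }

    /** Shared prototype: safe to keep on the request because it holds no mutable state. */
    static class ScorerPrototype {
        Scorer rewriteForExecution() {
            return new Scorer() {
                private long subsetFreq; // private to this execution, so no cross-thread interference
                private long subsetSize;

                @Override
                public double score(long freq, long size) {
                    subsetFreq = freq;
                    subsetSize = size;
                    return (double) subsetFreq / subsetSize;
                }
            };
        }
    }

    public static void main(String[] args) {
        ScorerPrototype prototype = new ScorerPrototype();
        // each shard-level execution calls rewriteForExecution() and keeps the result to itself
        Scorer perShardScorer = prototype.rewriteForExecution();
        System.out.println(perShardScorer.score(3, 100));
    }
}
```

The important property is that the object kept on the request never carries per-call state, which is the same reason the PR moves the LongAccessor holders into ExecutableScriptHeuristic.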
{ "body": "**Elasticsearch version**:\n2.1\n**Plugins installed**: []\ndelete-by-query\nelasticsearch-analysis-ik\nrepository-hdfs\n**JVM version**:\n8u60\n**OS version**:\nCentOS release 6.6 (Final)\n**Description of the problem including expected versus actual behavior**:\none of the data node keep throw below exception:\n[2016-10-12 11:34:04,769][WARN ][cluster.action.shard ] [XXXX] [indexName][2] received shard failed for [indexName][2], node[rckOYj-DT42QNoH9CCEBJQ], relocating [v2zayugFQnuMiGu-hS1vXg], [R], v[7091], s[INI\nTIALIZING], a[id=bkpcEq2qTXaPEKHl9tOunQ, rId=xeJJijQCRyaJPcSgQa7eGg], expected_shard_size[22462872851], indexUUID [sOKz0tW9Sw-u137Swoevsw], message [failed to create shard], failure [ElasticsearchException[failed to create shard]; nested: LockObtainF\nailedException[Can't lock shard [indexName][2], timed out after 5000ms]; ]\n[indexName][[indexName][2]] ElasticsearchException[failed to create shard]; nested: LockObtainFailedException[Can't lock shard [indexName][2], timed out after 5000ms];\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:389)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:650)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:550)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:179)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:494)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.apache.lucene.store.LockObtainFailedException: Can't lock shard [indexName][2], timed out after 5000ms\n at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:565)\n at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:493)\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:307)\n ... 9 more\n**Steps to reproduce**:(not very presious, I haven't reproduced it yet)\n1. give cluster a lot presure and one node out of cluster\n2. then remove presure and after a while, the node come back and try to recover some shard, it keeps throw below exception\n", "comments": [ { "body": "this happens when same background process is still ongoing (or just failed to finish properly) and still holds the shard lock. An example of this is a recovery process which needs access to the shard folder to copy files. \n\nDo you have `index.shard.check_on_startup` set by any chance?\n", "created_at": "2016-10-12T11:03:06Z" }, { "body": "@bleskes no, we don't set that. should it better to set it?\n", "created_at": "2016-10-13T03:13:30Z" }, { "body": "> @bleskes no, we don't set that. should it better to set it?\n\nOh no - this is one of those things that I know that can take long in the background. 
Since 2.1 is quite old - do you have the chance to upgrade to 2.4 and try to reproduce?\n", "created_at": "2016-10-13T06:44:05Z" }, { "body": "I'll try to reproduce in 2.4 in test env.\n", "created_at": "2016-10-13T07:09:21Z" }, { "body": "We are experiencing the same problem with ES 2.4.1, java 8u101, Ubuntu 14.04.\n\nIt has happened two times, and each time was triggered by starting a full snapshot while one index was under heavy indexing load (3-4k docs/s). About 10 minutes after the beginning of the snapshot, some shards of this index begin to throw lots of LockObtainFailedException, and the situation finally gets back to normal about one hour later. Meanwhile, about 2-3k LockObtainFailedException have been thrown.\n\nI hope a solution will be found because currently our only option is to disable our daily snapshot while we are doing heavy indexations.\n", "created_at": "2016-10-17T09:39:24Z" }, { "body": "The likely reason why the `LockObtainFailedException` keeps occurring is if we have a scenario where the node holding the primary is under heavy load so is slow to respond and leaves the cluster, while a snapshot is taking place. The snapshot holds a lock on the primary shard on the over-loaded node. When the master node realizes that over-loaded node is not responding, it removes it from the cluster, promotes a replica copy of the shard to primary, and cancels the snapshot. When the over-loaded node rejoins the cluster, the master node assigns it to hold a replica copy of the shard. When the node attempts to initialize the shard and recover from the primary, it encounters a `LockObtainFailedException` because the canceled snapshot process still holds a lock on the shard. The shard lock isn't released until the snapshot actually completes. We are looking into an appropriate fix for this.\n", "created_at": "2016-10-18T21:53:46Z" }, { "body": "@abeyad thanks for u guys' hard work\n", "created_at": "2016-10-19T02:38:44Z" } ], "number": 20876, "title": "[bug report]LockObtainFailedException throws under presure" }
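The scenario explained in the last comment, a cancelled snapshot still holding the shard lock while the returning node tries to create the replica, can be reproduced in miniature with a plain `ReentrantLock` and the same 5 second timeout. This is illustrative only and does not use the internal shard lock machinery.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Illustrative reproduction of the failure mode described above: while a
 * long-running "snapshot" holds the lock, shard creation gives up after 5 seconds.
 */
public class ShardLockTimeoutSketch {
    public static void main(String[] args) throws Exception {
        ReentrantLock shardLock = new ReentrantLock();

        Thread snapshotThread = new Thread(() -> {
            shardLock.lock(); // the orphaned snapshot still owns the lock
            try {
                Thread.sleep(30_000); // pretend to copy segment files to the repository
            } catch (InterruptedException ignored) {
            } finally {
                shardLock.unlock();
            }
        });
        snapshotThread.start();
        Thread.sleep(100); // let the snapshot grab the lock first

        if (shardLock.tryLock(5, TimeUnit.SECONDS)) {
            shardLock.unlock(); // not expected while the snapshot is still running
        } else {
            System.out.println("Can't lock shard, timed out after 5000ms"); // mirrors the exception above
        }
        snapshotThread.interrupt();
    }
}
```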
{ "body": "Previously, if a node left the cluster (for example, due to a long GC),\nduring a snapshot, the master node would mark the snapshot as failed, but\nthe node itself could continue snapshotting the data on its shards to the\nrepository. If the node rejoins the cluster, the master may assign it to\nhold the replica shard (where it held the primary before getting kicked off\nthe cluster). The initialization of the replica shard would repeatedly fail\nwith a ShardLockObtainFailedException until the snapshot thread finally\nfinishes and relinquishes the lock on the Store.\n\nThis commit resolves the situation by ensuring that when a shard is removed\nfrom a node (such as when a node rejoins the cluster and realizes it no longer\nholds the active shard copy), any snapshotting of the removed shards is aborted.\nIn the scenario above, when the node rejoins the cluster, it will see in the cluster \nstate that the node no longer holds the primary shard, so `IndicesClusterStateService`\nwill remove the shard, thereby causing any snapshots of that shard to be aborted.\n\nCloses #20876\n", "number": 21084, "review_comments": [ { "body": "maybe simpler to replace\n`for (DiscoveryNode node : this) { sb.append(node).append(','); }`\nby `sb.append(Strings.collectionToDelimitedString(this, \",\"));`\n", "created_at": "2016-10-24T10:36:45Z" }, { "body": "done\n", "created_at": "2016-10-24T13:03:26Z" }, { "body": "removeIndex might be good enough here.\n", "created_at": "2016-10-25T17:20:49Z" }, { "body": "assertBusy uses by default 10 seconds, no need to specify it here again\n", "created_at": "2016-10-25T17:21:47Z" }, { "body": "why not use a random client every time?\n", "created_at": "2016-10-25T17:22:51Z" }, { "body": "I think this will succeed even without the change in this PR? I'm not sure what is exactly tested here.\n", "created_at": "2016-10-25T17:26:04Z" }, { "body": "the description does not match what the test does.\n", "created_at": "2016-10-25T17:30:49Z" }, { "body": "why use this particular client?\n", "created_at": "2016-10-25T17:31:35Z" }, { "body": "Pick node with THE primary shard\n", "created_at": "2016-10-25T17:32:07Z" }, { "body": "fixed\n", "created_at": "2016-10-25T17:52:14Z" }, { "body": "done\n", "created_at": "2016-10-25T17:52:19Z" }, { "body": "I made a mistake here, this only ensures the snapshot cluster state update has reached master, so I changed it to use `internalCluster().clusterService(node).state()` instead, to ensure each node knows that the snapshot is in progress.\n", "created_at": "2016-10-25T17:59:44Z" }, { "body": "done\n", "created_at": "2016-10-25T18:00:19Z" }, { "body": "done\n", "created_at": "2016-10-25T18:00:31Z" }, { "body": "Without the change here, the snapshot forever stalls and the test times out, because the snapshot was never aborted. This asserts that we abort the snapshot, bringing the snapshotting to a successful conclusion.\n", "created_at": "2016-10-25T18:03:42Z" }, { "body": "no assertBusy needed with `waitForCompletion` above?\n", "created_at": "2016-10-26T07:27:24Z" }, { "body": "waitForCompletion returns SnapshotInfo\n", "created_at": "2016-10-26T07:28:17Z" } ], "title": "Abort snapshots on a node that leaves the cluster" }
{ "commits": [ { "message": "Abort snapshots on a node that leaves the cluster\n\nPreviously, if a node left the cluster (for example, due to a long GC),\nduring a snapshot, the master node would mark the snapshot as failed, but\nthe node itself could continue snapshotting the data on its shards to the\nrepository. If the node rejoins the cluster, the master may assign it to\nhold the replica shard (where it held the primary before getting kicked off\nthe cluster). The initialization of the replica shard would repeatedly fail\nwith a ShardLockObtainFailedException until the snapshot thread finally\nfinishes and relinquishes the lock on the Store.\n\nThis commit resolves the situation by ensuring that the shard snapshot is\naborted when the node responsible for that shard's snapshot leaves the cluster.\nWhen the node rejoins the cluster, it will see in the cluster state that\nthe snapshot for that shard is failed and abort the snapshot locally,\nallowing the shard data directory to be freed for allocation of a replica\nshard on the same node.\n\nCloses #20876" }, { "message": "fix DiscoveryNodes#toString()" }, { "message": "aborting shard snapshots now happens on beforeIndexShardClosed callback" }, { "message": "remove unused import" }, { "message": "more lightweight test focusing on shard removal guaranting snapshot\naborting, removed all the network disruption stuff" }, { "message": "remove extra newline introduced" }, { "message": "improve test" }, { "message": "improves logging in test" }, { "message": "Use SnapshotInfo from waitForCompletion" } ], "files": [ { "diff": "@@ -210,12 +210,9 @@ public static boolean completed(ObjectContainer<ShardSnapshotStatus> shards) {\n \n \n public static class ShardSnapshotStatus {\n- private State state;\n- private String nodeId;\n- private String reason;\n-\n- private ShardSnapshotStatus() {\n- }\n+ private final State state;\n+ private final String nodeId;\n+ private final String reason;\n \n public ShardSnapshotStatus(String nodeId) {\n this(nodeId, State.INIT);\n@@ -231,6 +228,12 @@ public ShardSnapshotStatus(String nodeId, State state, String reason) {\n this.reason = reason;\n }\n \n+ public ShardSnapshotStatus(StreamInput in) throws IOException {\n+ nodeId = in.readOptionalString();\n+ state = State.fromValue(in.readByte());\n+ reason = in.readOptionalString();\n+ }\n+\n public State state() {\n return state;\n }\n@@ -243,18 +246,6 @@ public String reason() {\n return reason;\n }\n \n- public static ShardSnapshotStatus readShardSnapshotStatus(StreamInput in) throws IOException {\n- ShardSnapshotStatus shardSnapshotStatus = new ShardSnapshotStatus();\n- shardSnapshotStatus.readFrom(in);\n- return shardSnapshotStatus;\n- }\n-\n- public void readFrom(StreamInput in) throws IOException {\n- nodeId = in.readOptionalString();\n- state = State.fromValue(in.readByte());\n- reason = in.readOptionalString();\n- }\n-\n public void writeTo(StreamOutput out) throws IOException {\n out.writeOptionalString(nodeId);\n out.writeByte(state.value);\n@@ -282,6 +273,11 @@ public int hashCode() {\n result = 31 * result + (reason != null ? 
reason.hashCode() : 0);\n return result;\n }\n+\n+ @Override\n+ public String toString() {\n+ return \"ShardSnapshotStatus[state=\" + state + \", nodeId=\" + nodeId + \", reason=\" + reason + \"]\";\n+ }\n }\n \n public enum State {", "filename": "core/src/main/java/org/elasticsearch/cluster/SnapshotsInProgress.java", "status": "modified" }, { "diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.cluster.AbstractDiffable;\n import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n@@ -399,9 +400,7 @@ public Delta delta(DiscoveryNodes other) {\n public String toString() {\n StringBuilder sb = new StringBuilder();\n sb.append(\"{\");\n- for (DiscoveryNode node : this) {\n- sb.append(node).append(',');\n- }\n+ sb.append(Strings.collectionToDelimitedString(this, \",\"));\n sb.append(\"}\");\n return sb.toString();\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodes.java", "status": "modified" }, { "diff": "@@ -411,7 +411,11 @@ private void closeShard(String reason, ShardId sId, IndexShard indexShard, Store\n }\n } finally {\n try {\n- store.close();\n+ if (store != null) {\n+ store.close();\n+ } else {\n+ logger.trace(\"[{}] store not initialized prior to closing shard, nothing to close\", shardId);\n+ }\n } catch (Exception e) {\n logger.warn(\n (Supplier<?>) () -> new ParameterizedMessage(", "filename": "core/src/main/java/org/elasticsearch/index/IndexService.java", "status": "modified" }, { "diff": "@@ -27,7 +27,7 @@ public class IndexShardSnapshotStatus {\n /**\n * Snapshot stage\n */\n- public static enum Stage {\n+ public enum Stage {\n /**\n * Snapshot hasn't started yet\n */\n@@ -66,7 +66,7 @@ public static enum Stage {\n \n private long indexVersion;\n \n- private boolean aborted;\n+ private volatile boolean aborted;\n \n private String failure;\n ", "filename": "core/src/main/java/org/elasticsearch/index/snapshots/IndexShardSnapshotStatus.java", "status": "modified" }, { "diff": "@@ -69,6 +69,7 @@\n import org.elasticsearch.repositories.RepositoriesService;\n import org.elasticsearch.search.SearchService;\n import org.elasticsearch.snapshots.RestoreService;\n+import org.elasticsearch.snapshots.SnapshotShardsService;\n import org.elasticsearch.threadpool.ThreadPool;\n \n import java.io.IOException;\n@@ -113,10 +114,11 @@ public IndicesClusterStateService(Settings settings, IndicesService indicesServi\n NodeMappingRefreshAction nodeMappingRefreshAction,\n RepositoriesService repositoriesService, RestoreService restoreService,\n SearchService searchService, SyncedFlushService syncedFlushService,\n- PeerRecoverySourceService peerRecoverySourceService) {\n+ PeerRecoverySourceService peerRecoverySourceService, SnapshotShardsService snapshotShardsService) {\n this(settings, (AllocatedIndices<? extends Shard, ? extends AllocatedIndex<? 
extends Shard>>) indicesService,\n clusterService, threadPool, recoveryTargetService, shardStateAction,\n- nodeMappingRefreshAction, repositoriesService, restoreService, searchService, syncedFlushService, peerRecoverySourceService);\n+ nodeMappingRefreshAction, repositoriesService, restoreService, searchService, syncedFlushService, peerRecoverySourceService,\n+ snapshotShardsService);\n }\n \n // for tests\n@@ -128,9 +130,10 @@ public IndicesClusterStateService(Settings settings, IndicesService indicesServi\n NodeMappingRefreshAction nodeMappingRefreshAction,\n RepositoriesService repositoriesService, RestoreService restoreService,\n SearchService searchService, SyncedFlushService syncedFlushService,\n- PeerRecoverySourceService peerRecoverySourceService) {\n+ PeerRecoverySourceService peerRecoverySourceService, SnapshotShardsService snapshotShardsService) {\n super(settings);\n- this.buildInIndexListener = Arrays.asList(peerRecoverySourceService, recoveryTargetService, searchService, syncedFlushService);\n+ this.buildInIndexListener = Arrays.asList(peerRecoverySourceService, recoveryTargetService, searchService, syncedFlushService,\n+ snapshotShardsService);\n this.indicesService = indicesService;\n this.clusterService = clusterService;\n this.threadPool = threadPool;", "filename": "core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java", "status": "modified" }, { "diff": "@@ -29,8 +29,10 @@\n import org.elasticsearch.cluster.ClusterStateListener;\n import org.elasticsearch.cluster.ClusterStateUpdateTask;\n import org.elasticsearch.cluster.SnapshotsInProgress;\n+import org.elasticsearch.cluster.SnapshotsInProgress.ShardSnapshotStatus;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.service.ClusterService;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.inject.Inject;\n@@ -42,11 +44,13 @@\n import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n import org.elasticsearch.index.engine.SnapshotFailedEngineException;\n+import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.snapshots.IndexShardSnapshotFailedException;\n import org.elasticsearch.index.snapshots.IndexShardSnapshotStatus;\n+import org.elasticsearch.index.snapshots.IndexShardSnapshotStatus.Stage;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.repositories.IndexId;\n import org.elasticsearch.repositories.Repository;\n@@ -80,7 +84,7 @@\n * This service runs on data and master nodes and controls currently snapshotted shards on these nodes. 
It is responsible for\n * starting and stopping shard level snapshots\n */\n-public class SnapshotShardsService extends AbstractLifecycleComponent implements ClusterStateListener {\n+public class SnapshotShardsService extends AbstractLifecycleComponent implements ClusterStateListener, IndexEventListener {\n \n public static final String UPDATE_SNAPSHOT_ACTION_NAME = \"internal:cluster/snapshot/update_snapshot\";\n \n@@ -156,12 +160,8 @@ public void clusterChanged(ClusterChangedEvent event) {\n SnapshotsInProgress prev = event.previousState().custom(SnapshotsInProgress.TYPE);\n SnapshotsInProgress curr = event.state().custom(SnapshotsInProgress.TYPE);\n \n- if (prev == null) {\n- if (curr != null) {\n- processIndexShardSnapshots(event);\n- }\n- } else if (prev.equals(curr) == false) {\n- processIndexShardSnapshots(event);\n+ if ((prev == null && curr != null) || (prev != null && prev.equals(curr) == false)) {\n+ processIndexShardSnapshots(event);\n }\n String masterNodeId = event.state().nodes().getMasterNodeId();\n if (masterNodeId != null && masterNodeId.equals(event.previousState().nodes().getMasterNodeId()) == false) {\n@@ -173,6 +173,18 @@ public void clusterChanged(ClusterChangedEvent event) {\n }\n }\n \n+ @Override\n+ public void beforeIndexShardClosed(ShardId shardId, @Nullable IndexShard indexShard, Settings indexSettings) {\n+ // abort any snapshots occurring on the soon-to-be closed shard\n+ Map<Snapshot, SnapshotShards> snapshotShardsMap = shardSnapshots;\n+ for (Map.Entry<Snapshot, SnapshotShards> snapshotShards : snapshotShardsMap.entrySet()) {\n+ Map<ShardId, IndexShardSnapshotStatus> shards = snapshotShards.getValue().shards;\n+ if (shards.containsKey(shardId)) {\n+ logger.debug(\"[{}] shard closing, abort snapshotting for snapshot [{}]\", shardId, snapshotShards.getKey().getSnapshotId());\n+ shards.get(shardId).abort();\n+ }\n+ }\n+ }\n \n /**\n * Returns status of shards that are snapshotted on the node and belong to the given snapshot\n@@ -205,6 +217,16 @@ private void processIndexShardSnapshots(ClusterChangedEvent event) {\n final Snapshot snapshot = entry.getKey();\n if (snapshotsInProgress != null && snapshotsInProgress.snapshot(snapshot) != null) {\n survivors.put(entry.getKey(), entry.getValue());\n+ } else {\n+ // abort any running snapshots of shards for the removed entry;\n+ // this could happen if for some reason the cluster state update for aborting\n+ // running shards is missed, then the snapshot is removed is a subsequent cluster\n+ // state update, which is being processed here\n+ for (IndexShardSnapshotStatus snapshotStatus : entry.getValue().shards.values()) {\n+ if (snapshotStatus.stage() == Stage.INIT || snapshotStatus.stage() == Stage.STARTED) {\n+ snapshotStatus.abort();\n+ }\n+ }\n }\n }\n \n@@ -221,7 +243,7 @@ private void processIndexShardSnapshots(ClusterChangedEvent event) {\n if (entry.state() == SnapshotsInProgress.State.STARTED) {\n Map<ShardId, IndexShardSnapshotStatus> startedShards = new HashMap<>();\n SnapshotShards snapshotShards = shardSnapshots.get(entry.snapshot());\n- for (ObjectObjectCursor<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shard : entry.shards()) {\n+ for (ObjectObjectCursor<ShardId, ShardSnapshotStatus> shard : entry.shards()) {\n // Add all new shards to start processing on\n if (localNodeId.equals(shard.value.nodeId())) {\n if (shard.value.state() == SnapshotsInProgress.State.INIT && (snapshotShards == null || !snapshotShards.shards.containsKey(shard.key))) {\n@@ -249,7 +271,7 @@ private void 
processIndexShardSnapshots(ClusterChangedEvent event) {\n // Abort all running shards for this snapshot\n SnapshotShards snapshotShards = shardSnapshots.get(entry.snapshot());\n if (snapshotShards != null) {\n- for (ObjectObjectCursor<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shard : entry.shards()) {\n+ for (ObjectObjectCursor<ShardId, ShardSnapshotStatus> shard : entry.shards()) {\n IndexShardSnapshotStatus snapshotStatus = snapshotShards.shards.get(shard.key);\n if (snapshotStatus != null) {\n switch (snapshotStatus.stage()) {\n@@ -263,12 +285,12 @@ private void processIndexShardSnapshots(ClusterChangedEvent event) {\n case DONE:\n logger.debug(\"[{}] trying to cancel snapshot on the shard [{}] that is already done, updating status on the master\", entry.snapshot(), shard.key);\n updateIndexShardSnapshotStatus(entry.snapshot(), shard.key,\n- new SnapshotsInProgress.ShardSnapshotStatus(event.state().nodes().getLocalNodeId(), SnapshotsInProgress.State.SUCCESS));\n+ new ShardSnapshotStatus(event.state().nodes().getLocalNodeId(), SnapshotsInProgress.State.SUCCESS));\n break;\n case FAILURE:\n logger.debug(\"[{}] trying to cancel snapshot on the shard [{}] that has already failed, updating status on the master\", entry.snapshot(), shard.key);\n updateIndexShardSnapshotStatus(entry.snapshot(), shard.key,\n- new SnapshotsInProgress.ShardSnapshotStatus(event.state().nodes().getLocalNodeId(), SnapshotsInProgress.State.FAILED, snapshotStatus.failure()));\n+ new ShardSnapshotStatus(event.state().nodes().getLocalNodeId(), SnapshotsInProgress.State.FAILED, snapshotStatus.failure()));\n break;\n default:\n throw new IllegalStateException(\"Unknown snapshot shard stage \" + snapshotStatus.stage());\n@@ -309,18 +331,18 @@ private void processIndexShardSnapshots(ClusterChangedEvent event) {\n @Override\n public void doRun() {\n snapshot(indexShard, entry.getKey(), indexId, shardEntry.getValue());\n- updateIndexShardSnapshotStatus(entry.getKey(), shardId, new SnapshotsInProgress.ShardSnapshotStatus(localNodeId, SnapshotsInProgress.State.SUCCESS));\n+ updateIndexShardSnapshotStatus(entry.getKey(), shardId, new ShardSnapshotStatus(localNodeId, SnapshotsInProgress.State.SUCCESS));\n }\n \n @Override\n public void onFailure(Exception e) {\n logger.warn((Supplier<?>) () -> new ParameterizedMessage(\"[{}] [{}] failed to create snapshot\", shardId, entry.getKey()), e);\n- updateIndexShardSnapshotStatus(entry.getKey(), shardId, new SnapshotsInProgress.ShardSnapshotStatus(localNodeId, SnapshotsInProgress.State.FAILED, ExceptionsHelper.detailedMessage(e)));\n+ updateIndexShardSnapshotStatus(entry.getKey(), shardId, new ShardSnapshotStatus(localNodeId, SnapshotsInProgress.State.FAILED, ExceptionsHelper.detailedMessage(e)));\n }\n \n });\n } catch (Exception e) {\n- updateIndexShardSnapshotStatus(entry.getKey(), shardId, new SnapshotsInProgress.ShardSnapshotStatus(localNodeId, SnapshotsInProgress.State.FAILED, ExceptionsHelper.detailedMessage(e)));\n+ updateIndexShardSnapshotStatus(entry.getKey(), shardId, new ShardSnapshotStatus(localNodeId, SnapshotsInProgress.State.FAILED, ExceptionsHelper.detailedMessage(e)));\n }\n }\n }\n@@ -383,23 +405,23 @@ private void syncShardStatsOnNewMaster(ClusterChangedEvent event) {\n if (snapshot.state() == SnapshotsInProgress.State.STARTED || snapshot.state() == SnapshotsInProgress.State.ABORTED) {\n Map<ShardId, IndexShardSnapshotStatus> localShards = currentSnapshotShards(snapshot.snapshot());\n if (localShards != null) {\n- ImmutableOpenMap<ShardId, 
SnapshotsInProgress.ShardSnapshotStatus> masterShards = snapshot.shards();\n+ ImmutableOpenMap<ShardId, ShardSnapshotStatus> masterShards = snapshot.shards();\n for(Map.Entry<ShardId, IndexShardSnapshotStatus> localShard : localShards.entrySet()) {\n ShardId shardId = localShard.getKey();\n IndexShardSnapshotStatus localShardStatus = localShard.getValue();\n- SnapshotsInProgress.ShardSnapshotStatus masterShard = masterShards.get(shardId);\n+ ShardSnapshotStatus masterShard = masterShards.get(shardId);\n if (masterShard != null && masterShard.state().completed() == false) {\n // Master knows about the shard and thinks it has not completed\n- if (localShardStatus.stage() == IndexShardSnapshotStatus.Stage.DONE) {\n+ if (localShardStatus.stage() == Stage.DONE) {\n // but we think the shard is done - we need to make new master know that the shard is done\n logger.debug(\"[{}] new master thinks the shard [{}] is not completed but the shard is done locally, updating status on the master\", snapshot.snapshot(), shardId);\n updateIndexShardSnapshotStatus(snapshot.snapshot(), shardId,\n- new SnapshotsInProgress.ShardSnapshotStatus(event.state().nodes().getLocalNodeId(), SnapshotsInProgress.State.SUCCESS));\n- } else if (localShard.getValue().stage() == IndexShardSnapshotStatus.Stage.FAILURE) {\n+ new ShardSnapshotStatus(event.state().nodes().getLocalNodeId(), SnapshotsInProgress.State.SUCCESS));\n+ } else if (localShard.getValue().stage() == Stage.FAILURE) {\n // but we think the shard failed - we need to make new master know that the shard failed\n logger.debug(\"[{}] new master thinks the shard [{}] is not completed but the shard failed locally, updating status on master\", snapshot.snapshot(), shardId);\n updateIndexShardSnapshotStatus(snapshot.snapshot(), shardId,\n- new SnapshotsInProgress.ShardSnapshotStatus(event.state().nodes().getLocalNodeId(), SnapshotsInProgress.State.FAILED, localShardStatus.failure()));\n+ new ShardSnapshotStatus(event.state().nodes().getLocalNodeId(), SnapshotsInProgress.State.FAILED, localShardStatus.failure()));\n \n }\n }\n@@ -427,15 +449,15 @@ private SnapshotShards(Map<ShardId, IndexShardSnapshotStatus> shards) {\n public static class UpdateIndexShardSnapshotStatusRequest extends TransportRequest {\n private Snapshot snapshot;\n private ShardId shardId;\n- private SnapshotsInProgress.ShardSnapshotStatus status;\n+ private ShardSnapshotStatus status;\n \n private volatile boolean processed; // state field, no need to serialize\n \n public UpdateIndexShardSnapshotStatusRequest() {\n \n }\n \n- public UpdateIndexShardSnapshotStatusRequest(Snapshot snapshot, ShardId shardId, SnapshotsInProgress.ShardSnapshotStatus status) {\n+ public UpdateIndexShardSnapshotStatusRequest(Snapshot snapshot, ShardId shardId, ShardSnapshotStatus status) {\n this.snapshot = snapshot;\n this.shardId = shardId;\n this.status = status;\n@@ -446,7 +468,7 @@ public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n snapshot = new Snapshot(in);\n shardId = ShardId.readShardId(in);\n- status = SnapshotsInProgress.ShardSnapshotStatus.readShardSnapshotStatus(in);\n+ status = new ShardSnapshotStatus(in);\n }\n \n @Override\n@@ -465,7 +487,7 @@ public ShardId shardId() {\n return shardId;\n }\n \n- public SnapshotsInProgress.ShardSnapshotStatus status() {\n+ public ShardSnapshotStatus status() {\n return status;\n }\n \n@@ -486,7 +508,7 @@ public boolean isProcessed() {\n /**\n * Updates the shard status\n */\n- public void updateIndexShardSnapshotStatus(Snapshot snapshot, 
ShardId shardId, SnapshotsInProgress.ShardSnapshotStatus status) {\n+ public void updateIndexShardSnapshotStatus(Snapshot snapshot, ShardId shardId, ShardSnapshotStatus status) {\n UpdateIndexShardSnapshotStatusRequest request = new UpdateIndexShardSnapshotStatusRequest(snapshot, shardId, status);\n try {\n if (clusterService.state().nodes().isLocalNodeElectedMaster()) {\n@@ -533,7 +555,7 @@ public ClusterState execute(ClusterState currentState) {\n int changedCount = 0;\n final List<SnapshotsInProgress.Entry> entries = new ArrayList<>();\n for (SnapshotsInProgress.Entry entry : snapshots.entries()) {\n- ImmutableOpenMap.Builder<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shards = ImmutableOpenMap.builder();\n+ ImmutableOpenMap.Builder<ShardId, ShardSnapshotStatus> shards = ImmutableOpenMap.builder();\n boolean updated = false;\n \n for (int i = 0; i < batchSize; i++) {", "filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotShardsService.java", "status": "modified" }, { "diff": "@@ -793,12 +793,12 @@ private boolean waitingShardsStartedOrUnassigned(ClusterChangedEvent event) {\n }\n \n private boolean removedNodesCleanupNeeded(ClusterChangedEvent event) {\n- // Check if we just became the master\n- boolean newMaster = !event.previousState().nodes().isLocalNodeElectedMaster();\n SnapshotsInProgress snapshotsInProgress = event.state().custom(SnapshotsInProgress.TYPE);\n if (snapshotsInProgress == null) {\n return false;\n }\n+ // Check if we just became the master\n+ boolean newMaster = !event.previousState().nodes().isLocalNodeElectedMaster();\n for (SnapshotsInProgress.Entry snapshot : snapshotsInProgress.entries()) {\n if (newMaster && (snapshot.state() == State.SUCCESS || snapshot.state() == State.INIT)) {\n // We just replaced old master and snapshots in intermediate states needs to be cleaned", "filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java", "status": "modified" }, { "diff": "@@ -376,7 +376,7 @@ private IndicesClusterStateService createIndicesClusterStateService(DiscoveryNod\n transportService, null, clusterService);\n final ShardStateAction shardStateAction = mock(ShardStateAction.class);\n return new IndicesClusterStateService(settings, indicesService, clusterService,\n- threadPool, recoveryTargetService, shardStateAction, null, repositoriesService, null, null, null, null);\n+ threadPool, recoveryTargetService, shardStateAction, null, repositoriesService, null, null, null, null, null);\n }\n \n private class RecordingIndicesService extends MockIndicesService {", "filename": "core/src/test/java/org/elasticsearch/indices/cluster/IndicesClusterStateServiceRandomUpdatesTests.java", "status": "modified" }, { "diff": "@@ -51,15 +51,18 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.cluster.metadata.MetaDataIndexStateService;\n+import org.elasticsearch.cluster.routing.IndexRoutingTable;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.store.IndexStore;\n+import 
org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.InvalidIndexNameException;\n import org.elasticsearch.repositories.IndexId;\n import org.elasticsearch.repositories.RepositoriesService;\n@@ -2490,4 +2493,66 @@ public void testGetSnapshotsRequest() throws Exception {\n waitForCompletion(repositoryName, inProgressSnapshot, TimeValue.timeValueSeconds(60));\n }\n \n+ /**\n+ * This test ensures that when a shard is removed from a node (perhaps due to the node\n+ * leaving the cluster, then returning), all snapshotting of that shard is aborted, so\n+ * all Store references held onto by the snapshot are released.\n+ *\n+ * See https://github.com/elastic/elasticsearch/issues/20876\n+ */\n+ public void testSnapshotCanceledOnRemovedShard() throws Exception {\n+ final int numPrimaries = 1;\n+ final int numReplicas = 1;\n+ final int numDocs = 100;\n+ final String repo = \"test-repo\";\n+ final String index = \"test-idx\";\n+ final String snapshot = \"test-snap\";\n+\n+ assertAcked(prepareCreate(index, 1,\n+ Settings.builder().put(\"number_of_shards\", numPrimaries).put(\"number_of_replicas\", numReplicas)));\n+\n+ logger.info(\"--> indexing some data\");\n+ for (int i = 0; i < numDocs; i++) {\n+ index(index, \"doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n+ }\n+ refresh();\n+\n+ logger.info(\"--> creating repository\");\n+ PutRepositoryResponse putRepositoryResponse =\n+ client().admin().cluster().preparePutRepository(repo).setType(\"mock\").setSettings(Settings.builder()\n+ .put(\"location\", randomRepoPath())\n+ .put(\"random\", randomAsciiOfLength(10))\n+ .put(\"wait_after_unblock\", 200)\n+ ).get();\n+ assertTrue(putRepositoryResponse.isAcknowledged());\n+\n+ String blockedNode = blockNodeWithIndex(repo, index);\n+\n+ logger.info(\"--> snapshot\");\n+ client().admin().cluster().prepareCreateSnapshot(repo, snapshot)\n+ .setWaitForCompletion(false)\n+ .execute();\n+\n+ logger.info(\"--> waiting for block to kick in on node [{}]\", blockedNode);\n+ waitForBlock(blockedNode, repo, TimeValue.timeValueSeconds(10));\n+\n+ logger.info(\"--> removing primary shard that is being snapshotted\");\n+ ClusterState clusterState = internalCluster().clusterService(internalCluster().getMasterName()).state();\n+ IndexRoutingTable indexRoutingTable = clusterState.getRoutingTable().index(index);\n+ String nodeWithPrimary = clusterState.nodes().get(indexRoutingTable.shard(0).primaryShard().currentNodeId()).getName();\n+ assertNotNull(\"should be at least one node with a primary shard\", nodeWithPrimary);\n+ IndicesService indicesService = internalCluster().getInstance(IndicesService.class, nodeWithPrimary);\n+ IndexService indexService = indicesService.indexService(resolveIndex(index));\n+ indexService.removeShard(0, \"simulate node removal\");\n+\n+ logger.info(\"--> unblocking blocked node [{}]\", blockedNode);\n+ unblockNode(repo, blockedNode);\n+\n+ logger.info(\"--> ensuring snapshot is aborted and the aborted shard was marked as failed\");\n+ SnapshotInfo snapshotInfo = waitForCompletion(repo, snapshot, TimeValue.timeValueSeconds(10));\n+ assertEquals(1, snapshotInfo.shardFailures().size());\n+ assertEquals(0, snapshotInfo.shardFailures().get(0).shardId());\n+ assertEquals(\"IndexShardSnapshotFailedException[Aborted]\", snapshotInfo.shardFailures().get(0).reason());\n+ }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 5.0.0-rc1\n\nElasticsearch returns a response body when `pretty=true` is specified on a `HEAD` request which violates the [HTTP 1.1 specification](https://tools.ietf.org/html/rfc2616#section-9.4)\n\n```\nHEAD http://127.0.0.1:9200/?pretty=true \n```\n\n```\nHTTP/1.1 200 OK\ncontent-type: application/json; charset=UTF-8\ncontent-length: 1\n\n\n```\n\nThis causes a problem within the .NET client, which conforms strictly to the HTTP 1.1 spec.\n", "comments": [ { "body": "Additionally, the header is wrong. From RFC 2616:\n\n> `The Content-Length entity-header field indicates the size of the entity-body, in decimal number of OCTETs, sent to the recipient or, in the case of the HEAD method, the size of the entity-body that would have been sent had the request been a GET.`\n", "created_at": "2016-10-21T20:01:38Z" }, { "body": "We have a short-term fix for REST main action that will address this issue so that there is not a body. We have a plan to address all HEAD methods as a future follow up (so that the content-length header is correct). There will be a PR from @nik9000 shortly.\n", "created_at": "2016-10-21T20:24:17Z" }, { "body": "I've opened the short term workaround (https://github.com/elastic/elasticsearch/pull/21077) and we'll repurpose this issue for the long term, fully rfc compliant fix.\n", "created_at": "2016-10-21T20:42:16Z" }, { "body": "We'll open a new issue for the content-length piece and relate it here.\n", "created_at": "2016-10-21T20:46:17Z" }, { "body": "Closed by #21077\n", "created_at": "2016-10-21T20:46:24Z" }, { "body": "> We'll open a new issue for the content-length piece and relate it here.\n\nPartially addressed in #21123.\n", "created_at": "2016-10-26T04:35:26Z" } ], "number": 21075, "title": "HEAD requests return a response body when pretty=true" }
{ "body": "Before this commit `curl -XHEAD localhost:9200?pretty` would return\n`Content-Length: 1` and a body which is fairly upsetting to standards\ncompliant tools. Now it'll return `Content-Length: 0` with an empty\nbody like every other `HEAD` request.\n\nRelates to #21075\n", "number": 21077, "review_comments": [], "title": "Make sure HEAD / has 0 Content-Length" }
{ "commits": [ { "message": "Make sure HEAD / has 0 Content-Length\n\nBefore this commit `curl -XHEAD localhost:9200?pretty` would return\n`Content-Length: 1` and a body which is fairly upsetting to standards\ncompliant tools. Now it'll return `Content-Length: 0` with an empty\nbody like every other `HEAD` request.\n\nRelates to #21075" } ], "files": [ { "diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.action.main.MainRequest;\n import org.elasticsearch.action.main.MainResponse;\n import org.elasticsearch.client.node.NodeClient;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -60,7 +61,7 @@ public RestResponse buildResponse(MainResponse mainResponse, XContentBuilder bui\n static BytesRestResponse convertMainResponse(MainResponse response, RestRequest request, XContentBuilder builder) throws IOException {\n RestStatus status = response.isAvailable() ? RestStatus.OK : RestStatus.SERVICE_UNAVAILABLE;\n if (request.method() == RestRequest.Method.HEAD) {\n- return new BytesRestResponse(status, builder);\n+ return new BytesRestResponse(status, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY);\n }\n \n // Default to pretty printing, but allow ?pretty=false to disable", "filename": "core/src/main/java/org/elasticsearch/rest/action/RestMainAction.java", "status": "modified" }, { "diff": "@@ -0,0 +1,72 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.test.rest;\n+\n+import org.apache.http.entity.StringEntity;\n+import org.elasticsearch.client.Response;\n+\n+import java.io.IOException;\n+import java.io.UnsupportedEncodingException;\n+import java.util.Map;\n+\n+import static java.util.Collections.emptyMap;\n+import static java.util.Collections.singletonMap;\n+\n+/**\n+ * Tests that HTTP HEAD requests don't respond with a body.\n+ */\n+public class HeadBodyIsEmptyIT extends ESRestTestCase {\n+ public void testHeadRoot() throws IOException {\n+ headTestCase(\"/\", emptyMap());\n+ headTestCase(\"/\", singletonMap(\"pretty\", \"\"));\n+ headTestCase(\"/\", singletonMap(\"pretty\", \"true\"));\n+ }\n+\n+ private void createTestDoc() throws UnsupportedEncodingException, IOException {\n+ client().performRequest(\"PUT\", \"test/test/1\", emptyMap(), new StringEntity(\"{\\\"test\\\": \\\"test\\\"}\"));\n+ }\n+\n+ public void testDocumentExists() throws IOException {\n+ createTestDoc();\n+ headTestCase(\"test/test/1\", emptyMap());\n+ headTestCase(\"test/test/1\", singletonMap(\"pretty\", \"true\"));\n+ }\n+\n+ public void testIndexExists() throws IOException {\n+ createTestDoc();\n+ headTestCase(\"test\", emptyMap());\n+ headTestCase(\"test\", singletonMap(\"pretty\", \"true\"));\n+ }\n+\n+ public void testTypeExists() throws IOException {\n+ createTestDoc();\n+ headTestCase(\"test/test\", emptyMap());\n+ headTestCase(\"test/test\", singletonMap(\"pretty\", \"true\"));\n+ }\n+\n+ private void headTestCase(String url, Map<String, String> params) throws IOException {\n+ Response response = client().performRequest(\"HEAD\", url, params);\n+ assertEquals(200, response.getStatusLine().getStatusCode());\n+ /* Check that the content-length header is always 0. This isn't what we should be doing in the long run but it is what we expect\n+ * that we are *actually* doing. */\n+ assertEquals(\"We expect HEAD requests to have 0 Content-Length but \" + url + \" didn't\", \"0\", response.getHeader(\"Content-Length\"));\n+ assertNull(\"HEAD requests shouldn't have a response body but \" + url + \" did\", response.getEntity());\n+ }\n+}", "filename": "distribution/integ-test-zip/src/test/java/org/elasticsearch/test/rest/HeadBodyIsEmptyIT.java", "status": "added" } ] }
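The fix above only guarantees what the node sends; how a strict client observes it is a separate question. As a rough standalone check, the plain-JDK sketch below issues the same `HEAD /?pretty=true` request and prints the status line and the `Content-Length` header, which is what a standards-compliant client like the .NET one inspects. The `http://127.0.0.1:9200` address is an assumption about a locally running node, not something taken from the change itself.

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class HeadCheck {
    public static void main(String[] args) throws Exception {
        // Assumed local node address; adjust to your own cluster.
        URL url = new URL("http://127.0.0.1:9200/?pretty=true");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("HEAD");

        int status = conn.getResponseCode();
        String contentLength = conn.getHeaderField("Content-Length");
        // With the fix applied, the node should answer 200 with Content-Length: 0 and no
        // entity body, matching what HeadBodyIsEmptyIT asserts from inside the test suite.
        System.out.println("status=" + status + " content-length=" + contentLength);
        conn.disconnect();
    }
}
```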
{ "body": "It [looks like](https://discuss.elastic.co/t/upgrading-1-7-5-0-with-reindex-from-remote-and-parent-child-docs/63463) reindex-from-remote and parent/child don't always play well together.\n", "comments": [ { "body": "I reproduced the issue with reindex-from-remote from 1.7.4. 2.0.0 seemed fine. [gist](https://gist.github.com/nik9000/ac539270ee2ff1b85718c36005bccbcd)\n", "created_at": "2016-10-20T21:14:38Z" } ], "number": 21044, "title": "Reindex-from-remote fails to pick up parent from 1.7.4" }
{ "body": "Versions before 2.0 needed to be told to return interesting fields\nlike `_parent`, `_routing`, and `_ttl`. And they come\nback inside a `fields` block which we need to parse.\n\nCloses #21044\n", "number": 21070, "review_comments": [], "title": "Fix reindex-from-remote for parent/child from <2.0" }
{ "commits": [ { "message": "Fix reindex-from-remote for parent/child from <2.0\n\nVersions before 2.0 needed to be told to return interesting fields\nlike `_parent`, `_routing`, `_ttl`, and `_timestamp`. And they come\nback inside a `fields` block which we need to parse.\n\nCloses #21044" } ], "files": [ { "diff": "@@ -89,6 +89,10 @@ static Map<String, String> initialSearchParams(SearchRequest searchRequest, Vers\n params.put(\"sort\", sorts.toString());\n }\n }\n+ if (remoteVersion.before(Version.V_2_0_0)) {\n+ // Versions before 2.0.0 need prompting to return interesting fields. Note that timestamp isn't available at all....\n+ searchRequest.source().storedField(\"_parent\").storedField(\"_routing\").storedField(\"_ttl\");\n+ }\n if (searchRequest.source().storedFields() != null && false == searchRequest.source().storedFields().fieldNames().isEmpty()) {\n StringBuilder fields = new StringBuilder(searchRequest.source().storedFields().fieldNames().get(0));\n for (int i = 1; i < searchRequest.source().storedFields().fieldNames().size(); i++) {\n@@ -97,6 +101,8 @@ static Map<String, String> initialSearchParams(SearchRequest searchRequest, Vers\n String storedFieldsParamName = remoteVersion.before(Version.V_5_0_0_alpha4) ? \"fields\" : \"stored_fields\";\n params.put(storedFieldsParamName, fields.toString());\n }\n+ // We always want the _source document and this will force it to be returned.\n+ params.put(\"_source\", \"true\");\n return params;\n }\n ", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/remote/RemoteRequestBuilders.java", "status": "modified" }, { "diff": "@@ -83,10 +83,28 @@ private RemoteResponseParsers() {}\n throw new ParsingException(p.getTokenLocation(), \"[hit] failed to parse [_source]\", e);\n }\n }, new ParseField(\"_source\"));\n- HIT_PARSER.declareString(BasicHit::setRouting, new ParseField(\"_routing\"));\n- HIT_PARSER.declareString(BasicHit::setParent, new ParseField(\"_parent\"));\n- HIT_PARSER.declareLong(BasicHit::setTTL, new ParseField(\"_ttl\"));\n+ ParseField routingField = new ParseField(\"_routing\");\n+ ParseField parentField = new ParseField(\"_parent\");\n+ ParseField ttlField = new ParseField(\"_ttl\");\n+ HIT_PARSER.declareString(BasicHit::setRouting, routingField);\n+ HIT_PARSER.declareString(BasicHit::setParent, parentField);\n+ HIT_PARSER.declareLong(BasicHit::setTTL, ttlField);\n HIT_PARSER.declareLong(BasicHit::setTimestamp, new ParseField(\"_timestamp\"));\n+ // Pre-2.0.0 parent and routing come back in \"fields\"\n+ class Fields {\n+ String routing;\n+ String parent;\n+ long ttl;\n+ }\n+ ObjectParser<Fields, ParseFieldMatcherSupplier> fieldsParser = new ObjectParser<>(\"fields\", Fields::new);\n+ HIT_PARSER.declareObject((hit, fields) -> {\n+ hit.setRouting(fields.routing);\n+ hit.setParent(fields.parent);\n+ hit.setTTL(fields.ttl);\n+ }, fieldsParser, new ParseField(\"fields\"));\n+ fieldsParser.declareString((fields, routing) -> fields.routing = routing, routingField);\n+ fieldsParser.declareString((fields, parent) -> fields.parent = parent, parentField);\n+ fieldsParser.declareLong((fields, ttl) -> fields.ttl = ttl, ttlField);\n }\n \n /**", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/remote/RemoteResponseParsers.java", "status": "modified" }, { "diff": "@@ -113,7 +113,7 @@ public void testInitialSearchParamsFields() {\n SearchRequest searchRequest = new SearchRequest().source(new SearchSourceBuilder());\n \n // Test request without any fields\n- Version remoteVersion = 
Version.fromId(between(0, Version.CURRENT.id));\n+ Version remoteVersion = Version.fromId(between(Version.V_2_0_0_beta1_ID, Version.CURRENT.id));\n assertThat(initialSearchParams(searchRequest, remoteVersion),\n not(either(hasKey(\"stored_fields\")).or(hasKey(\"fields\"))));\n \n@@ -125,8 +125,12 @@ public void testInitialSearchParamsFields() {\n assertThat(initialSearchParams(searchRequest, remoteVersion), hasEntry(\"stored_fields\", \"_source,_id\"));\n \n // Test fields for versions that support it\n- remoteVersion = Version.fromId(between(0, Version.V_5_0_0_alpha4_ID - 1));\n+ remoteVersion = Version.fromId(between(Version.V_2_0_0_beta1_ID, Version.V_5_0_0_alpha4_ID - 1));\n assertThat(initialSearchParams(searchRequest, remoteVersion), hasEntry(\"fields\", \"_source,_id\"));\n+\n+ // Test extra fields for versions that need it\n+ remoteVersion = Version.fromId(between(0, Version.V_2_0_0_beta1_ID - 1));\n+ assertThat(initialSearchParams(searchRequest, remoteVersion), hasEntry(\"fields\", \"_source,_id,_parent,_routing,_ttl\"));\n }\n \n public void testInitialSearchParamsMisc() {\n@@ -151,6 +155,7 @@ public void testInitialSearchParamsMisc() {\n assertThat(params, scroll == null ? not(hasKey(\"scroll\")) : hasEntry(\"scroll\", scroll.toString()));\n assertThat(params, hasEntry(\"size\", Integer.toString(size)));\n assertThat(params, fetchVersion == null || fetchVersion == true ? hasEntry(\"version\", null) : not(hasEntry(\"version\", null)));\n+ assertThat(params, hasEntry(\"_source\", \"true\"));\n }\n \n public void testInitialSearchEntity() throws IOException {", "filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/remote/RemoteRequestBuildersTests.java", "status": "modified" }, { "diff": "@@ -192,7 +192,7 @@ public void testParseScrollOk() throws Exception {\n }\n \n /**\n- * Test for parsing _ttl, _timestamp, and _routing.\n+ * Test for parsing _ttl, _timestamp, _routing, and _parent.\n */\n public void testParseScrollFullyLoaded() throws Exception {\n AtomicBoolean called = new AtomicBoolean();\n@@ -208,6 +208,24 @@ public void testParseScrollFullyLoaded() throws Exception {\n assertTrue(called.get());\n }\n \n+ /**\n+ * Test for parsing _ttl, _routing, and _parent. _timestamp isn't available.\n+ */\n+ public void testParseScrollFullyLoadedFrom1_7() throws Exception {\n+ AtomicBoolean called = new AtomicBoolean();\n+ sourceWithMockedRemoteCall(\"scroll_fully_loaded_1_7.json\").doStartNextScroll(\"\", timeValueMillis(0), r -> {\n+ assertEquals(\"AVToMiDL50DjIiBO3yKA\", r.getHits().get(0).getId());\n+ assertEquals(\"{\\\"test\\\":\\\"test3\\\"}\", r.getHits().get(0).getSource().utf8ToString());\n+ assertEquals((Long) 1234L, r.getHits().get(0).getTTL());\n+ assertNull(r.getHits().get(0).getTimestamp()); // Not available from 1.7\n+ assertEquals(\"testrouting\", r.getHits().get(0).getRouting());\n+ assertEquals(\"testparent\", r.getHits().get(0).getParent());\n+ called.set(true);\n+ });\n+ assertTrue(called.get());\n+ }\n+\n+\n /**\n * Versions of Elasticsearch before 2.1.0 don't support sort:_doc and instead need to use search_type=scan. Scan doesn't return\n * documents the first iteration but reindex doesn't like that. 
So we jump start strait to the next iteration.", "filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/remote/RemoteScrollableHitSourceTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,31 @@\n+{\n+ \"_scroll_id\" : \"DnF1ZXJ5VGhlbkZldGNoBQAAAfakescroll\",\n+ \"took\" : 3,\n+ \"timed_out\" : false,\n+ \"terminated_early\" : true,\n+ \"_shards\" : {\n+ \"total\" : 5,\n+ \"successful\" : 5,\n+ \"failed\" : 0\n+ },\n+ \"hits\" : {\n+ \"total\" : 4,\n+ \"max_score\" : null,\n+ \"hits\" : [ {\n+ \"_index\" : \"test\",\n+ \"_type\" : \"test\",\n+ \"_id\" : \"AVToMiDL50DjIiBO3yKA\",\n+ \"_version\" : 1,\n+ \"_score\" : null,\n+ \"_source\" : {\n+ \"test\" : \"test3\"\n+ },\n+ \"sort\" : [ 0 ],\n+ \"fields\" : {\n+ \"_routing\" : \"testrouting\",\n+ \"_ttl\" : 1234,\n+ \"_parent\" : \"testparent\"\n+ }\n+ } ]\n+ }\n+}", "filename": "modules/reindex/src/test/resources/responses/scroll_fully_loaded_1_7.json", "status": "added" } ] }
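The heart of the reindex fix is a version switch in the initial search parameters: old remotes must be asked explicitly for the metadata fields, and the parameter carrying them has a different name. The sketch below is a self-contained, illustrative restatement of that switch, not the actual `RemoteRequestBuilders` API; the method name, the plain `int` major version, and the simplified cutoffs (the real patch compares against 2.0.0-beta1 and 5.0.0-alpha4) are assumptions made for brevity.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RemoteParamsSketch {
    // Build the stored-fields related query parameters for a scroll against a remote cluster.
    static Map<String, String> initialSearchParams(int remoteMajorVersion) {
        Map<String, String> params = new LinkedHashMap<>();
        String fields = "_source,_id";
        if (remoteMajorVersion < 2) {
            // Pre-2.0 remotes only return _parent, _routing and _ttl when prompted.
            fields += ",_parent,_routing,_ttl";
        }
        // Older versions call the parameter "fields"; newer ones use "stored_fields".
        String paramName = remoteMajorVersion < 5 ? "fields" : "stored_fields";
        params.put(paramName, fields);
        // The document source is always wanted, so request it explicitly.
        params.put("_source", "true");
        return params;
    }

    public static void main(String[] args) {
        System.out.println(initialSearchParams(1)); // e.g. a 1.7 remote
        System.out.println(initialSearchParams(5)); // a 5.x remote
    }
}
```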
{ "body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue. Note that whether you're filing a bug report or a\nfeature request, ensure that your submission is for an\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\nBug reports on an OS that we do not support or feature requests\nspecific to an OS that we do not support will be closed.\n-->\n\n<!--\nIf you are filing a bug report, please remove the below feature\nrequest block and provide responses for all of the below items.\n-->\n\n**Elasticsearch version**: 2.4.1\n\n**Description of the problem including expected versus actual behavior**:\nWhen setting a Highlight query on an Inner Hits query, it throws a NullPointerException. We want to be able to set a different highlight query for the Inner hits than the main query. \n\nThis is a bit of a forced example, but it is similar to what we need it to do. The documentation describes that Inner Hits support highlighting, so I would expect it to support setting the highlight query. https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-inner-hits.html\n\n**Steps to reproduce**:\n1. Add a parent/child mapping\n\n```\ncurl -XPOST localhost:9200/test_index -d '{\n \"settings\" : {\n \"number_of_shards\" : 1\n },\n \"mappings\": {\n \"profile\": {\n \"properties\": {\n \"id\": {\n \"type\": \"string\",\n },\n \"name\": {\n \"type\": \"string\"\n }, \n \"summary\": {\n \"type\": \"string\"\n }\n }\n },\n \"tweet\": {\n \"_parent\": {\n \"type\": \"profile\"\n },\n \"properties\": {\n \"id\": {\n \"type\": \"string\",\n }, \n \"body\": {\n \"type\": \"string\"\n }\n }\n }\n }\n}'\n\n```\n1. Add some data\n\n```\ncurl -XPUT localhost:9200/test_index/profile/1 -d '{\n \"name\" : \"Bob\",\n \"summary\" : \"The quick brown fox\"\n}'\n\n\ncurl -XPUT localhost:9200/test_index/tweet/100?parent=1 -d '{\n \"body\" : \"I like lego\"\n}'\n\ncurl -XPUT localhost:9200/test_index/tweet/200?parent=1 -d '{\n \"body\" : \"going to build some lego\"\n}'\n```\n1. Make the search\n\n```\ncurl -XPOST localhost:9200/test_index/_search -d '{\n \"query\":{\n \"has_child\":{\n \"type\":\"tweet\",\n \"query\":{\n \"match\":{\n \"body\" : \"build\"\n }\n },\n \"inner_hits\":{\n \"highlight\":{\n \"fields\":{\n \"body\":{\n \"highlight_query\":{\n \"match\":{\n \"body\":\"lego\"\n }\n }\n }\n }\n }\n }\n }\n }\n}'\n```\n\n**Provide logs (if relevant)**:\n\n```\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"null_pointer_exception\",\n \"reason\": null\n }\n ],\n \"type\": \"search_phase_execution_exception\",\n \"reason\": \"all shards failed\",\n \"phase\": \"query_fetch\",\n \"grouped\": true,\n \"failed_shards\": [\n {\n \"shard\": 0,\n \"index\": \"test_index\",\n \"node\": \"kQJBHiijQCW_3FsdFssnrQ\",\n \"reason\": {\n \"type\": \"null_pointer_exception\",\n \"reason\": null\n }\n }\n ]\n },\n \"status\": 500\n}\n```\n\nThanks\n", "comments": [ { "body": "I can reproduce this on 2.4.1 but it seems to be fixed in 5.0. 
The NPE I see in the log with the above example on 2.4.1 is:\n\n```\n[...]\n at org.elasticsearch.ElasticsearchException.guessRootCauses(ElasticsearchException.java:386)\n at org.elasticsearch.action.search.SearchPhaseExecutionException.guessRootCauses(SearchPhaseExecutionException.java:152)\n at org.elasticsearch.action.search.SearchPhaseExecutionException.getCause(SearchPhaseExecutionException.java:99)\n at java.lang.Throwable.printStackTrace(Throwable.java:665)\n at java.lang.Throwable.printStackTrace(Throwable.java:721)\n at org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:60)\n at org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)\n at org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)\n at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)\n at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)\n at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)\n at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)\n at org.apache.log4j.Category.callAppenders(Category.java:206)\n at org.apache.log4j.Category.forcedLog(Category.java:391)\n at org.apache.log4j.Category.log(Category.java:856)\n at org.elasticsearch.common.logging.log4j.Log4jESLogger.internalWarn(Log4jESLogger.java:135)\n at org.elasticsearch.common.logging.support.AbstractESLogger.warn(AbstractESLogger.java:109)\n at org.elasticsearch.rest.BytesRestResponse.convert(BytesRestResponse.java:134)\n at org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:96)\n at org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:87)\n at org.elasticsearch.rest.action.support.RestActionListener.onFailure(RestActionListener.java:60)\n at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:95)\n at org.elasticsearch.action.search.AbstractSearchAsyncAction.raiseEarlyFailure(AbstractSearchAsyncAction.java:294)\n ... 
10 more\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:251)\n at org.elasticsearch.index.query.IndexQueryParserService.innerParse(IndexQueryParserService.java:320)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:223)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:218)\n at org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:856)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:667)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:633)\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:472)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchServiceTransportAction.java:392)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchServiceTransportAction.java:389)\n at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n ... 3 more\n```\n", "created_at": "2016-10-21T10:24:59Z" }, { "body": "This is fixed now on the 2.4 branch with #21065. On 5.0 the problem doesn't appear any more due to a large change in the query parsing infrastructure.\n", "created_at": "2016-10-21T19:14:01Z" } ], "number": 21061, "title": "Setting highlight query on Inner Hits throwing NullPointerException" }
{ "body": "When parsing nested queries or filters in IndexQueryParserService, the parser on the cached (or provided) QueryParseContext gets reset while parsing the inner element and never gets reset to the original parser that was present before the reset. In #21061 this cause strange NPEs while parsing InnerHits with highlighting where the highlighter itself containes an inner query. On 5.0 we already fixed this problem by cleaning up QueryParseContext internals. This PR captures the original parser present before the reset and restores it after the inner parsing is done.\n\nCloses #21061\n", "number": 21065, "review_comments": [], "title": "Fix NPE when parsing InnerHits with highlight query" }
{ "commits": [ { "message": "Fix NPE when parsing InnerHits with highlight query\n\nWhen parsing nested queries or filters in IndexQueryParserService, the parser on\nthe cached (or provided) QueryParseContext gets reset while parsing the inner\nelement and never gets reset to the original parser that was present before the\nreset. In #21061 this cause strange NPEs while parsing InnerHits with\nhighlighting where the highlighter itself containes an inner query. On 5.0 we\nalready fixed this problem by cleaning up QueryParseContext internals. This PR\ncaptures the original parser present before the reset and restores it after the\ninner parsing is done.\n\nCloses #21061" } ], "files": [ { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.index.query;\n \n import com.google.common.collect.ImmutableMap;\n+\n import org.apache.lucene.search.Query;\n import org.apache.lucene.util.CloseableThreadLocal;\n import org.elasticsearch.Version;\n@@ -232,6 +233,7 @@ public ParsedQuery parse(QueryParseContext context, XContentParser parser) {\n @Nullable\n public ParsedQuery parseInnerFilter(XContentParser parser) throws IOException {\n QueryParseContext context = cache.get();\n+ XContentParser originalParser = context.parser();\n context.reset(parser);\n try {\n Query filter = context.parseInnerFilter();\n@@ -240,18 +242,19 @@ public ParsedQuery parseInnerFilter(XContentParser parser) throws IOException {\n }\n return new ParsedQuery(filter, context.copyNamedQueries());\n } finally {\n- context.reset(null);\n+ context.reset(originalParser);\n }\n }\n \n @Nullable\n public Query parseInnerQuery(XContentParser parser) throws IOException {\n QueryParseContext context = cache.get();\n+ XContentParser originalParser = context.parser();\n context.reset(parser);\n try {\n return context.parseInnerQuery();\n } finally {\n- context.reset(null);\n+ context.reset(originalParser);\n }\n }\n \n@@ -314,6 +317,7 @@ public ParsedQuery parseQuery(BytesReference source) {\n }\n \n private ParsedQuery innerParse(QueryParseContext parseContext, XContentParser parser) throws IOException, QueryParsingException {\n+ XContentParser originalParser = parseContext.parser();\n parseContext.reset(parser);\n try {\n parseContext.parseFieldMatcher(parseFieldMatcher);\n@@ -323,7 +327,7 @@ private ParsedQuery innerParse(QueryParseContext parseContext, XContentParser pa\n }\n return new ParsedQuery(query, parseContext.copyNamedQueries());\n } finally {\n- parseContext.reset(null);\n+ parseContext.reset(originalParser);\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/query/IndexQueryParserService.java", "status": "modified" }, { "diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.index.query.IdsQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.index.query.support.QueryInnerHitBuilder;\n import org.elasticsearch.index.search.child.ScoreType;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.search.SearchHit;\n@@ -833,6 +834,32 @@ public void testHasChildAndHasParentFilter_withFilter() throws Exception {\n assertThat(searchResponse.getHits().hits()[0].id(), equalTo(\"2\"));\n }\n \n+ @Test\n+ public void testHasChildInnerHitsHighlighting() throws Exception {\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"parent\")\n+ .addMapping(\"child\", \"_parent\", \"type=parent\"));\n+ ensureGreen();\n+\n+ client().prepareIndex(\"test\", \"parent\", \"1\").setSource(\"p_field\", 1).get();\n+ 
client().prepareIndex(\"test\", \"child\", \"2\").setParent(\"1\").setSource(\"c_field\", \"foo bar\").get();\n+ client().admin().indices().prepareFlush(\"test\").get();\n+\n+ SearchResponse searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(hasChildQuery(\"child\", matchQuery(\"c_field\", \"foo\")).innerHit(\n+ new QueryInnerHitBuilder()\n+ .addHighlightedField(\n+ new HighlightBuilder.Field(\"c_field\").highlightQuery(QueryBuilders.matchQuery(\"c_field\", \"bar\")))\n+ )).get();\n+ assertNoFailures(searchResponse);\n+ assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n+ assertThat(searchResponse.getHits().hits()[0].id(), equalTo(\"1\"));\n+ SearchHit[] searchHits = searchResponse.getHits().hits()[0].getInnerHits().get(\"child\").hits();\n+ assertThat(searchHits.length, equalTo(1));\n+ assertThat(searchHits[0].getHighlightFields().get(\"c_field\").getFragments().length, equalTo(1));\n+ assertThat(searchHits[0].getHighlightFields().get(\"c_field\").getFragments()[0].string(), equalTo(\"foo <em>bar</em>\"));\n+ }\n+\n @Test\n public void testHasChildAndHasParentWrappedInAQueryFilter() throws Exception {\n assertAcked(prepareCreate(\"test\")", "filename": "core/src/test/java/org/elasticsearch/search/child/ChildQuerySearchIT.java", "status": "modified" } ] }
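Stripped of the query-parsing specifics, the 2.4 fix is the classic save-and-restore idiom: capture the shared state before a nested operation mutates it, and put the original value back in a `finally` block instead of nulling it out. The class below is a minimal, self-contained illustration of that idiom only; the field and method names are invented for the demo and are not part of `QueryParseContext`.

```java
public class SaveRestoreDemo {
    private String currentParser = "outer-parser";

    // Run a nested step that needs to temporarily swap the shared state.
    String runNested(String nestedParser) {
        String original = currentParser; // capture what was there before
        currentParser = nestedParser;    // swap in the nested state
        try {
            return "parsed with " + currentParser;
        } finally {
            currentParser = original;    // restore the outer state, even on failure
        }
    }

    public static void main(String[] args) {
        SaveRestoreDemo demo = new SaveRestoreDemo();
        System.out.println(demo.runNested("inner-parser"));
        // Without the restore, the outer step would be left with a clobbered (null) parser,
        // which is the shape of the NPE the fix addresses.
        System.out.println("restored to: " + demo.currentParser);
    }
}
```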
{ "body": "Follow up for #21039.\n\nWe can revert the previous change and do that a bit smarter than it was.\n\nPatch tested successfully manually on ec2 with 2 nodes with a configuration like:\n\n``` yml\ndiscovery.type: ec2\nnetwork.host: [\"_local_\", \"_site_\", \"_ec2_\"]\ncloud.aws.region: us-west-2\n```\n", "comments": [ { "body": "@rjernst Wanna review this change?\n", "created_at": "2016-10-20T18:05:49Z" } ], "number": 21048, "title": "Fix ec2 discovery when used with IAM profiles." }
{ "body": "Applying same patch we did in #21048 but for `repository-s3` plugin.\n", "number": 21058, "review_comments": [], "title": "Fix s3 repository when used with IAM profiles" }
{ "commits": [ { "message": "Fix s3 repository when used with IAM profiles\n\nApplying same patch we did in #21048 but for `repository-s3` plugin." } ], "files": [ { "diff": "@@ -26,6 +26,7 @@\n import java.util.List;\n import java.util.Map;\n \n+import com.amazonaws.util.json.Jackson;\n import org.elasticsearch.SpecialPermission;\n import org.elasticsearch.cloud.aws.AwsS3Service;\n import org.elasticsearch.cloud.aws.InternalAwsS3Service;\n@@ -42,8 +43,6 @@\n */\n public class S3RepositoryPlugin extends Plugin implements RepositoryPlugin {\n \n- // ClientConfiguration clinit has some classloader problems\n- // TODO: fix that\n static {\n SecurityManager sm = System.getSecurityManager();\n if (sm != null) {\n@@ -53,6 +52,10 @@ public class S3RepositoryPlugin extends Plugin implements RepositoryPlugin {\n @Override\n public Void run() {\n try {\n+ // kick jackson to do some static caching of declared members info\n+ Jackson.jsonNodeOf(\"{}\");\n+ // ClientConfiguration clinit has some classloader problems\n+ // TODO: fix that\n Class.forName(\"com.amazonaws.ClientConfiguration\");\n } catch (ClassNotFoundException e) {\n throw new RuntimeException(e);", "filename": "plugins/repository-s3/src/main/java/org/elasticsearch/plugin/repository/s3/S3RepositoryPlugin.java", "status": "modified" } ] }
{ "body": "Today when parsing a request, Elasticsearch silently ignores incorrect\n(including parameters with typos) or unused parameters. This is bad as\nit leads to requests having unintended behavior (e.g., if a user hits\nthe _analyze API and misspell the \"tokenizer\" then Elasticsearch will\njust use the standard analyzer, completely against intentions).\n\nThis commit removes lenient URL parameter parsing. The strategy is\nsimple: when a request is handled and a parameter is touched, we mark it\nas such. Before the request is actually executed, we check to ensure\nthat all parameters have been consumed. If there are remaining\nparameters yet to be consumed, we fail the request with a list of the\nunconsumed parameters. An exception has to be made for parameters that\nformat the response (as opposed to controlling the request); for this\ncase, handlers are able to provide a list of parameters that should be\nexcluded from tripping the unconsumed parameters check because those\nparameters will be used in formatting the response.\n\nAdditionally, some inconsistencies between the parameters in the code\nand in the docs are corrected.\n\nCloses #14719\n", "comments": [ { "body": "I'm as torn as @s1monw on versioning. I suppose getting this into 5.1 violates semver because garbage on the URL would be identified. I'd love to have it in 5.0 but I think it violates the spirit of code freeze to try and do it. It is super tempting though.\n", "created_at": "2016-10-04T12:33:20Z" }, { "body": "After conversation with @nik9000 and @s1monw, we have decided the leniency here is big enough a bug that we want to ship this code as early as possible. The options on the table were:\n- 5.0.0\n- 5.1.0\n- 6.0.0\n\nWe felt that 5.1.0 is not a viable option, it is too breaking of a change during a minor release, and waiting until 6.0.0 is waiting too long to fix this bug so we will ship this in 5.0.0.\n", "created_at": "2016-10-04T16:14:28Z" } ], "number": 20722, "title": "Remove lenient URL parameter parsing" }
{ "body": "When indices stats are requested via the node stats API, there is a\r\nlevel parameter to request stats at the index, node, or shards\r\nlevel. This parameter was not whitelisted when URL parsing was made\r\nstrict. This commit whitelists this parameter.\r\n\r\nAdditionally, there was some leniency in the parsing of this parameter\r\nthat has been removed.\r\n\r\nRelates #20722\r\n", "number": 21024, "review_comments": [ { "body": "it would be good to have a test that verifies that the param is whitelisted?\n", "created_at": "2016-10-19T16:32:46Z" }, { "body": "should we have a test for this check? It is kinda breaking but if it goes in 5.0 it should be ok I guess.\n", "created_at": "2016-10-19T16:33:12Z" }, { "body": "We have general tests in `BaseRestHandlerTests` for verifying whitelisted parameters works. Do you still think a more specific test is needed here?\n", "created_at": "2016-10-19T16:41:56Z" }, { "body": "I think all we need is a yaml test that uses the parameter or an example of using the parameter in the docs, right? Then we test that this specific parameter is whitelisted.\n", "created_at": "2016-10-19T16:43:38Z" }, { "body": "I see. I was wondering if there was a way to catch that we forgot about it. We test that whitelisted settings work I think, so now that it is whitelisted it works for sure, but what if we un-whitelist it by mistake? :) do you see what I mean? That said I am not sure, maybe we could add a small rest test to the existing ones that leverages this param?\n", "created_at": "2016-10-19T16:46:19Z" }, { "body": "I'll add a small REST test. :smile:\n", "created_at": "2016-10-19T16:52:36Z" }, { "body": "I pushed 748235d354427c7152f4156b254cf8db8e73f1b3.\n", "created_at": "2016-10-19T16:59:18Z" }, { "body": "I pushed 62f055eee054f2a7471d92a38460d5df1f93b9eb.\n", "created_at": "2016-10-19T16:59:22Z" } ], "title": "Whitelist node stats indices level parameter" }
{ "commits": [ { "message": "Whitelist node stats indices level parameter\n\nWhen indices stats are requested via the node stats API, there is a\nlevel parameter to request stats at the index, node, or shards\nlevel. This parameter was not whitelisted when URL parsing was made\nstrict. This commit whitelists this parameter.\n\nAdditionally, there was some leniency in the parsing of this parameter\nthat has been removed." }, { "message": "Add indices stats level REST test\n\nThis commit adds a basic REST test for the level parameter on the\nindices metric for the nodes stats API." }, { "message": "Add test for invalid level on node indices stats\n\nThis commit adds a simple test that NodeIndicesStats#toXContent throws\nan IllegalArgumentException if the level parameter is invalid." }, { "message": "Fix checkstyle violation in NodeIndicesStatsTests\n\nThis commit fixes a line length violation in NodeIndicesStatsTests.java." } ], "files": [ { "diff": "@@ -188,10 +188,11 @@ public void writeTo(StreamOutput out) throws IOException {\n \n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- String level = params.param(\"level\", \"node\");\n- boolean isLevelValid = \"node\".equalsIgnoreCase(level) || \"indices\".equalsIgnoreCase(level) || \"shards\".equalsIgnoreCase(level);\n+ final String level = params.param(\"level\", \"node\");\n+ final boolean isLevelValid =\n+ \"indices\".equalsIgnoreCase(level) || \"node\".equalsIgnoreCase(level) || \"shards\".equalsIgnoreCase(level);\n if (!isLevelValid) {\n- return builder;\n+ throw new IllegalArgumentException(\"level parameter must be one of [indices] or [node] or [shards] but was [\" + level + \"]\");\n }\n \n // \"node\" level", "filename": "core/src/main/java/org/elasticsearch/indices/NodeIndicesStats.java", "status": "modified" }, { "diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.rest.action.RestActions.NodesResponseRestListener;\n \n import java.io.IOException;\n+import java.util.Collections;\n import java.util.Set;\n \n import static org.elasticsearch.rest.RestRequest.Method.GET;\n@@ -114,8 +115,16 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC\n return channel -> client.admin().cluster().nodesStats(nodesStatsRequest, new NodesResponseRestListener<>(channel));\n }\n \n+ private final Set<String> RESPONSE_PARAMS = Collections.singleton(\"level\");\n+\n+ @Override\n+ protected Set<String> responseParams() {\n+ return RESPONSE_PARAMS;\n+ }\n+\n @Override\n public boolean canTripCircuitBreaker() {\n return false;\n }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestNodesStatsAction.java", "status": "modified" }, { "diff": "@@ -0,0 +1,42 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.indices;\n+\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.util.Collections;\n+\n+import static org.hamcrest.CoreMatchers.containsString;\n+import static org.hamcrest.object.HasToString.hasToString;\n+\n+public class NodeIndicesStatsTests extends ESTestCase {\n+\n+ public void testInvalidLevel() {\n+ final NodeIndicesStats stats = new NodeIndicesStats();\n+ final String level = randomAsciiOfLength(16);\n+ final ToXContent.Params params = new ToXContent.MapParams(Collections.singletonMap(\"level\", level));\n+ final IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> stats.toXContent(null, params));\n+ assertThat(\n+ e,\n+ hasToString(containsString(\"level parameter must be one of [indices] or [node] or [shards] but was [\" + level + \"]\")));\n+ }\n+\n+}", "filename": "core/src/test/java/org/elasticsearch/indices/NodeIndicesStatsTests.java", "status": "added" }, { "diff": "@@ -52,8 +52,8 @@\n },\n \"level\": {\n \"type\" : \"enum\",\n- \"description\": \"Return indices stats aggregated at node, index or shard level\",\n- \"options\" : [\"node\", \"indices\", \"shards\"],\n+ \"description\": \"Return indices stats aggregated at index, node or shard level\",\n+ \"options\" : [\"indices\", \"node\", \"shards\"],\n \"default\" : \"node\"\n },\n \"types\" : {", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/nodes.stats.json", "status": "modified" }, { "diff": "@@ -6,3 +6,17 @@\n \n - is_true: cluster_name\n - is_true: nodes\n+\n+---\n+\"Nodes stats level\":\n+ - do:\n+ cluster.state: {}\n+\n+ - set: { master_node: master }\n+\n+ - do:\n+ nodes.stats:\n+ metric: [ indices ]\n+ level: \"indices\"\n+\n+ - is_true: nodes.$master.indices.indices", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/nodes.stats/10_basic.yaml", "status": "modified" } ] }
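The change has two independent halves: declare `level` as a response-formatting parameter so strict URL parsing does not trip over it, and validate its value instead of silently falling back. The sketch below restates both checks in isolation; it is not the `RestNodesStatsAction` or `NodeIndicesStats` code, and the class and method names are invented for the example.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class LevelParamSketch {
    // Parameters that only shape the response and therefore must be whitelisted
    // against the strict "unconsumed parameter" check.
    static final Set<String> RESPONSE_PARAMS = new HashSet<>(Arrays.asList("level"));

    static final Set<String> ALLOWED_LEVELS = new HashSet<>(Arrays.asList("indices", "node", "shards"));

    static String validateLevel(String level) {
        if (!ALLOWED_LEVELS.contains(level.toLowerCase(Locale.ROOT))) {
            throw new IllegalArgumentException(
                "level parameter must be one of [indices] or [node] or [shards] but was [" + level + "]");
        }
        return level;
    }

    public static void main(String[] args) {
        System.out.println(RESPONSE_PARAMS.contains("level")); // whitelisted, so never "unconsumed"
        System.out.println(validateLevel("shards"));           // accepted
        try {
            validateLevel("bogus");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());                // rejected with a clear error
        }
    }
}
```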
{ "body": "_Note: Source is https://discuss.elastic.co/t/es-5-0-rc1-sorting-by-script-prevent-cachability-are-disabled-problem/63298_\n\n**Elasticsearch version**: 5.0.0-rc1\n\n**Plugins installed**: None\n\n**JVM version**: 1.8.0_102\n\n**OS version**: Mac OS X 15.6.0\n\n**Description of the problem including expected versus actual behavior**:\n\nConsider the following query:\n\n``` json\nGET /_search\n{\n \"stored_fields\": [\n \"_type\",\n \"_id\"\n ],\n \"query\": {\n \"bool\": {\n \"must\": [\n {\n \"match_all\": {}\n }\n ],\n \"filter\": [\n {\n \"range\": {\n \"echeance_max\": {\n \"gte\": \"0\"\n }\n }\n }\n ]\n }\n },\n \"aggs\": {\n \"par_types\": {\n \"terms\": {\n \"field\": \"type\"\n },\n \"aggs\": {\n \"lesplus_proches\": {\n \"top_hits\": {\n \"sort\": [\n {\n \"_script\": {\n \"type\": \"number\",\n \"script\": {\n \"lang\": \"painless\",\n \"inline\": \"floor(abs((doc['altitude'].value)-search_altitude)/10)\",\n \"params\": {\n \"search_altitude\": 200\n }\n },\n \"order\": \"asc\"\n }\n },\n {\n \"geodistance\": {\n \"geoloc\": {\n \"lat\": 39.9173,\n \"lon\": 116.386\n },\n \"order\": \"asc\",\n \"unit\": \"km\"\n }\n }\n ],\n \"size\": 1\n }\n }\n }\n }\n }\n}\n```\n\nThis produces:\n\n``` json\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"illegal_argument_exception\",\n \"reason\": \"features that prevent cachability are disabled on this context\"\n }\n ]\n }\n}\n```\n", "comments": [ { "body": "@danielmitterdorfer thanks, I'm looking into this now\n", "created_at": "2016-10-19T14:56:08Z" }, { "body": "@danielmitterdorfer Also note that you cannot reference `search_altitude` like that in painless, you need to reference it from params like `params.search_altitude`.\n", "created_at": "2016-10-19T16:03:48Z" }, { "body": "> @danielmitterdorfer Also note that you cannot reference search_altitude like that in painless, you need to reference it from params like params.search_altitude.\n\n@rjernst Thanks for the hint. This was just a 1:1 copy of the example of the user. I'll forward this info on Discuss.\n", "created_at": "2016-10-20T06:46:15Z" } ], "number": 21022, "title": "script scoring in top hits aggregation produces IllegalArgumentException" }
{ "body": "Previous to this change any request using a script sort in a top_hits\naggregation would fail because the compilation of the script happened\nafter the QueryShardContext was frozen (after we had worked out if the\nrequest is cachable).\n\nThis change moves the calling of build() on the SortBuilder to the\nTopHitsAggregationBuilder which means that the script in the script_sort\nwill be compiled before we decide whether to cache the request and freeze\nthe context.\n\nCloses #21022\n", "number": 21023, "review_comments": [ { "body": "any chance we can get this as a rest test?\n", "created_at": "2016-10-19T15:38:06Z" }, { "body": "Unfortunately not as the rest tests do not install the modules so you can't currently tests scripts in rest tests\n", "created_at": "2016-10-19T16:34:40Z" } ], "title": "Fixes bug preventing script sort working on top_hits aggregation" }
{ "commits": [ { "message": "Fixes bug preventing script sort working on top_hits aggregation\n\nPrevious to this change any request using a script sort in a top_hits\naggregation would fail because the compilation of the script happened\nafter the QueryShardContext was frozen (after we had worked out if the\nrequest is cachable).\n\nThis change moves the calling of build() on the SortBuilder to the\nTopHitsAggregationBuilder which means that the script in the script_sort\nwill be compiled before we decide whether to cache the request and freeze\nthe context.\n\nCloses #21022" } ], "files": [ { "diff": "@@ -44,6 +44,7 @@\n import org.elasticsearch.search.fetch.subphase.ScriptFieldsContext;\n import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;\n import org.elasticsearch.search.sort.ScoreSortBuilder;\n+import org.elasticsearch.search.sort.SortAndFormats;\n import org.elasticsearch.search.sort.SortBuilder;\n import org.elasticsearch.search.sort.SortBuilders;\n import org.elasticsearch.search.sort.SortOrder;\n@@ -54,6 +55,7 @@\n import java.util.HashSet;\n import java.util.List;\n import java.util.Objects;\n+import java.util.Optional;\n import java.util.Set;\n \n public class TopHitsAggregationBuilder extends AbstractAggregationBuilder<TopHitsAggregationBuilder> {\n@@ -539,9 +541,15 @@ protected TopHitsAggregatorFactory doBuild(AggregationContext context, Aggregato\n field.fieldName(), searchScript, field.ignoreFailure()));\n }\n }\n- return new TopHitsAggregatorFactory(name, type, from, size, explain, version, trackScores, sorts, highlightBuilder,\n- storedFieldsContext, fieldDataFields, fields, fetchSourceContext, context,\n- parent, subfactoriesBuilder, metaData);\n+\n+ final Optional<SortAndFormats> optionalSort;\n+ if (sorts == null) {\n+ optionalSort = Optional.empty();\n+ } else {\n+ optionalSort = SortBuilder.buildSort(sorts, context.searchContext().getQueryShardContext());\n+ }\n+ return new TopHitsAggregatorFactory(name, type, from, size, explain, version, trackScores, optionalSort, highlightBuilder,\n+ storedFieldsContext, fieldDataFields, fields, fetchSourceContext, context, parent, subfactoriesBuilder, metaData);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -32,8 +32,6 @@\n import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;\n import org.elasticsearch.search.internal.SubSearchContext;\n import org.elasticsearch.search.sort.SortAndFormats;\n-import org.elasticsearch.search.sort.SortBuilder;\n-\n import java.io.IOException;\n import java.util.List;\n import java.util.Map;\n@@ -46,15 +44,15 @@ public class TopHitsAggregatorFactory extends AggregatorFactory<TopHitsAggregato\n private final boolean explain;\n private final boolean version;\n private final boolean trackScores;\n- private final List<SortBuilder<?>> sorts;\n+ private final Optional<SortAndFormats> sort;\n private final HighlightBuilder highlightBuilder;\n private final StoredFieldsContext storedFieldsContext;\n private final List<String> docValueFields;\n private final List<ScriptFieldsContext.ScriptField> scriptFields;\n private final FetchSourceContext fetchSourceContext;\n \n public TopHitsAggregatorFactory(String name, Type type, int from, int size, boolean explain, boolean version, boolean trackScores,\n- List<SortBuilder<?>> sorts, HighlightBuilder highlightBuilder, StoredFieldsContext storedFieldsContext,\n+ Optional<SortAndFormats> sort, 
HighlightBuilder highlightBuilder, StoredFieldsContext storedFieldsContext,\n List<String> docValueFields, List<ScriptFieldsContext.ScriptField> scriptFields, FetchSourceContext fetchSourceContext,\n AggregationContext context, AggregatorFactory<?> parent, AggregatorFactories.Builder subFactories, Map<String, Object> metaData)\n throws IOException {\n@@ -64,7 +62,7 @@ public TopHitsAggregatorFactory(String name, Type type, int from, int size, bool\n this.explain = explain;\n this.version = version;\n this.trackScores = trackScores;\n- this.sorts = sorts;\n+ this.sort = sort;\n this.highlightBuilder = highlightBuilder;\n this.storedFieldsContext = storedFieldsContext;\n this.docValueFields = docValueFields;\n@@ -82,11 +80,8 @@ public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBu\n subSearchContext.trackScores(trackScores);\n subSearchContext.from(from);\n subSearchContext.size(size);\n- if (sorts != null) {\n- Optional<SortAndFormats> optionalSort = SortBuilder.buildSort(sorts, subSearchContext.getQueryShardContext());\n- if (optionalSort.isPresent()) {\n- subSearchContext.sort(optionalSort.get());\n- }\n+ if (sort.isPresent()) {\n+ subSearchContext.sort(sort.get());\n }\n if (storedFieldsContext != null) {\n subSearchContext.storedFieldsContext(storedFieldsContext);", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorFactory.java", "status": "modified" }, { "diff": "@@ -48,6 +48,7 @@\n import org.elasticsearch.search.aggregations.metrics.tophits.TopHits;\n import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;\n import org.elasticsearch.search.fetch.subphase.highlight.HighlightField;\n+import org.elasticsearch.search.sort.ScriptSortBuilder.ScriptSortType;\n import org.elasticsearch.search.sort.SortBuilders;\n import org.elasticsearch.search.sort.SortOrder;\n import org.elasticsearch.test.ESIntegTestCase;\n@@ -1010,11 +1011,23 @@ public void testDontCacheScripts() throws Exception {\n assertThat(client().admin().indices().prepareStats(\"cache_test_idx\").setRequestCache(true).get().getTotal().getRequestCache()\n .getMissCount(), equalTo(0L));\n \n- // Test that a request using a script does not get cached\n+ // Test that a request using a script field does not get cached\n SearchResponse r = client().prepareSearch(\"cache_test_idx\").setSize(0)\n .addAggregation(topHits(\"foo\").scriptField(\"bar\", new Script(\"5\", ScriptType.INLINE, CustomScriptPlugin.NAME, null))).get();\n assertSearchResponse(r);\n \n+ assertThat(client().admin().indices().prepareStats(\"cache_test_idx\").setRequestCache(true).get().getTotal().getRequestCache()\n+ .getHitCount(), equalTo(0L));\n+ assertThat(client().admin().indices().prepareStats(\"cache_test_idx\").setRequestCache(true).get().getTotal().getRequestCache()\n+ .getMissCount(), equalTo(0L));\n+\n+ // Test that a request using a script sort does not get cached\n+ r = client().prepareSearch(\"cache_test_idx\").setSize(0)\n+ .addAggregation(topHits(\"foo\").sort(\n+ SortBuilders.scriptSort(new Script(\"5\", ScriptType.INLINE, CustomScriptPlugin.NAME, null), ScriptSortType.STRING)))\n+ .get();\n+ assertSearchResponse(r);\n+\n assertThat(client().admin().indices().prepareStats(\"cache_test_idx\").setRequestCache(true).get().getTotal().getRequestCache()\n .getHitCount(), equalTo(0L));\n assertThat(client().admin().indices().prepareStats(\"cache_test_idx\").setRequestCache(true).get().getTotal().getRequestCache()", "filename": 
"core/src/test/java/org/elasticsearch/search/aggregations/metrics/TopHitsIT.java", "status": "modified" } ] }
{ "body": "Today when parsing a request, Elasticsearch silently ignores incorrect\n(including parameters with typos) or unused parameters. This is bad as\nit leads to requests having unintended behavior (e.g., if a user hits\nthe _analyze API and misspell the \"tokenizer\" then Elasticsearch will\njust use the standard analyzer, completely against intentions).\n\nThis commit removes lenient URL parameter parsing. The strategy is\nsimple: when a request is handled and a parameter is touched, we mark it\nas such. Before the request is actually executed, we check to ensure\nthat all parameters have been consumed. If there are remaining\nparameters yet to be consumed, we fail the request with a list of the\nunconsumed parameters. An exception has to be made for parameters that\nformat the response (as opposed to controlling the request); for this\ncase, handlers are able to provide a list of parameters that should be\nexcluded from tripping the unconsumed parameters check because those\nparameters will be used in formatting the response.\n\nAdditionally, some inconsistencies between the parameters in the code\nand in the docs are corrected.\n\nCloses #14719\n", "comments": [ { "body": "I'm as torn as @s1monw on versioning. I suppose getting this into 5.1 violates semver because garbage on the URL would be identified. I'd love to have it in 5.0 but I think it violates the spirit of code freeze to try and do it. It is super tempting though.\n", "created_at": "2016-10-04T12:33:20Z" }, { "body": "After conversation with @nik9000 and @s1monw, we have decided the leniency here is big enough a bug that we want to ship this code as early as possible. The options on the table were:\n- 5.0.0\n- 5.1.0\n- 6.0.0\n\nWe felt that 5.1.0 is not a viable option, it is too breaking of a change during a minor release, and waiting until 6.0.0 is waiting too long to fix this bug so we will ship this in 5.0.0.\n", "created_at": "2016-10-04T16:14:28Z" } ], "number": 20722, "title": "Remove lenient URL parameter parsing" }
{ "body": "This commit removes an undocumented output parameter node_info_format\nfrom the cluster stats and node stats APIs. Currently the parameter does\nnot even work as it is not whitelisted as an output parameter. Since\nthis parameter is not documented, we opt to just remove it.\n\nRelates #20722\n", "number": 21021, "review_comments": [], "title": "Remove node_info_format parameter from node stats" }
{ "commits": [ { "message": "Remove node_info_format parameter from node stats\n\nThis commit removes an undocumented output parameter node_info_format\nfrom the cluster stats and node stats APIs. Currently the parameter does\nnot even work as it is not whitelisted as an output parameter. Since\nthis parameter is not documented, we opt to just remove it." } ], "files": [ { "diff": "@@ -249,26 +249,26 @@ public void writeTo(StreamOutput out) throws IOException {\n \n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- if (!params.param(\"node_info_format\", \"default\").equals(\"none\")) {\n- builder.field(\"name\", getNode().getName());\n- builder.field(\"transport_address\", getNode().getAddress().toString());\n- builder.field(\"host\", getNode().getHostName());\n- builder.field(\"ip\", getNode().getAddress());\n-\n- builder.startArray(\"roles\");\n- for (DiscoveryNode.Role role : getNode().getRoles()) {\n- builder.value(role.getRoleName());\n- }\n- builder.endArray();\n-\n- if (!getNode().getAttributes().isEmpty()) {\n- builder.startObject(\"attributes\");\n- for (Map.Entry<String, String> attrEntry : getNode().getAttributes().entrySet()) {\n- builder.field(attrEntry.getKey(), attrEntry.getValue());\n- }\n- builder.endObject();\n+\n+ builder.field(\"name\", getNode().getName());\n+ builder.field(\"transport_address\", getNode().getAddress().toString());\n+ builder.field(\"host\", getNode().getHostName());\n+ builder.field(\"ip\", getNode().getAddress());\n+\n+ builder.startArray(\"roles\");\n+ for (DiscoveryNode.Role role : getNode().getRoles()) {\n+ builder.value(role.getRoleName());\n+ }\n+ builder.endArray();\n+\n+ if (!getNode().getAttributes().isEmpty()) {\n+ builder.startObject(\"attributes\");\n+ for (Map.Entry<String, String> attrEntry : getNode().getAttributes().entrySet()) {\n+ builder.field(attrEntry.getKey(), attrEntry.getValue());\n }\n+ builder.endObject();\n }\n+\n if (getIndices() != null) {\n getIndices().toXContent(builder, params);\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStats.java", "status": "modified" } ] }
{ "body": "Today when parsing a request, Elasticsearch silently ignores incorrect\n(including parameters with typos) or unused parameters. This is bad as\nit leads to requests having unintended behavior (e.g., if a user hits\nthe _analyze API and misspell the \"tokenizer\" then Elasticsearch will\njust use the standard analyzer, completely against intentions).\n\nThis commit removes lenient URL parameter parsing. The strategy is\nsimple: when a request is handled and a parameter is touched, we mark it\nas such. Before the request is actually executed, we check to ensure\nthat all parameters have been consumed. If there are remaining\nparameters yet to be consumed, we fail the request with a list of the\nunconsumed parameters. An exception has to be made for parameters that\nformat the response (as opposed to controlling the request); for this\ncase, handlers are able to provide a list of parameters that should be\nexcluded from tripping the unconsumed parameters check because those\nparameters will be used in formatting the response.\n\nAdditionally, some inconsistencies between the parameters in the code\nand in the docs are corrected.\n\nCloses #14719\n", "comments": [ { "body": "I'm as torn as @s1monw on versioning. I suppose getting this into 5.1 violates semver because garbage on the URL would be identified. I'd love to have it in 5.0 but I think it violates the spirit of code freeze to try and do it. It is super tempting though.\n", "created_at": "2016-10-04T12:33:20Z" }, { "body": "After conversation with @nik9000 and @s1monw, we have decided the leniency here is big enough a bug that we want to ship this code as early as possible. The options on the table were:\n- 5.0.0\n- 5.1.0\n- 6.0.0\n\nWe felt that 5.1.0 is not a viable option, it is too breaking of a change during a minor release, and waiting until 6.0.0 is waiting too long to fix this bug so we will ship this in 5.0.0.\n", "created_at": "2016-10-04T16:14:28Z" } ], "number": 20722, "title": "Remove lenient URL parameter parsing" }
{ "body": "This commit removes an undocumented output parameter output_uuid from\nthe cluster stats API. Currently the parameter does not even work as it\nis not whitelisted as an output parameter. Since the cluster UUID is\navailable from the main action, and this parameter is not documented, we\nopt to just remove it.\n\nRelates #20722\n", "number": 21020, "review_comments": [], "title": "Remove output_uuid parameter from cluster stats" }
{ "commits": [ { "message": "Remove output_uuid parameter from cluster stats\n\nThis commit removes an undocumented output parameter output_uuid from\nthe cluster stats API. Currently the parameter does not even work as it\nis not whitelisted as an output parameter. Since the cluster UUID is\navailable from the main action, and this parameter is not documented, we\nopt to just remove it." } ], "files": [ { "diff": "@@ -37,19 +37,18 @@ public class ClusterStatsResponse extends BaseNodesResponse<ClusterStatsNodeResp\n \n ClusterStatsNodes nodesStats;\n ClusterStatsIndices indicesStats;\n- String clusterUUID;\n ClusterHealthStatus status;\n long timestamp;\n \n-\n ClusterStatsResponse() {\n }\n \n- public ClusterStatsResponse(long timestamp, ClusterName clusterName, String clusterUUID,\n- List<ClusterStatsNodeResponse> nodes, List<FailedNodeException> failures) {\n+ public ClusterStatsResponse(long timestamp,\n+ ClusterName clusterName,\n+ List<ClusterStatsNodeResponse> nodes,\n+ List<FailedNodeException> failures) {\n super(clusterName, nodes, failures);\n this.timestamp = timestamp;\n- this.clusterUUID = clusterUUID;\n nodesStats = new ClusterStatsNodes(nodes);\n indicesStats = new ClusterStatsIndices(nodes);\n for (ClusterStatsNodeResponse response : nodes) {\n@@ -81,7 +80,6 @@ public ClusterStatsIndices getIndicesStats() {\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n timestamp = in.readVLong();\n- clusterUUID = in.readString();\n // it may be that the master switched on us while doing the operation. In this case the status may be null.\n status = in.readOptionalWriteable(ClusterHealthStatus::readFrom);\n }\n@@ -90,7 +88,6 @@ public void readFrom(StreamInput in) throws IOException {\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n out.writeVLong(timestamp);\n- out.writeString(clusterUUID);\n out.writeOptionalWriteable(status);\n }\n \n@@ -114,9 +111,6 @@ protected void writeNodesTo(StreamOutput out, List<ClusterStatsNodeResponse> nod\n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n builder.field(\"timestamp\", getTimestamp());\n- if (params.paramAsBoolean(\"output_uuid\", false)) {\n- builder.field(\"uuid\", clusterUUID);\n- }\n if (status != null) {\n builder.field(\"status\", status.name().toLowerCase(Locale.ROOT));\n }\n@@ -141,4 +135,5 @@ public String toString() {\n return \"{ \\\"error\\\" : \\\"\" + e.getMessage() + \"\\\"}\";\n }\n }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsResponse.java", "status": "modified" }, { "diff": "@@ -72,8 +72,11 @@ public TransportClusterStatsAction(Settings settings, ThreadPool threadPool,\n @Override\n protected ClusterStatsResponse newResponse(ClusterStatsRequest request,\n List<ClusterStatsNodeResponse> responses, List<FailedNodeException> failures) {\n- return new ClusterStatsResponse(System.currentTimeMillis(), clusterService.getClusterName(),\n- clusterService.state().metaData().clusterUUID(), responses, failures);\n+ return new ClusterStatsResponse(\n+ System.currentTimeMillis(),\n+ clusterService.getClusterName(),\n+ responses,\n+ failures);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 2.4.0\n\n**Plugins installed**: [kopf]\n\n**JVM version**: 1.8.0_60\n\n**OS version**: OSX 10.11.6 locally, reproducible on CentOS 7.2.1511 as well\n\n**Description of the problem including expected versus actual behavior**:\n\nAfter upgrading to 2.4.0 our templates are no longer functioning from 2.3.3.\n\nWe have our template split up into several templates so we can specify default analyzers per language, and we share analyzers/filters/tokenizers between the templates.\n\n**Steps to reproduce**:\n1. Push a shared template with common analyzers/filters\n\n```\nPOST /_template/analyzers\n{\n \"template\": \"*\",\n \"order\": 20,\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"untouched_analyzer\": {\n \"type\": \"custom\",\n \"tokenizer\": \"keyword\",\n \"filter\": [ \"max_length\", \"lowercase\" ]\n },\n \"no_stopwords_analyzer\": {\n \"type\": \"custom\",\n \"tokenizer\": \"standard\",\n \"filter\": [ \"max_length\", \"standard\", \"lowercase\" ]\n }\n },\n \"tokenizer\": {\n \"standard\": {\n \"type\": \"standard\",\n \"version\": \"4.4\"\n }\n },\n \"filter\": {\n \"max_length\": {\n \"type\": \"length\",\n \"max\": \"32766\"\n }\n }\n }\n }\n}\n```\n1. Push a language specific analyzer template, which is a `czech` analyzer referencing a filter from the `analyzers` template.\n\n```\nPOST /_template/analyzer-cs\n{\n \"template\": \"*-cs\",\n \"order\": 30,\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"default\": {\n \"type\": \"czech\",\n \"filter\": [ \"max_length\" ]\n }\n }\n }\n }\n}\n```\n1. Push another language specific analyzer template which uses a custom analyzer, referencing some filters from the shared `analyzers` template.\n\n```\nPOST /_template/analyzer-en\n{\n \"template\": \"*-en\",\n \"order\": 30,\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"default\": {\n \"type\": \"custom\",\n \"tokenizer\": \"standard\",\n \"filter\": [ \"max_length\", \"standard\", \"lowercase\", \"stop_en\" ]\n }\n },\n \"filter\": {\n \"stop_en\": {\n \"type\": \"stop\",\n \"stopwords\": \"_english_\",\n \"ignore_case\": \"true\"\n }\n }\n }\n }\n}\n```\n\nThis action fails, resulting in:\n\n```\n2016-09-13 09:52:00,174][DEBUG][action.admin.indices.template.put] [Zartra] failed to put template [analyzers-en]\n[kOccXLKbR5CwAsXCt0v_3w] IndexCreationException[failed to create index]; nested: IllegalArgumentException[Custom Analyzer [default] failed to find filter under name [max_length]];\n```\n\nHowever, as you see the first `analyzer-cs` was able to reference the `max_length` filter without issue.\n\nThese templates worked fine in 2.3.3 and 2.3.4, haven't tried 2.3.5 but it sounds like there weren't any changes to that release. Reading over release notes for 2.4.0 don't seem to indicate anything that would prevent this functionality from being supported. 
\n\nWe were seeing similar issues with referencing the shared analyzers in our mapping template file.\n\n**Provide logs (if relevant)**:\n\n```\n 2016-09-13 09:52:00,174][DEBUG][action.admin.indices.template.put] [Zartra] failed to put template [analyzers-en]\n [kOccXLKbR5CwAsXCt0v_3w] IndexCreationException[failed to create index]; nested: IllegalArgumentException[Custom Analyzer [default] failed to find filter under name [max_length]];\n at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:360)\n at org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService.validateAndAddTemplate(MetaDataIndexTemplateService.java:196)\n at org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService.access$200(MetaDataIndexTemplateService.java:57)\n at org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService$2.execute(MetaDataIndexTemplateService.java:157)\n at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:45)\n at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n Caused by: java.lang.IllegalArgumentException: Custom Analyzer [default] failed to find filter under name [max_length]\n at org.elasticsearch.index.analysis.CustomAnalyzerProvider.build(CustomAnalyzerProvider.java:76)\n at org.elasticsearch.index.analysis.AnalysisService.<init>(AnalysisService.java:216)\n at org.elasticsearch.index.analysis.AnalysisService.<init>(AnalysisService.java:70)\n at sun.reflect.GeneratedConstructorAccessor6.newInstance(Unknown Source)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:50)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:886)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\n at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)\n at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)\n at 
org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:886)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\n at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)\n at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:886)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\n at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:201)\n at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:879)\n at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)\n at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)\n at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)\n at org.elasticsearch.common.inject.InjectorImpl.createChildInjector(InjectorImpl.java:154)\n at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:55)\n at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:358)\n ... 11 more\n```\n", "comments": [ { "body": "@johtani Could you take a look at this please?\n", "created_at": "2016-09-14T16:03:08Z" }, { "body": "Hi @mbarker \n\nBefore 2.4, we didn't validate any template on index template creation.\nWe added validation on index template creation in [this PR](https://github.com/elastic/elasticsearch/pull/8802).\nIn this PR, we validate each template as a complete mapping and settings,\nwe don't check combination because we cannot check all combination when template create. \nNow, you should add the `max_length` filter setting each template for avoiding error.\n\n@clintongormley Should we support to validate any template with only `*` template?\nI think we can not predict templates combination except `*` template. 
\n", "created_at": "2016-09-16T02:18:22Z" }, { "body": "@johtani It seems to me like validating a template should be scoped to the context it should have available during indexing, unless I'm misunderstanding how ES supports multiple templates.\n\nWe certainly could replicate all of the analyzers/filters to all the templates where they are used, I was trying to avoid duplicating parts of the template.\n\nIf it would help I can give more details about how we are trying to compose the templates and the rationale behind it.\n", "created_at": "2016-09-16T04:54:38Z" }, { "body": "Thanks. I will think what the validation is more useful for users.\n\nValidating template on index template creation is helpful for many users because users know the error earlier than validating creating index, especially logging use-case.\nHowever in your situation, this validation is not useful because template is bigger than before 2.4.\n\ne.g. If there is skip validation flag and simulate template with index name without creating index, is it useful?\n", "created_at": "2016-09-16T05:26:01Z" }, { "body": "There is something else going on here. I (like @mbarker ) didn't understand why this template succeeded:\n\n```\nPOST /_template/analyzer-cs\n{\n \"template\": \"*-cs\",\n \"order\": 30,\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"default\": {\n \"type\": \"czech\",\n \"filter\": [ \"max_length\" ]\n }\n }\n }\n }\n}\n```\n\nwhile this template didn't:\n\n```\nPOST /_template/analyzer-en\n{\n \"template\": \"*-en\",\n \"order\": 30,\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"default\": {\n \"type\": \"custom\",\n \"tokenizer\": \"standard\",\n \"filter\": [\n \"max_length\"\n ]\n }\n }\n }\n }\n}\n```\n\nThe reason is that the first analyzer definition is invalid, but is not validated:\n\n```\n \"analyzer\": {\n \"default\": {\n \"type\": \"czech\",\n \"filter\": [ \"max_length\" ]\n }\n }\n```\n\nYou're using the `czech` analyzer and passing in a parameter `filter`, which the czech analyzer doesn't recognise but silently ignores. We should add validation to analyzers to complain about any left over parameters.\n", "created_at": "2016-09-19T08:52:27Z" }, { "body": "> If there is skip validation flag and simulate template with index name without creating index, is it useful?\n\nI like the idea of a skip evaluation flag. Not sure the simulate bit is needed - easy to test by just creating an index.\n", "created_at": "2016-09-19T08:54:04Z" }, { "body": "Or elasticsearch can create template always and if a template has some errors elasticsearch only return warning message. what do you think? @clintongormley \n", "created_at": "2016-09-21T03:41:36Z" }, { "body": "I'm wondering how hard it would be to figure the parent templates of a given template (because the wildcard is more general) and apply them as well at validation time.\n", "created_at": "2016-09-21T07:30:50Z" }, { "body": "You could always make up name for the index to validate based on the template matching, ie replace `*` with `test` and resolve dependencies that way.\n\nThe only caveat would be users have to upload templates in dependency order.\n", "created_at": "2016-09-21T13:21:44Z" }, { "body": "@jpountz @mbarker Thanks for your comment. Ah, you are right. I try to fix it.\n", "created_at": "2016-09-21T14:02:07Z" }, { "body": "If templates are validated using their dependencies when creating or updating, then what will happen when a template's dependency is deleted or updated? For example template A depends on B:\n1. 
template B created\n2. template A created\n3. template B deleted\n4. ?\n\nHeres another thought: what if a template is invalid by itself, but the index is created with explicit settings that the template depends on?\n\nAnother possible issue: What if a template overrides something in a another template that results in a final mapping that is invalid?\n", "created_at": "2016-09-21T18:15:29Z" }, { "body": "@qwerty4030 Thanks for your comment!\n\nIn your 1st case, I think it is OK that elasticsearch does not validate at template deletion time.\nWe get the error only when user creates index using template B.\n\nIn your 3rd case, we can validate overriding only the parent templates.\n\nI' not sure your 2nd case. Could you explain any examples?\n", "created_at": "2016-09-22T08:33:00Z" }, { "body": "Just tried this in 2.3.2: \n1. create template that is invalid by itself (references `custom_analyzer` that is not defined).\n2. attempt to create index that matches this template (fails with `mapper_parsing_exception` as expected).\n3. attempt to create index that matches this template **and** specifies the `custom_analyzer` in the `settings` (this works as expected).\n\nTo reproduce:\n\n```\nPUT /_template/test-template\n{\n \"template\": \"data-test-*\",\n \"order\": 0,\n \"mappings\": {\n \"test-type\": {\n \"properties\": {\n \"field\": {\n \"type\": \"string\",\n \"analyzer\": \"custom_analyzer\"\n }\n }\n }\n }\n}\n\nPUT data-test-1\n\nPUT data-test-1\n{\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"custom_analyzer\": {\n \"type\": \"custom\",\n \"tokenizer\": \"standard\",\n \"filter\": [\n \"lowercase\"\n ]\n }\n }\n }\n }\n}\n```\n", "created_at": "2016-09-22T20:38:25Z" }, { "body": "Thanks. I think Elasticsearch should return the error in your cases, because that is tricky settings.\n", "created_at": "2016-10-02T13:12:06Z" }, { "body": "As I explained in #20919, this is a design error: templates should be validated only as much as any _incomplete_ object can be validated. Elasticsearch shoud reject a template only if it contains plain inconsistencies or grammar violations.\n\nOn top of that, **this is a breaking change** which you failed to document!\n\nI have a patch that fixes the too hasty conclusion that a referred analyzer should have been declared in the same template. I consider it elegant enough for production, but I didn't bother with updating the tests.\nIf anyone is interested, I'm willing to publish it.\n", "created_at": "2016-10-14T15:21:48Z" }, { "body": "The code is in [this branch](https://github.com/acarstoiu/elasticsearch/tree/wrong-template-validation). GitHub doesn't let me to submit a pull request against a tag, neither from a particular commit. :-1: \n", "created_at": "2016-10-14T17:08:06Z" }, { "body": "> I'm wondering how hard it would be to figure the parent templates of a given template (because the wildcard is more general) and apply them as well at validation time.\n\nIt has occurred to me that we can never do this absolutely correctly. eg a template for `logs-*` may coincide with a template for `*-2016-*` or it may not. We can't tell without an index name.\n\nI'm wondering if, instead of trying to be clever, we should just allow a flag to disable template validation and template PUT time.\n\n> The code is in this branch. GitHub doesn't let me to submit a pull request against a tag, neither from a particular commit. 👎\n\n@acarstoiu there's a good reason for that... we're not planning on making a release from a patch applied to a tag. 
\n", "created_at": "2016-10-17T08:42:48Z" }, { "body": "@clintongormley let's try to focus on the matter, as I'm not interested in your SCM rules and I'm sure you can cherry-pick the fixing commit anytime you want. I merely justified the lack of a proper pull request.\n\nI do not expect the commit to be accepted by itself, simply because it lacks the tests counterpart. I also did my best to preserve the coding style and the existing behaviour, but might not fit your taste. Other than that, it is a _good starting point_, we use it in production.\n\nOn the other hand, what you should do _urgently_ is to document this **truely breaking change**. Thank you.\n", "created_at": "2016-10-17T15:42:07Z" }, { "body": "@acarstoiu you don't seem to get how the open source world works. Frankly, your attitude makes me disinclined to converse with you further.\n", "created_at": "2016-10-17T17:34:00Z" }, { "body": "@clintongormley you do that, I agree.\nAs for the open source world, I already did two things: served this project a patch and urged repeatedly its team to publish this **breaking change** [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking-changes.html) for the benefit of _other users_.\n\nAnd by the way, you can find in there lots of deprecations under the \"breaking change\" label, but when a true one is discovered, you keep it hidden.\n", "created_at": "2016-10-20T14:18:11Z" }, { "body": "@clintongormley I for one agree with @acarstoiu and frankly, your dismissal of this _extremely inconvenient **non-documented** breaking change that has completely affected our cluster and must now be downgraded across the board_ is very concerning in the least. You are literally costing us time and money. \n\n> you don't seem to get how the open source world works. \n\nYou don't seem to understand how businesses work and seem to have very little regard as to the users of your product when you inexplicably hide information and then dismiss users that hit the issue because you don't like their attitude when they're justifiably irritated?\n\nIsn't one of the tenets of open source software transparency? If so, you've missed the mark on this one.\n", "created_at": "2016-11-01T23:36:44Z" }, { "body": "@clintongormley as mentioned in https://github.com/elastic/elasticsearch/issues/21105#issuecomment-263380122 I think this should be noted in the 2.4 migration docs as a breaking change. Took us a while to track down this issue while attempting to migrate our 2.3.x cluster.", "created_at": "2016-11-28T20:12:24Z" }, { "body": "@marshall007 Yes I agree - I've just been rather put off working on this issue by the comments of others. If you'd like to submit a PR adding the breaking changes docs, I'd be happy to merge it.", "created_at": "2016-11-29T12:20:02Z" }, { "body": "@clintongormley sure thing! I have it up in PR #21869.", "created_at": "2016-11-29T19:32:24Z" }, { "body": "> @clintongormley sure thing! I have it up in PR #21869.\r\n\r\nthanks @marshall007 - merged", "created_at": "2016-11-30T09:40:20Z" }, { "body": "@clintongormley @marshall007 is that PR supposed to update [this page](https://www.elastic.co/guide/en/elasticsearch/reference/2.4/breaking-changes-2.4.html)? \r\nI'm getting a 404 for that URL. However 2.3 breaking changes URL works fine: https://www.elastic.co/guide/en/elasticsearch/reference/2.4/breaking-changes-2.3.html\r\n\r\nThanks!", "created_at": "2016-12-05T22:49:04Z" }, { "body": "Thanks @qwerty4030 - that page wasn't being included in the docs. 
Should be fixed now: https://www.elastic.co/guide/en/elasticsearch/reference/2.4/breaking-changes-2.4.html", "created_at": "2016-12-07T15:48:51Z" }, { "body": "@clintongormley that page still appears to be out of date, FYI. Looks like it hasn't been updated since https://github.com/elastic/elasticsearch/commit/537f4e1932b4d516cec8dcab1c942d3f31177dac.", "created_at": "2016-12-07T18:12:26Z" }, { "body": "@marshall007 it's here: https://www.elastic.co/guide/en/elasticsearch/reference/2.4/_index_templates.html", "created_at": "2016-12-12T15:03:07Z" }, { "body": "Closing in favour of #21105", "created_at": "2017-01-12T10:32:51Z" } ], "number": 20479, "title": "Custom analyzer filter scope " }
{ "body": "Introduce skip validation param for index template API.\nUsers can skip validation logic if they set `validate` to `false`.\n\nAdd flag and docs\n\nRelated #20479\n", "number": 20990, "review_comments": [ { "body": "can you throw an exception in a else block?\n", "created_at": "2016-10-21T12:41:35Z" }, { "body": "Sure. And pushed\n", "created_at": "2016-10-21T14:44:35Z" } ], "title": "Introduce skip validation flag for index template API" }
{ "commits": [ { "message": "Introduce skip validation flag for index template API\n\n Add flag and docs\n\n Related #20479" }, { "message": "Introduce skip validation flag for index template API\n\n Add else block\n\n Related #20479" }, { "message": "Introduce skip validation flag for index template API\n\nFix version issue\n\nRelated #20479" } ], "files": [ { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.action.support.master.MasterNodeRequest;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.collect.MapBuilder;\n@@ -80,6 +81,10 @@ public class PutIndexTemplateRequest extends MasterNodeRequest<PutIndexTemplateR\n \n private Integer version;\n \n+ private boolean validation = DEFAULT_VALIDATION;\n+\n+ public static final boolean DEFAULT_VALIDATION = true;\n+\n public PutIndexTemplateRequest() {\n }\n \n@@ -316,6 +321,18 @@ public PutIndexTemplateRequest source(XContentBuilder templateBuilder) {\n }\n }\n \n+ /**\n+ * Set to <tt>true</tt> to validate\n+ */\n+ public PutIndexTemplateRequest validation(boolean validation) {\n+ this.validation = validation;\n+ return this;\n+ }\n+\n+ public boolean validation() {\n+ return validation;\n+ }\n+\n /**\n * The template source definition.\n */\n@@ -350,6 +367,14 @@ public PutIndexTemplateRequest source(Map templateSource) {\n }\n } else if (name.equals(\"aliases\")) {\n aliases((Map<String, Object>) entry.getValue());\n+ } else if (name.equals(\"validate\")) {\n+ if (entry.getValue() instanceof Boolean) {\n+ validation((Boolean)entry.getValue());\n+ } else if (entry.getValue() instanceof String) {\n+ validation(Booleans.parseBoolean((String)entry.getValue(), DEFAULT_VALIDATION));\n+ } else {\n+ throw new IllegalArgumentException(\"Malformed [validate] value, should be a boolean or a string\");\n+ }\n } else {\n // maybe custom?\n IndexMetaData.Custom proto = IndexMetaData.lookupPrototype(name);\n@@ -539,6 +564,9 @@ public void readFrom(StreamInput in) throws IOException {\n aliases.add(Alias.read(in));\n }\n version = in.readOptionalVInt();\n+ if (in.getVersion().onOrAfter(Version.V_5_4_0_UNRELEASED)) {\n+ validation = in.readBoolean();\n+ }\n }\n \n @Override\n@@ -565,5 +593,9 @@ public void writeTo(StreamOutput out) throws IOException {\n alias.writeTo(out);\n }\n out.writeOptionalVInt(version);\n+ if (out.getVersion().onOrAfter(Version.V_5_4_0_UNRELEASED)) {\n+ out.writeBoolean(validation);\n+ }\n+\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequest.java", "status": "modified" }, { "diff": "@@ -303,4 +303,12 @@ public PutIndexTemplateRequestBuilder setSource(byte[] templateSource, int offse\n request.source(templateSource, offset, length, xContentType);\n return this;\n }\n+\n+ /**\n+ * Set to <tt>true</tt> to validate\n+ */\n+ public PutIndexTemplateRequestBuilder setValidation(boolean validation) {\n+ request.validation(validation);\n+ return this;\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequestBuilder.java", "status": "modified" }, { "diff": "@@ -86,6 +86,7 @@ protected void masterOperation(final PutIndexTemplateRequest request, final Clus\n .aliases(request.aliases())\n .customs(request.customs())\n .create(request.create())\n+ 
.validation(request.validation())\n .masterTimeout(request.masterNodeTimeout())\n .version(request.version()),\n ", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java", "status": "modified" }, { "diff": "@@ -231,7 +231,9 @@ private static void validateAndAddTemplate(final PutRequest request, IndexTempla\n mappingsForValidation.put(entry.getKey(), MapperService.parseMapping(xContentRegistry, entry.getValue()));\n }\n \n- dummyIndexService.mapperService().merge(mappingsForValidation, MergeReason.MAPPING_UPDATE, false);\n+ if (request.validation){\n+ dummyIndexService.mapperService().merge(mappingsForValidation, MergeReason.MAPPING_UPDATE, false);\n+ }\n \n } finally {\n if (createdIndex != null) {\n@@ -316,6 +318,7 @@ public static class PutRequest {\n Map<String, String> mappings = new HashMap<>();\n List<Alias> aliases = new ArrayList<>();\n Map<String, IndexMetaData.Custom> customs = new HashMap<>();\n+ boolean validation;\n \n TimeValue masterTimeout = MasterNodeRequest.DEFAULT_MASTER_NODE_TIMEOUT;\n \n@@ -373,6 +376,11 @@ public PutRequest version(Integer version) {\n this.version = version;\n return this;\n }\n+\n+ public PutRequest validation(boolean validation) {\n+ this.validation = validation;\n+ return this;\n+ }\n }\n \n public static class PutResponse {", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java", "status": "modified" }, { "diff": "@@ -100,6 +100,7 @@ public void testIndexTemplateWithValidateEmptyMapping() throws Exception {\n PutRequest request = new PutRequest(\"api\", \"validate_template\");\n request.template(\"validate_template\");\n request.putMapping(\"type1\", \"{}\");\n+ request.validation(true);\n \n List<Throwable> errors = putTemplateDetail(request);\n assertThat(errors.size(), equalTo(1));\n@@ -113,17 +114,43 @@ public void testIndexTemplateWithValidateMapping() throws Exception {\n request.putMapping(\"type1\", XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n .startObject(\"field2\").field(\"type\", \"string\").field(\"analyzer\", \"custom_1\").endObject()\n .endObject().endObject().endObject().string());\n+ request.validation(true);\n \n List<Throwable> errors = putTemplateDetail(request);\n assertThat(errors.size(), equalTo(1));\n assertThat(errors.get(0), instanceOf(MapperParsingException.class));\n assertThat(errors.get(0).getMessage(), containsString(\"analyzer [custom_1] not found for field [field2]\"));\n }\n \n+ public void testIndexTemplateWrongMappingsWithNoValidation() throws Exception {\n+ PutRequest request = new PutRequest(\"api\", \"validate_template\");\n+ request.template(\"te*\");\n+ request.putMapping(\"type1\", XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n+ .startObject(\"field2\").field(\"type\", \"string\").field(\"analyzer\", \"custom_1\").endObject()\n+ .endObject().endObject().endObject().string());\n+ request.validation(false);\n+\n+ List<Throwable> errors = putTemplateDetail(request);\n+ assertThat(errors.size(), equalTo(0));\n+ }\n+\n public void testBrokenMapping() throws Exception {\n PutRequest request = new PutRequest(\"api\", \"broken_mapping\");\n request.template(\"te*\");\n request.putMapping(\"type1\", \"abcde\");\n+ request.validation(false);\n+\n+ List<Throwable> errors = putTemplateDetail(request);\n+ assertThat(errors.size(), equalTo(1));\n+ assertThat(errors.get(0), 
instanceOf(MapperParsingException.class));\n+ assertThat(errors.get(0).getMessage(), containsString(\"Failed to parse mapping \"));\n+ }\n+\n+ public void testBrokenMappingWithVlidation() throws Exception {\n+ PutRequest request = new PutRequest(\"api\", \"broken_mapping\");\n+ request.template(\"te*\");\n+ request.putMapping(\"type1\", \"abcde\");\n+ request.validation(true);\n \n List<Throwable> errors = putTemplateDetail(request);\n assertThat(errors.size(), equalTo(1));\n@@ -135,6 +162,17 @@ public void testBlankMapping() throws Exception {\n PutRequest request = new PutRequest(\"api\", \"blank_mapping\");\n request.template(\"te*\");\n request.putMapping(\"type1\", \"{}\");\n+ request.validation(false);\n+\n+ List<Throwable> errors = putTemplateDetail(request);\n+ assertThat(errors.size(), equalTo(0));\n+ }\n+\n+ public void testBlankMappingWithValidation() throws Exception {\n+ PutRequest request = new PutRequest(\"api\", \"blank_mapping\");\n+ request.template(\"te*\");\n+ request.putMapping(\"type1\", \"{}\");\n+ request.validation(true);\n \n List<Throwable> errors = putTemplateDetail(request);\n assertThat(errors.size(), equalTo(1));", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/template/put/MetaDataIndexTemplateServiceTests.java", "status": "modified" }, { "diff": "@@ -725,6 +725,48 @@ public void testCombineTemplates() throws Exception{\n \n }\n \n+ public void testCombineTemplatesWithoutValidation() throws Exception{\n+ // clean all templates setup by the framework.\n+ client().admin().indices().prepareDeleteTemplate(\"*\").get();\n+\n+ // check get all templates on an empty index.\n+ GetIndexTemplatesResponse response = client().admin().indices().prepareGetTemplates().get();\n+ assertThat(response.getIndexTemplates(), empty());\n+\n+ //Now, a complete mapping with two separated templates is error\n+ // base template\n+ client().admin().indices().preparePutTemplate(\"template_1\")\n+ .setTemplate(\"*\")\n+ .setSettings(\n+ \" {\\n\" +\n+ \" \\\"index\\\" : {\\n\" +\n+ \" \\\"analysis\\\" : {\\n\" +\n+ \" \\\"analyzer\\\" : {\\n\" +\n+ \" \\\"custom_1\\\" : {\\n\" +\n+ \" \\\"tokenizer\\\" : \\\"whitespace\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\")\n+ .get();\n+\n+ // put template using custom_1 analyzer\n+ client().admin().indices().preparePutTemplate(\"template_2\")\n+ .setTemplate(\"test*\")\n+ .setCreate(true)\n+ .setOrder(1)\n+ .setValidation(false)\n+ .addMapping(\"type1\", XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n+ .startObject(\"field2\").field(\"type\", \"string\").field(\"analyzer\", \"custom_1\").endObject()\n+ .endObject().endObject().endObject())\n+ .get();\n+\n+ response = client().admin().indices().prepareGetTemplates().get();\n+ assertThat(response.getIndexTemplates(), hasSize(2));\n+\n+ }\n+\n public void testOrderAndVersion() {\n int order = randomInt();\n Integer version = randomBoolean() ? randomInt() : null;", "filename": "core/src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateIT.java", "status": "modified" }, { "diff": "@@ -75,6 +75,10 @@ PUT _template/template_1\n <1> the `{index}` placeholder in the alias name will be replaced with the\n actual index name that the template gets applied to, during index creation.\n \n+NOTE: After 2.4, Elasticsearch validates each template as a complete mapping and settings.\n+If you have some incomplete template, e.g. 
analyzer settings in a template and mappings in another template,\n+you can set `validate` to `false` and skip validation that is `true` by default.\n+\n [float]\n [[delete]]\n === Deleting a Template", "filename": "docs/reference/indices/templates.asciidoc", "status": "modified" } ] }
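The integration test in the diff above already exercises the new `setValidation(false)` builder method. As a standalone sketch (assuming a connected `Client`; the template name, pattern, and `custom_1` analyzer are illustrative), a template that references an analyzer defined in another, lower-order template can now be stored without tripping per-template validation:

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.xcontent.XContentFactory;

public class SkipTemplateValidationExample {
    public static void putTemplate(Client client) throws Exception {
        client.admin().indices().preparePutTemplate("analyzer-en")
            .setTemplate("*-en")
            .setOrder(30)
            // custom_1 is expected to come from a lower-order template,
            // so skip the standalone validation of this template.
            .setValidation(false)
            .addMapping("type1", XContentFactory.jsonBuilder().startObject()
                .startObject("type1").startObject("properties")
                    .startObject("field").field("type", "string").field("analyzer", "custom_1").endObject()
                .endObject().endObject().endObject())
            .get();
    }
}
```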
{ "body": "**Elasticsearch version**: 5.0.0-rc1\n\n**Plugins installed**: []\n\n**JVM version**: Java(TM) SE Runtime Environment (build 1.8.0_101-b13)\n\n**OS version**: Ubuntu 16.04 x64\n\n**Description of the problem including expected versus actual behavior**:\n\nWhen setting up a simple monitoring solution for a test instance of elasticsearch 5.0.0 with Nagios, I observed that the Nagios check program `check_http` fails with a socket timeout error:\n\n```\nuser@host:/usr/lib/nagios/plugins# ./check_http -H elasticsearch -p 9200\nCRITICAL - Socket timeout after 10 seconds\n```\n\nThis was a bit strange since it works ok when running on older versions of elasticsearch (tested with 2.3.3).\n\nInvestigated a bit further to check what the `check_http` actually does and tried to replicate the behaviour with netcat revealed a difference between the older elasticsearch installation and the 5.0.0 installation where elasticsearch previously closes a HTTP connection if the request header contains `Connection: Close` where 5.0.0 does not.\n\nThe workaround for getting the Nagios check program to work is to use the `-N` flag that doesn't wait for the document body but simply stops reading after the headers.\n\n**Steps to reproduce**:\n1. Make an API request to elasticsearch with netcat:\n \n ```\n user@host:/# ncat elasticsearchhost 9200\n GET / HTTP/1.1\n User-Agent: check_http/v2.1.2 (monitoring-plugins 2.1.2)\n Connection: Close\n Host: elasticsearchhost\n ```\n2. Press enter to view the response\n \n ```\n HTTP/1.1 200 OK\n content-type: application/json; charset=UTF-8\n content-length: 351\n \n {\n \"name\" : \"j0-yZSR\",\n \"cluster_name\" : \"logs\",\n \"cluster_uuid\" : \"kozqvcWRSxuyRUABAKcINQ\",\n \"version\" : {\n \"number\" : \"5.0.0-rc1\",\n \"build_hash\" : \"13e62e1\",\n \"build_date\" : \"2016-10-07T16:52:58.798Z\",\n \"build_snapshot\" : false,\n \"lucene_version\" : \"6.2.0\"\n },\n \"tagline\" : \"You Know, for Search\"\n }\n ```\n3. Press enter twice after receiving the response. The server should close the connection but it doesn't. When trying the same procedure on 2.3.3 it closes the connection correctly.\n\nAlso, if changing the header when testing on 2.3.3 to not contain the `Connection: Close` directive, 2.3.3 correctly changes the behavior to keep-alive and will not close the connection.\n", "comments": [ { "body": "I wonder if this is a change that came with netty 4....\n", "created_at": "2016-10-14T17:12:16Z" }, { "body": "Thanks for the report. This morning I verified its validity and have a fix. I will open a pull request soon.\n\nI've marked you as eligible for the [Pioneer Program](https://www.elastic.co/blog/elastic-pioneer-program).\n", "created_at": "2016-10-15T13:47:39Z" }, { "body": "Great, thanks so much!\n", "created_at": "2016-10-15T14:31:50Z" }, { "body": "I opened #20956.\n", "created_at": "2016-10-16T15:23:09Z" } ], "number": 20938, "title": "HTTP module does not regard Connection: Close in v5.0.0" }
{ "body": "This commit fixes an issue with the handling of the value \"close\" on the\nConnection header in the Netty 4 HTTP implementation. The issue was\nusing the wrong equals method to compare an AsciiString instance and a\nString instance (they could never be equal). This commit fixes this to\nuse the correct equals method to compare for content equality.\n\nCloses #20938\n", "number": 20956, "review_comments": [], "title": "Fix connection close header handling" }
{ "commits": [ { "message": "Fix connection close header handling\n\nThis commit fixes an issue with the handling of the value \"close\" on the\nConnection header in the Netty 4 HTTP implementation. The issue was\nusing the wrong equals method to compare an AsciiString instance and a\nString instance (they could never be equal). This commit fixes this to\nuse the correct equals method to compare for content equality." } ], "files": [ { "diff": "@@ -185,7 +185,7 @@ private boolean isHttp10() {\n // Determine if the request connection should be closed on completion.\n private boolean isCloseConnection() {\n final boolean http10 = isHttp10();\n- return HttpHeaderValues.CLOSE.equals(nettyRequest.headers().get(HttpHeaderNames.CONNECTION)) ||\n+ return HttpHeaderValues.CLOSE.contentEqualsIgnoreCase(nettyRequest.headers().get(HttpHeaderNames.CONNECTION)) ||\n (http10 && HttpHeaderValues.KEEP_ALIVE.equals(nettyRequest.headers().get(HttpHeaderNames.CONNECTION)) == false);\n }\n ", "filename": "modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpChannel.java", "status": "modified" }, { "diff": "@@ -30,10 +30,12 @@\n import io.netty.channel.ChannelProgressivePromise;\n import io.netty.channel.ChannelPromise;\n import io.netty.channel.EventLoop;\n+import io.netty.channel.embedded.EmbeddedChannel;\n import io.netty.handler.codec.http.DefaultFullHttpRequest;\n import io.netty.handler.codec.http.FullHttpRequest;\n import io.netty.handler.codec.http.FullHttpResponse;\n import io.netty.handler.codec.http.HttpHeaderNames;\n+import io.netty.handler.codec.http.HttpHeaderValues;\n import io.netty.handler.codec.http.HttpMethod;\n import io.netty.handler.codec.http.HttpResponse;\n import io.netty.handler.codec.http.HttpVersion;\n@@ -212,6 +214,26 @@ public void testHeadersSet() {\n }\n }\n \n+ public void testConnectionClose() throws Exception {\n+ final Settings settings = Settings.builder().build();\n+ try (Netty4HttpServerTransport httpServerTransport =\n+ new Netty4HttpServerTransport(settings, networkService, bigArrays, threadPool)) {\n+ httpServerTransport.start();\n+ final FullHttpRequest httpRequest = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, \"/\");\n+ httpRequest.headers().add(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);\n+ final EmbeddedChannel embeddedChannel = new EmbeddedChannel();\n+ final Netty4HttpRequest request = new Netty4HttpRequest(httpRequest, embeddedChannel);\n+\n+ // send a response, the channel should close\n+ assertTrue(embeddedChannel.isOpen());\n+ final Netty4HttpChannel channel =\n+ new Netty4HttpChannel(httpServerTransport, request, null, randomBoolean(), threadPool.getThreadContext());\n+ final TestResponse resp = new TestResponse();\n+ channel.sendResponse(resp);\n+ assertFalse(embeddedChannel.isOpen());\n+ }\n+ }\n+\n private FullHttpResponse executeRequest(final Settings settings, final String host) {\n return executeRequest(settings, null, host);\n }", "filename": "modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpChannelTests.java", "status": "modified" } ] }
{ "body": "I was looking through the indices on one of our hosts and saw some indices that started with - (dash), eg \"-2016.04.15\". I'm not sure why it was there - but not to critical, elasticsearch lets you do it.\n\n```\nPOST -2016.12.12/test\n{\n \"name\":\"abc\"\n}\n```\n\nI tried to delete these indices by issuing\n `DELETE -2016.*`\n\nThe problem is that this was interpreted as \n`DELETE everything except for indices starting with 2016.`\nwhich basically means delete the entire database - and after a few poignant seconds, that's what it did. \n\nI have since become acquainted with https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-index.html and the ability to include or exclude indices with the + or - operator, but it seems that this is more dangerous than useful, at least if you are unfortunate enough to have indices that start with -.\n\nI understand that it's a \"feature\", but it doesn't seems practically so useful. Perhaps there could be a special query string for DELETE like \"wildcard=inclusive\" or \"=exclusive\"... As it is now, I'm not even sure how I would delete the indices that start with \"-2016.\" I can't do \"+-2016.*\"\n", "comments": [ { "body": "So curiously, you can `DELETE -whole_index_name`. It's only if you specify the wildcard that the `+/-` behaviour kicks in. This in itself sounds like a bug.\n\nIn order to remove ambiguity, I think we should prevent index names starting with `+` or `-`.\n\nRelated https://github.com/elastic/elasticsearch/issues/9059\n", "created_at": "2016-08-04T12:47:21Z" }, { "body": "yes.. that would be fine.\n", "created_at": "2016-08-04T13:33:39Z" }, { "body": "Discussed in Fix It Friday and we agreed that we should fix the bug that the `+/-` behaviour does not work unless there is a wildcard, and also prevent index names starting with a `+` or `-`\n", "created_at": "2016-08-05T09:49:54Z" }, { "body": "@colings86 \nI'd like to pull request for this issue. but I have some questions to ask:\n1. what do u mean by \"the +/- behaviour does not work unless there is a wildcard\", let's say I had only one index \"twitter\", and I delete with name \"+twitter\", what will happen? take as index not found or delete twitter?\n", "created_at": "2016-08-09T09:04:26Z" }, { "body": "@colings86 \nalso is there anywhere ES define \"wildcard\"?\n", "created_at": "2016-08-09T09:08:43Z" }, { "body": "FWIW - the case of using an exclusion in the index name in the docs was together with an inclusion - `+test*,-test3` I question the usefulness of an exclusion by itself. How often do you really want to do `DELETE -test-*` and doing a `+test*` is not needed, because it's inherently \"+\". You would just do `DELETE test*`\n\nI would offer that because the implications of someone misunderstanding and and its questionable need, perhaps you should consider putting it in a query string option, then it's easier that it's more intentional. eg to do the command that would now be `DELETE -test-*`, instead do a `DELETE *?exclude=test-*`. Then it's much more obvious what you're doing but you still have the same power. \n\nThe truth is that this is really mainly a problem with DELETE, perhaps these changes should just be made here.\n", "created_at": "2016-08-09T09:52:55Z" }, { "body": "note - some of this discussion is still relevant even if you remove dashes from the start of queries. 
DELETE -test\\* is still just as dangerous, and it might not be obvious to some users what would happen.\n", "created_at": "2016-08-09T09:57:18Z" } ], "number": 19800, "title": "indices starting with - (dash) cause problems if used with wildcards" }
{ "body": "There is currently a very confusing behavior in Elasticsearch for the\nfollowing:\n\nGiven the indices: `[test1, test2, -foo1, -foo2]`\n\n```\nDELETE /-foo*\n```\n\nWill cause the `test1` and `test2` indices to be deleted, when what is\nusually intended is to delete the `-foo1` and `-foo2` indices.\n\nPreviously we added a change in #20033 to disallow creating indices\nstarting with `-` or `+`, which will help with this situation. However,\nusers may have existing indices starting with these characters.\n\nThis changes the negation to only take effect in a wildcard (`*`) has\nbeen seen somewhere in the expression, so in order to delete `-foo1` and\n`-foo2` the following now works:\n\n```\nDELETE /-foo*\n```\n\nAs well as:\n\n```\nDELETE /-foo1,-foo2\n```\n\nso in order to actually delete everything except for the \"foo\" indices\n(ie, `test1` and `test2`) a user would now issue:\n\n```\nDELETE /*,--foo*\n```\n\nRelates to #19800\n", "number": 20898, "review_comments": [ { "body": "what happens with `+test1, +test2, +test3,-test2` ? it looks silly but should be supported and resolve to test1 and test3 only. I think not only a wildcard expression can be before a negation, but there has to be something, anything that is not a negation?\n", "created_at": "2016-10-14T16:53:59Z" }, { "body": "At this point I wonder if the order should still matter as it used to. Maybe negations should just subtract from the rest of the indices mentioned in the expression, with the requirement that an expression cannot be a negation only, otherwise we interpret it as an explicit index name that contains `-`. Should `-test1,test*` resolve to the same indices as `test*,-test1`? What other edge cases am I missing here? I also wonder why we support `+` as it is implicit anyway when missing.\n", "created_at": "2016-10-14T17:00:48Z" }, { "body": "> what happens with `+test1, +test2, +test3,-test2`\n\nThat would resolve to: `[\"test1\", \"test2\", \"test3\", \"-test2\"]`, assuming a `-test2` index exists, which I think is the correct behavior?\n\nIf `-test2` doesn't exist then it won't be negated and in that case I'm not sure what we want to do, do you think we should support this (adding and then removing an index without wildcards)?\n", "created_at": "2016-10-14T17:18:35Z" }, { "body": "> `+` as it is implicit anyway when missing\n\n+1 to remove `+` as an operator in a followup for 6.0-only, it's just confusing\n", "created_at": "2016-10-14T17:36:38Z" }, { "body": "> That would resolve to: [\"test1\", \"test2\", \"test3\", \"-test2\"], assuming a -test2 index exists, which I think is the correct behavior?\n\nI see, yea that's ok, just different compared to before (we would previously add and remove `test2`, but there was no way to refer to `-test2` if existing I believe), but correct. Do we have a test for this too?\n\nI think documenting all this under the api conventions page would clear things up.\n", "created_at": "2016-10-14T19:30:09Z" }, { "body": "How about the ordering thing that I mentioned before? Should `-test1,test*` resolve to the same indices as `test*,-test1`? It doesn't now right?\n", "created_at": "2016-10-14T19:31:46Z" }, { "body": "> Should `-test1,test*` resolve to the same indices as `test*,-test1`? It doesn't now right?\n\nIt doesn't right now, I think it would depend on if a `-test1` index existed.\n\nI don't think it should though, since the intended behavior would be different regardless if you swapped the ordering of negations? 
What do you think?\n\nMaybe instead we should start with a list of indices and work backwards towards what we expect for certain combinations?\n", "created_at": "2016-10-16T03:01:11Z" }, { "body": "I'm afraid we need to rely on the order if we want to be able to distinguish between negations (applied when a wildcard expression appears before the negation) and referring to indices that start with `-`. We will be able to get rid of it in 6.0 only when we will be sure such indices are not around anymore. I opened #20962.\n\nCan we also have a test where the wildcard expression is not the first expression but still before the negation? e.g. `test1,test2,index*,-index1`\n", "created_at": "2016-10-17T08:09:17Z" }, { "body": "Also `-test1,*test2*,-test20` or something along those lines? :)\n", "created_at": "2016-10-17T08:25:24Z" }, { "body": "Sure, I've added both of those tests :)\n", "created_at": "2016-10-17T15:43:30Z" }, { "body": "do you understand this if block? I suspect it made sense before your change, to make it possible to refer to existing indices or aliases that started with `-` rather than treating the name as a negation. That said, I can't quite follow why it makes sense in some cases for `result` to be `null`. This was already there, not your doing but I wonder if it's related and may be cleaned up.\n", "created_at": "2016-10-17T20:47:30Z" }, { "body": "can we use the existing `Regex.isSimpleMatchPattern` instead?\n", "created_at": "2016-10-17T20:49:41Z" }, { "body": "I wonder if we should set this to true only if options.expandWildcardsOpen || options.expandWildcardsClosed . Otherwise wildcard expressions don't get expanded...\n", "created_at": "2016-10-17T20:51:20Z" }, { "body": "I don't know why it doesn't initialize the `result` set like the other null checks do, but I do think _all_ of this code should be cleaned up after this (I hesitate to touch more code as this is supposed to be backported all the way back to 2.4)\n", "created_at": "2016-10-17T21:05:18Z" }, { "body": "Certainly, pushed something that does that\n", "created_at": "2016-10-17T21:10:59Z" }, { "body": "I agree, I've changed the `if` to check these\n", "created_at": "2016-10-17T21:11:02Z" }, { "body": "add some tests around checking the options?\n", "created_at": "2016-10-17T21:44:37Z" }, { "body": "I actually reverted the change to add the check, because if wildcards are both disabled then this check is already handled at a higher level, in `resolve` instead of `innerResolve`:\n\nhttps://github.com/elastic/elasticsearch/blob/e57720e09170208e8059c39d43b848024d79ed9d/core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java#L559-L561\n", "created_at": "2016-10-17T22:46:45Z" } ], "title": "Only negate index expression on all indices with preceding wildcard" }
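A hedged usage sketch of the resolution rules described in the PR body, via the Java admin client (which accepts the same index expressions as the REST URL). It assumes a connected `Client` and the example indices `test1`, `test2`, `-foo1`, and `-foo2`.

```java
import org.elasticsearch.client.Client;

public class DeleteWithNegationExample {
    // No wildcard precedes the leading '-', so these are treated as literal
    // index names and only -foo1 and -foo2 are deleted.
    public static void deleteDashIndices(Client client) {
        client.admin().indices().prepareDelete("-foo1", "-foo2").get();
    }

    // A wildcard appears first, so "--foo*" acts as a real exclusion:
    // test1 and test2 are deleted while -foo1 and -foo2 are kept.
    public static void deleteEverythingExceptDashFoo(Client client) {
        client.admin().indices().prepareDelete("*", "--foo*").get();
    }
}
```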
{ "commits": [ { "message": "Only negate index expression on all indices with preceding wildcard\n\nThere is currently a very confusing behavior in Elasticsearch for the\nfollowing:\n\nGiven the indices: `[test1, test2, -foo1, -foo2]`\n\n```\nDELETE /-foo*\n```\n\nWill cause the `test1` and `test2` indices to be deleted, when what is\nusually intended is to delete the `-foo1` and `-foo2` indices.\n\nPreviously we added a change in #20033 to disallow creating indices\nstarting with `-` or `+`, which will help with this situation. However,\nusers may have existing indices starting with these characters.\n\nThis changes the negation to only take effect in a wildcard (`*`) has\nbeen seen somewhere in the expression, so in order to delete `-foo1` and\n`-foo2` the following now works:\n\n```\nDELETE /-foo*\n```\n\nAs well as:\n\n```\nDELETE /-foo1,-foo2\n```\n\nso in order to actually delete everything except for the \"foo\" indices\n(ie, `test1` and `test2`) a user would now issue:\n\n```\nDELETE /*,--foo*\n```\n\nRelates to #19800" }, { "message": "Fix WildcardExpressionResolverTests" }, { "message": "Assert that non-existent negated indices are correctly non-expanded" }, { "message": "Add more tests for corner cases" }, { "message": "Add additional test cases" }, { "message": "Add more additional tests" }, { "message": "Check expression with Regex.isSimpleMatchPattern and only check when expand=true" }, { "message": "No need to check for expandWildcardsOpen and expandWildcardsClosed" } ], "files": [ { "diff": "@@ -579,6 +579,7 @@ public List<String> resolve(Context context, List<String> expressions) {\n \n private Set<String> innerResolve(Context context, List<String> expressions, IndicesOptions options, MetaData metaData) {\n Set<String> result = null;\n+ boolean wildcardSeen = false;\n for (int i = 0; i < expressions.size(); i++) {\n String expression = expressions.get(i);\n if (aliasOrIndexExists(metaData, expression)) {\n@@ -598,13 +599,14 @@ private Set<String> innerResolve(Context context, List<String> expressions, Indi\n }\n expression = expression.substring(1);\n } else if (expression.charAt(0) == '-') {\n- // if its the first, fill it with all the indices...\n- if (i == 0) {\n- List<String> concreteIndices = resolveEmptyOrTrivialWildcard(options, metaData, false);\n- result = new HashSet<>(concreteIndices);\n+ // if there is a negation without a wildcard being previously seen, add it verbatim,\n+ // otherwise return the expression\n+ if (wildcardSeen) {\n+ add = false;\n+ expression = expression.substring(1);\n+ } else {\n+ add = true;\n }\n- add = false;\n- expression = expression.substring(1);\n }\n if (result == null) {\n // add all the previous ones...\n@@ -634,6 +636,10 @@ private Set<String> innerResolve(Context context, List<String> expressions, Indi\n if (!noIndicesAllowedOrMatches(options, matches)) {\n throw infe(expression);\n }\n+\n+ if (Regex.isSimpleMatchPattern(expression)) {\n+ wildcardSeen = true;\n+ }\n }\n return result;\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java", "status": "modified" }, { "diff": "@@ -305,7 +305,7 @@ public void testIndexOptionsWildcardExpansion() {\n assertEquals(1, results.length);\n assertEquals(\"bar\", results[0]);\n \n- results = indexNameExpressionResolver.concreteIndexNames(context, \"-foo*\");\n+ results = indexNameExpressionResolver.concreteIndexNames(context, \"*\", \"-foo*\");\n assertEquals(1, results.length);\n assertEquals(\"bar\", results[0]);\n \n@@ -585,6 +585,64 @@ 
public void testConcreteIndicesWildcardExpansion() {\n assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, \"testX*\")), equalTo(newHashSet(\"testXXX\", \"testXXY\", \"testXYY\")));\n }\n \n+ public void testConcreteIndicesWildcardWithNegation() {\n+ MetaData.Builder mdBuilder = MetaData.builder()\n+ .put(indexBuilder(\"testXXX\").state(State.OPEN))\n+ .put(indexBuilder(\"testXXY\").state(State.OPEN))\n+ .put(indexBuilder(\"testXYY\").state(State.OPEN))\n+ .put(indexBuilder(\"-testXYZ\").state(State.OPEN))\n+ .put(indexBuilder(\"-testXZZ\").state(State.OPEN))\n+ .put(indexBuilder(\"-testYYY\").state(State.OPEN))\n+ .put(indexBuilder(\"testYYY\").state(State.OPEN))\n+ .put(indexBuilder(\"testYYX\").state(State.OPEN));\n+ ClusterState state = ClusterState.builder(new ClusterName(\"_name\")).metaData(mdBuilder).build();\n+\n+ IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state,\n+ IndicesOptions.fromOptions(true, true, true, true));\n+ assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, \"testX*\")),\n+ equalTo(newHashSet(\"testXXX\", \"testXXY\", \"testXYY\")));\n+\n+ assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, \"test*\", \"-testX*\")),\n+ equalTo(newHashSet(\"testYYY\", \"testYYX\")));\n+\n+ assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, \"-testX*\")),\n+ equalTo(newHashSet(\"-testXYZ\", \"-testXZZ\")));\n+\n+ assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, \"testXXY\", \"-testX*\")),\n+ equalTo(newHashSet(\"testXXY\", \"-testXYZ\", \"-testXZZ\")));\n+\n+ assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, \"*\", \"--testX*\")),\n+ equalTo(newHashSet(\"testXXX\", \"testXXY\", \"testXYY\", \"testYYX\", \"testYYY\", \"-testYYY\")));\n+\n+ assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, \"-testXXX\", \"test*\")),\n+ equalTo(newHashSet(\"testYYX\", \"testXXX\", \"testXYY\", \"testYYY\", \"testXXY\")));\n+\n+ assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, \"test*\", \"-testXXX\")),\n+ equalTo(newHashSet(\"testYYX\", \"testXYY\", \"testYYY\", \"testXXY\")));\n+\n+ assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, \"+testXXX\", \"+testXXY\", \"+testYYY\", \"-testYYY\")),\n+ equalTo(newHashSet(\"testXXX\", \"testXXY\", \"testYYY\", \"-testYYY\")));\n+\n+ assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, \"testYYY\", \"testYYX\", \"testX*\", \"-testXXX\")),\n+ equalTo(newHashSet(\"testYYY\", \"testYYX\", \"testXXY\", \"testXYY\")));\n+\n+ assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, \"-testXXX\", \"*testY*\", \"-testYYY\")),\n+ equalTo(newHashSet(\"testYYX\", \"testYYY\", \"-testYYY\")));\n+\n+ String[] indexNames = indexNameExpressionResolver.concreteIndexNames(state, IndicesOptions.lenientExpandOpen(), \"-doesnotexist\");\n+ assertEquals(0, indexNames.length);\n+\n+ assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(state, IndicesOptions.lenientExpandOpen(), \"-*\")),\n+ equalTo(newHashSet(\"-testXYZ\", \"-testXZZ\", \"-testYYY\")));\n+\n+ assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(state, IndicesOptions.lenientExpandOpen(),\n+ \"+testXXX\", \"+testXXY\", \"+testXYY\", \"-testXXY\")),\n+ equalTo(newHashSet(\"testXXX\", \"testXYY\", \"testXXY\")));\n+\n+ indexNames = 
indexNameExpressionResolver.concreteIndexNames(state, IndicesOptions.lenientExpandOpen(), \"*\", \"-*\");\n+ assertEquals(0, indexNames.length);\n+ }\n+\n /**\n * test resolving _all pattern (null, empty array or \"_all\") for random IndicesOptions\n */", "filename": "core/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverTests.java", "status": "modified" }, { "diff": "@@ -50,9 +50,9 @@ public void testConvertWildcardsJustIndicesTests() {\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"*\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\", \"testYYY\", \"kuku\")));\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"*\", \"-kuku\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\", \"testYYY\")));\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"testXXX\", \"+testYYY\"))), equalTo(newHashSet(\"testXXX\", \"testYYY\")));\n- assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"testXXX\", \"-testXXX\"))).size(), equalTo(0));\n+ assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"testXXX\", \"-testXXX\"))), equalTo(newHashSet(\"testXXX\", \"-testXXX\")));\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"testXXX\", \"+testY*\"))), equalTo(newHashSet(\"testXXX\", \"testYYY\")));\n- assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"testXXX\", \"-testX*\"))).size(), equalTo(0));\n+ assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"testXXX\", \"-testX*\"))), equalTo(newHashSet(\"testXXX\")));\n }\n \n public void testConvertWildcardsTests() {\n@@ -66,7 +66,7 @@ public void testConvertWildcardsTests() {\n \n IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"testYY*\", \"alias*\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\", \"testYYY\")));\n- assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"-kuku\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\", \"testYYY\")));\n+ assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"-kuku\"))), equalTo(newHashSet(\"-kuku\")));\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"+test*\", \"-testYYY\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\")));\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"+testX*\", \"+testYYY\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\", \"testYYY\")));\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"+testYYY\", \"+testX*\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\", \"testYYY\")));", "filename": "core/src/test/java/org/elasticsearch/cluster/metadata/WildcardExpressionResolverTests.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 2.3\n\nA multi_match query that is otherwise proper, but has an array of values for the \"query\" parameter will execute successfully, but only actually perform the search using the last value of the array.\n\nShould it return an error instead?\n\n**Steps to reproduce**:\n\n```\n# cleanup\nDELETE and_match\n# create index/mapping\nPOST and_match\n{\n \"settings\": {\n \"number_of_shards\": 1,\n \"number_of_replicas\": 0\n },\n \"mappings\": {\n \"type1\": {\n \"properties\": {\n \"tags\": {\n \"type\": \"string\"\n }\n }\n }\n }\n}\n# index\nPOST and_match/_bulk\n{ \"index\" : { \"_type\" : \"type1\"} }\n{ \"tags\" : \"machine\"}\n{ \"index\" : { \"_type\" : \"type1\"} }\n{ \"tags\" : \"broken\"}\n{ \"index\" : { \"_type\" : \"type1\"} }\n{ \"tags\" : \"broken machine\"}\n# search\nPOST /and_match/_search\n{\n \"query\": {\n \"multi_match\": {\n \"query\": [\"broken\", \"machine\"], \n \"fields\": [\"tags\"]\n }\n }\n}\n```\n\nSearch results don't include the 2nd of the 3 documents.\n", "comments": [ { "body": "@GlenRSmith Can you try with the 5.0 beta? From glancing at the code (which was heavily refactored for 5.0) I believe it will now produce an error.\n", "created_at": "2016-10-06T19:54:20Z" }, { "body": "@rjernst Yep. In 5.0.0 beta1, the search produces an error.\n", "created_at": "2016-10-06T22:28:25Z" }, { "body": "Since this is already fixed in 5.0 I opened against 2.4 only. The fix is quiet straight forward, it should throw a QueryParsingException when encountering an array after any parameter other than the \"fields\" parameter.\n", "created_at": "2016-10-12T12:58:09Z" }, { "body": "Closed by 91f94e6 on 2.4 branch.\n", "created_at": "2016-10-12T16:53:14Z" } ], "number": 20785, "title": "multi_match accepts an array of values for \"query\"" }
{ "body": "Currently the \"multi_match\" query accepts an array of strings as it's \"query\" parameter but actually performs the search using only the last value of the array. Specifying an array here should result in a parsing error instead.\n\nCloses #20785\n", "number": 20884, "review_comments": [], "title": "Stricter parsing of multi_match \"query\" parameter" }
{ "commits": [ { "message": "Stricter parsing of multi_match \"query\" parameter\n\nCurrently the \"multi_match\" query accepts an array of strings as it's \"query\"\nparameter but actually performs the search using only the last value of the\narray. Specifying an array here should result in a parsing error instead.\n\nCloses #20785" } ], "files": [ { "diff": "@@ -137,8 +137,11 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n } else if (\"_name\".equals(currentFieldName)) {\n queryName = parser.text();\n } else {\n- throw new QueryParsingException(parseContext, \"[match] query does not support [\" + currentFieldName + \"]\");\n+ throw new QueryParsingException(parseContext, \"[\" + NAME + \"] query does not support [\" + currentFieldName + \"]\");\n }\n+ } else if (token == XContentParser.Token.START_ARRAY) {\n+ throw new QueryParsingException(parseContext,\n+ \"[\" + NAME + \"] query does not support arrays for the [\" + currentFieldName + \"] parameter\");\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/query/MultiMatchQueryParser.java", "status": "modified" }, { "diff": "@@ -38,7 +38,6 @@\n import org.junit.Test;\n \n import java.io.IOException;\n-import java.lang.reflect.Field;\n import java.util.ArrayList;\n import java.util.List;\n import java.util.Set;\n@@ -47,9 +46,28 @@\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n-import static org.elasticsearch.index.query.QueryBuilders.*;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n-import static org.hamcrest.Matchers.*;\n+import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.disMaxQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.matchPhrasePrefixQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.matchPhraseQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.multiMatchQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertFirstHit;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSecondHit;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.hasId;\n+import static org.hamcrest.Matchers.anyOf;\n+import static org.hamcrest.Matchers.closeTo;\n+import static org.hamcrest.Matchers.empty;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n+import static org.hamcrest.Matchers.lessThan;\n \n public class MultiMatchQueryIT extends ESIntegTestCase {\n \n@@ -627,6 +645,29 @@ public void testMultiMatchPrefixWithAllField() throws IOException {\n assertFirstHit(searchResponse, 
hasId(\"theone\"));\n }\n \n+\n+ /**\n+ * test that specifying an array for the \"query\" parameter throws an\n+ * execption (#20785)\n+ */\n+ @Test\n+ public void testErrorOnQueryArray() throws Exception {\n+ try {\n+ client().prepareSearch(\"test\")\n+ .setQuery(\"{\\n\" +\n+ \" \\\"multi_match\\\": {\\n\" +\n+ \" \\\"fields\\\": [\\\"tags\\\", \\\"tag2\\\"],\\n\" +\n+ \" \\\"query\\\": [\\\"broken\\\", \\\"machine\\\"]\\n\" +\n+ \" }\\n\" +\n+ \"}\")\n+ .get();\n+ fail(\"query is invalid and should have produced a parse exception\");\n+ } catch (Exception e) {\n+ assertTrue(\"query could not be parsed due to bad format: \" + e.toString(),\n+ e.toString().contains(\"[multi_match] query does not support arrays for the [query] parameter\"));\n+ }\n+ }\n+\n private static void assertEquivalent(String query, SearchResponse left, SearchResponse right) {\n assertNoFailures(left);\n assertNoFailures(right);", "filename": "core/src/test/java/org/elasticsearch/search/query/MultiMatchQueryIT.java", "status": "modified" } ] }
{ "body": "When one of the items of a multi get request refers to an alias that points to multiple indices, only that specific item should fail (as we don't know where to get the document from). Instead, the whole request fails at the moment.\n", "comments": [], "number": 20845, "title": "MultiGet fails entirely when using alias that points to multiple indices" }
{ "body": "MultiGet should not fail entirely when one of the items of a multi get request refers to an alias that points to multiple indices.\n\ncloses #20845\n", "number": 20858, "review_comments": [ { "body": "I wonder if this should be `Exception`. see also #20659\n", "created_at": "2016-10-11T13:49:15Z" }, { "body": "Sure - I did not see #20659 before creating this pull request :(\n", "created_at": "2016-10-11T13:57:29Z" } ], "title": "MultiGet should not fail entirely if alias resolves to many indices" }
{ "commits": [ { "message": "MultiGet should not fail entirely when alias resolves to multiple indices\n\nMultiGet should not fail entirely when one of the items of a multi get request refers to an alias that points to multiple indices.\n\ncloses #20845" }, { "message": "Update after Luca comment" } ], "files": [ { "diff": "@@ -145,7 +145,6 @@\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]get[/\\\\]GetRequest.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]get[/\\\\]MultiGetRequest.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]get[/\\\\]TransportGetAction.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]get[/\\\\]TransportMultiGetAction.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]get[/\\\\]TransportShardMultiGetAction.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]index[/\\\\]IndexRequest.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]index[/\\\\]IndexRequestBuilder.java\" checks=\"LineLength\" />", "filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml", "status": "modified" }, { "diff": "@@ -47,45 +47,56 @@ public class TransportMultiGetAction extends HandledTransportAction<MultiGetRequ\n @Inject\n public TransportMultiGetAction(Settings settings, ThreadPool threadPool, TransportService transportService,\n ClusterService clusterService, TransportShardMultiGetAction shardAction,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, MultiGetAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, MultiGetRequest::new);\n+ ActionFilters actionFilters, IndexNameExpressionResolver resolver) {\n+ super(settings, MultiGetAction.NAME, threadPool, transportService, actionFilters, resolver, MultiGetRequest::new);\n this.clusterService = clusterService;\n this.shardAction = shardAction;\n }\n \n @Override\n protected void doExecute(final MultiGetRequest request, final ActionListener<MultiGetResponse> listener) {\n ClusterState clusterState = clusterService.state();\n-\n clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.READ);\n \n final AtomicArray<MultiGetItemResponse> responses = new AtomicArray<>(request.items.size());\n+ final Map<ShardId, MultiGetShardRequest> shardRequests = new HashMap<>();\n \n- Map<ShardId, MultiGetShardRequest> shardRequests = new HashMap<>();\n for (int i = 0; i < request.items.size(); i++) {\n MultiGetRequest.Item item = request.items.get(i);\n+\n if (!clusterState.metaData().hasConcreteIndex(item.index())) {\n- responses.set(i, new MultiGetItemResponse(null, new MultiGetResponse.Failure(item.index(), item.type(), item.id(), new IndexNotFoundException(item.index()))));\n+ responses.set(i, newItemFailure(item.index(), item.type(), item.id(), new IndexNotFoundException(item.index())));\n continue;\n }\n- item.routing(clusterState.metaData().resolveIndexRouting(item.parent(), item.routing(), item.index()));\n- String concreteSingleIndex = 
indexNameExpressionResolver.concreteSingleIndex(clusterState, item).getName();\n- if (item.routing() == null && clusterState.getMetaData().routingRequired(concreteSingleIndex, item.type())) {\n- responses.set(i, new MultiGetItemResponse(null, new MultiGetResponse.Failure(concreteSingleIndex, item.type(), item.id(),\n- new IllegalArgumentException(\"routing is required for [\" + concreteSingleIndex + \"]/[\" + item.type() + \"]/[\" + item.id() + \"]\"))));\n+\n+ String concreteSingleIndex;\n+ try {\n+ item.routing(clusterState.metaData().resolveIndexRouting(item.parent(), item.routing(), item.index()));\n+ concreteSingleIndex = indexNameExpressionResolver.concreteSingleIndex(clusterState, item).getName();\n+\n+ if ((item.routing() == null) && (clusterState.getMetaData().routingRequired(concreteSingleIndex, item.type()))) {\n+ String message = \"routing is required for [\" + concreteSingleIndex + \"]/[\" + item.type() + \"]/[\" + item.id() + \"]\";\n+ responses.set(i, newItemFailure(concreteSingleIndex, item.type(), item.id(), new IllegalArgumentException(message)));\n+ continue;\n+ }\n+ } catch (Exception e) {\n+ responses.set(i, newItemFailure(item.index(), item.type(), item.id(), e));\n continue;\n }\n+\n ShardId shardId = clusterService.operationRouting()\n- .getShards(clusterState, concreteSingleIndex, item.id(), item.routing(), null).shardId();\n+ .getShards(clusterState, concreteSingleIndex, item.id(), item.routing(), null)\n+ .shardId();\n+\n MultiGetShardRequest shardRequest = shardRequests.get(shardId);\n if (shardRequest == null) {\n- shardRequest = new MultiGetShardRequest(request, shardId.getIndexName(), shardId.id());\n+ shardRequest = new MultiGetShardRequest(request, shardId.getIndexName(), shardId.getId());\n shardRequests.put(shardId, shardRequest);\n }\n shardRequest.add(i, item);\n }\n \n- if (shardRequests.size() == 0) {\n+ if (shardRequests.isEmpty()) {\n // only failures..\n listener.onResponse(new MultiGetResponse(responses.toArray(new MultiGetItemResponse[responses.length()])));\n }\n@@ -97,7 +108,8 @@ protected void doExecute(final MultiGetRequest request, final ActionListener<Mul\n @Override\n public void onResponse(MultiGetShardResponse response) {\n for (int i = 0; i < response.locations.size(); i++) {\n- responses.set(response.locations.get(i), new MultiGetItemResponse(response.responses.get(i), response.failures.get(i)));\n+ MultiGetItemResponse itemResponse = new MultiGetItemResponse(response.responses.get(i), response.failures.get(i));\n+ responses.set(response.locations.get(i), itemResponse);\n }\n if (counter.decrementAndGet() == 0) {\n finishHim();\n@@ -109,8 +121,7 @@ public void onFailure(Exception e) {\n // create failures for all relevant requests\n for (int i = 0; i < shardRequest.locations.size(); i++) {\n MultiGetRequest.Item item = shardRequest.items.get(i);\n- responses.set(shardRequest.locations.get(i), new MultiGetItemResponse(null,\n- new MultiGetResponse.Failure(shardRequest.index(), item.type(), item.id(), e)));\n+ responses.set(shardRequest.locations.get(i), newItemFailure(shardRequest.index(), item.type(), item.id(), e));\n }\n if (counter.decrementAndGet() == 0) {\n finishHim();\n@@ -123,4 +134,8 @@ private void finishHim() {\n });\n }\n }\n+\n+ private static MultiGetItemResponse newItemFailure(String index, String type, String id, Exception exception) {\n+ return new MultiGetItemResponse(null, new MultiGetResponse.Failure(index, type, id, exception));\n+ }\n }", "filename": 
"core/src/main/java/org/elasticsearch/action/get/TransportMultiGetAction.java", "status": "modified" }, { "diff": "@@ -0,0 +1,42 @@\n+---\n+\"Multi Get with alias that resolves to multiple indices\":\n+\n+ - do:\n+ bulk:\n+ refresh: true\n+ body: |\n+ {\"index\": {\"_index\": \"test_1\", \"_type\": \"test\", \"_id\": 1}}\n+ { \"foo\": \"bar\" }\n+ {\"index\": {\"_index\": \"test_2\", \"_type\": \"test\", \"_id\": 2}}\n+ { \"foo\": \"bar\" }\n+ {\"index\": {\"_index\": \"test_3\", \"_type\": \"test\", \"_id\": 3}}\n+ { \"foo\": \"bar\" }\n+\n+ - do:\n+ indices.put_alias:\n+ index: test_2\n+ name: test_two_and_three\n+\n+ - do:\n+ indices.put_alias:\n+ index: test_3\n+ name: test_two_and_three\n+\n+ - do:\n+ mget:\n+ body:\n+ docs:\n+ - { _index: test_1, _type: test, _id: 1}\n+ - { _index: test_two_and_three, _type: test, _id: 2}\n+\n+ - is_true: docs.0.found\n+ - match: { docs.0._index: test_1 }\n+ - match: { docs.0._type: test }\n+ - match: { docs.0._id: \"1\" }\n+\n+ - is_false: docs.1.found\n+ - match: { docs.1._index: test_two_and_three }\n+ - match: { docs.1._type: test }\n+ - match: { docs.1._id: \"2\" }\n+ - match: { docs.1.error.root_cause.0.type: \"illegal_argument_exception\" }\n+ - match: { docs.1.error.root_cause.0.reason: \"/Alias.\\\\[test_two_and_three\\\\].has.more.than.one.index.associated.with.it.\\\\[\\\\[test_[23]{1},.test_[23]{1}\\\\]\\\\],.can't.execute.a.single.index.op/\" }", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/mget/14_alias_to_multiple_indices.yaml", "status": "added" } ] }
{ "body": "After a failed restore, the restore state in the cluster state is not properly cleaned up, blocking future restores from starting. Only a full cluster restart cleans the restore state.\n\nSource: https://discuss.elastic.co/t/cannot-restore-snapshot-process-already-running/56746\nEnvironment: 10 node cluster / elasticsearch 2.3.4 / Debian Jessie / NFS repository type\n\nThe cluster state shows that there are indices (e.g. index-a) which exist in snapshot state but don’t exist in metadata. Then there are indices (e.g. index-b) that are marked as INIT in snapshot state but shard routings are all marked as STARTED.\n\nExcerpts from the anonymized cluster state:\n\n```\n \"restore\" : {\n \"snapshots\" : [ {\n \"snapshot\" : \"backup\",\n \"repository\" : \"backup\",\n \"state\" : \"STARTED\",\n \"indices\" : [ \"index-a\", \"index-b\", ... ],\n \"shards\" : [ {\n \"index\" : \"index-a\",\n \"shard\" : 0,\n \"state\" : \"SUCCESS\"\n }, {\n \"index\" : \"index-a\",\n \"shard\" : 1,\n \"state\" : \"FAILURE\"\n }, ...\n\n ..., {\n \"index\" : \"index-b\",\n \"shard\" : 0,\n \"state\" : \"INIT\"\n }, {\n \"index\" : \"index-b\",\n \"shard\" : 1,\n \"state\" : \"INIT\"\n }, {\n \"index\" : \"index-b\",\n \"shard\" : 2,\n \"state\" : \"SUCCESS\"\n },\n\n```\n\nthere is no indexmetadata/shardroutings for index-a in the cluster state but for index-b:\n\nIndexMetaData:\n\n```\n ...,\n \"index-b\" : {\n \"state\" : \"open\",\n \"settings\" : {\n \"index\" : {\n \"creation_date\" : \"...\",\n ...\n }\n },\n \"mappings\" : {\n ...\n },\n \"aliases\" : ...\n },\n```\n\nand\n\nShardRoutings:\n\n```\n ...,\n \"index-b\" : {\n \"shards\" : {\n \"2\" : [ {\n \"state\" : \"STARTED\",\n \"primary\" : false,\n \"node\" : \"HHYUOZZ7TqeapIamJeYU8w\",\n \"relocating_node\" : null,\n \"shard\" : 2,\n \"index\" : \"index-b\",\n \"version\" : 47,\n \"allocation_id\" : {\n \"id\" : \"2GELW1i0S7i8KPf_moWugg\"\n }\n }, {\n \"state\" : \"STARTED\",\n \"primary\" : true,\n \"node\" : \"r-kHLyr8Q3SlTt6ToVlq1A\",\n \"relocating_node\" : null,\n \"shard\" : 2,\n \"index\" : \"index-b\",\n \"version\" : 47,\n \"allocation_id\" : {\n \"id\" : \"S_O2zIM_ShG9g1oNCfeVew\"\n }\n }, {\n \"state\" : \"STARTED\",\n \"primary\" : false,\n \"node\" : \"_7WX9LH2TYyHEMykc-1s9w\",\n \"relocating_node\" : null,\n \"shard\" : 2,\n \"index\" : \"index-b\",\n \"version\" : 47,\n \"allocation_id\" : {\n \"id\" : \"-61Vup2mSnOrRKG2GfrY5Q\"\n }\n } ],\n \"1\" : [ {\n \"state\" : \"STARTED\",\n \"primary\" : true,\n \"node\" : \"vfw5y4jDTh-tiI2Gqc5fyA\",\n \"relocating_node\" : null,\n \"shard\" : 1,\n \"index\" : \"index-b\",\n \"version\" : 45,\n \"allocation_id\" : {\n \"id\" : \"N4mQmap3TfGZWVsD8B7f3Q\"\n }\n }, {\n \"state\" : \"STARTED\",\n \"primary\" : false,\n \"node\" : \"HHYUOZZ7TqeapIamJeYU8w\",\n \"relocating_node\" : null,\n \"shard\" : 1,\n \"index\" : \"index-b\",\n \"version\" : 45,\n \"allocation_id\" : {\n \"id\" : \"6EJtCi5uR8aRzDcow69yQQ\"\n }\n }, {\n \"state\" : \"STARTED\",\n \"primary\" : false,\n \"node\" : \"6IFVYG3ZT7iS0CZR6hhUsw\",\n \"relocating_node\" : null,\n \"shard\" : 1,\n \"index\" : \"index-b\",\n \"version\" : 45,\n \"allocation_id\" : {\n \"id\" : \"ufHda_oUQ6qm5jHSV50Crw\"\n }\n } ],\n \"0\" : [ {\n \"state\" : \"STARTED\",\n \"primary\" : false,\n \"node\" : \"h6XBV034RL-psP0qXW9vmw\",\n \"relocating_node\" : null,\n \"shard\" : 0,\n \"index\" : \"index-b\",\n \"version\" : 60,\n \"allocation_id\" : {\n \"id\" : \"9QFXNtkqQYCkYfJ_HJ7mbA\"\n }\n }, {\n \"state\" : \"STARTED\",\n \"primary\" : 
true,\n \"node\" : \"lm1ClSoQT561rp7xvgmFbg\",\n \"relocating_node\" : null,\n \"shard\" : 0,\n \"index\" : \"index-b\",\n \"version\" : 60,\n \"allocation_id\" : {\n \"id\" : \"YrCTXmZ6RVm07a7d5UYPVg\"\n }\n }, {\n \"state\" : \"STARTED\",\n \"primary\" : false,\n \"node\" : \"xmuJY7BYRO6893-XTpTnTQ\",\n \"relocating_node\" : null,\n \"shard\" : 0,\n \"index\" : \"index-b\",\n \"version\" : 60,\n \"allocation_id\" : {\n \"id\" : \"x2u628VdSHS3p8vl9e7ufA\"\n }\n } ]\n }\n }, ...\n```\n", "comments": [ { "body": "@ywelsch are there any messages in the log file? Have you seen anything about deleted indices, restarted nodes, changed masters?\n", "created_at": "2016-08-03T12:03:51Z" }, { "body": "From the discuss post, seems like `index-a` was deleted (which explains the missing indexmetadata/shardroutings).\n\nTo me the only open question is what's wrong with `index-b`.\n\nI think this can be caused when the message sent by indexShardRestoreCompleted fails to reach the master (or reaches a master that has stepped down). This means that the shard has been started, but restore shard state is still INIT (what we see here).\n", "created_at": "2016-08-03T12:19:37Z" } ], "number": 19774, "title": "Restoring a snapshot can leave cluster state in broken state" }
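For operators hitting the symptom above, the stale restore entry lives in the cluster state itself, so it can be spotted with a plain cluster state request and a look at the top-level `restore` section shown in the excerpts; as noted in the report, before the fix below only a full cluster restart removed such an entry.

```
# look for a "restore" section whose entries stay in STARTED with shards stuck in INIT
GET /_cluster/state
```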
{ "body": "The snapshot restore state tracks information about shards being restored from a snapshot in the cluster state. For example it records if a shard has been successfully restored or if restoring it was not possible due to a corruption of the snapshot. Recording these events is usually based on changes to the shard routing table, i.e., when a shard is started after a successful restore or failed after an unsuccessful one. As of now, there were two communication channels to transmit recovery failure / success to update the routing table and the restore state. This lead to issues where a shard was failed but the restore state was not updated due to connection issues between data and master node. In some rare situations, this lead to an issue where the restore state could not be properly cleaned up anymore by the master, making it impossible to start new restore operations. The following change updates routing table and restore state in the same cluster state update so that both always stay in sync. It also eliminates the extra communication channel for restore operations and uses the standard cluster state listener mechanism to update restore listener upon successful completion of a snapshot restore.\n\nCloses #19774\n", "number": 20836, "review_comments": [ { "body": "That doesn't seem to cause any issues, but I think moving this into the if statement bellow might help to clarify the logic. This method can be called when no restore takes place and we kind of plunge head on into updating restore info without even checking if restore actually takes place. We check it inside updateRestoreInfoWithRoutingChanges, but I think it might make the logic clearer if we checked it here. \n", "created_at": "2016-10-11T00:19:12Z" }, { "body": "Could you add a comment explaining why we only fail in case of lucene corruption?\n", "created_at": "2016-10-11T15:01:06Z" }, { "body": "agree\n", "created_at": "2016-10-11T16:25:55Z" }, { "body": "sure\n", "created_at": "2016-10-11T16:26:02Z" } ], "title": "Keep snapshot restore state and routing table in sync" }
{ "commits": [ { "message": "Update snapshot restore state and routing table in sync\n\nThe snapshot restore state tracks information about shards being restored from a snapshot in the cluster state. For\nexample it records if a shard has been successfully restored or if restoring it was not possible due to a corruption of\nthe snapshot. Recording these events is usually based on changes to the shard routing table, i.e., when a shard is\nstarted after a successful restore or failed after an unsuccessful one. As of now, there were two communication channels\nto transmit recovery failure / success to update the routing table and the restore state. This lead to issues where a\nshard was failed but the restore state was not updated due to connection issues between data and master node. In some\nrare situations, this lead to an issue where the restore state could not be properly cleaned up anymore by the master,\nmaking it impossible to start new restore operations. The following change updates routing table and restore state in\nthe same cluster state update so that both always stay in sync. It also eliminates the extra communication channel for\nrestore operations and uses standard cluster state listener mechanism to update restore listener upon successful\ncompletion of a snapshot." }, { "message": "address review comments" }, { "message": "Fix NPE" } ], "files": [ { "diff": "@@ -22,19 +22,27 @@\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.master.TransportMasterNodeAction;\n+import org.elasticsearch.cluster.ClusterChangedEvent;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.ClusterStateListener;\n+import org.elasticsearch.cluster.RestoreInProgress;\n import org.elasticsearch.cluster.block.ClusterBlockException;\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.service.ClusterService;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.snapshots.RestoreInfo;\n import org.elasticsearch.snapshots.RestoreService;\n+import org.elasticsearch.snapshots.RestoreService.RestoreCompletionResponse;\n import org.elasticsearch.snapshots.Snapshot;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n+import static org.elasticsearch.snapshots.RestoreService.restoreInProgress;\n+\n /**\n * Transport action for restore snapshot operation\n */\n@@ -78,28 +86,44 @@ protected void masterOperation(final RestoreSnapshotRequest request, final Clust\n request.settings(), request.masterNodeTimeout(), request.includeGlobalState(), request.partial(), request.includeAliases(),\n request.indexSettings(), request.ignoreIndexSettings(), \"restore_snapshot[\" + request.snapshot() + \"]\");\n \n- restoreService.restoreSnapshot(restoreRequest, new ActionListener<RestoreInfo>() {\n+ restoreService.restoreSnapshot(restoreRequest, new ActionListener<RestoreCompletionResponse>() {\n @Override\n- public void onResponse(RestoreInfo restoreInfo) {\n- if (restoreInfo == null && request.waitForCompletion()) {\n- restoreService.addListener(new ActionListener<RestoreService.RestoreCompletionResponse>() {\n+ public void 
onResponse(RestoreCompletionResponse restoreCompletionResponse) {\n+ if (restoreCompletionResponse.getRestoreInfo() == null && request.waitForCompletion()) {\n+ final Snapshot snapshot = restoreCompletionResponse.getSnapshot();\n+\n+ ClusterStateListener clusterStateListener = new ClusterStateListener() {\n @Override\n- public void onResponse(RestoreService.RestoreCompletionResponse restoreCompletionResponse) {\n- final Snapshot snapshot = restoreCompletionResponse.getSnapshot();\n- if (snapshot.getRepository().equals(request.repository()) &&\n- snapshot.getSnapshotId().getName().equals(request.snapshot())) {\n- listener.onResponse(new RestoreSnapshotResponse(restoreCompletionResponse.getRestoreInfo()));\n- restoreService.removeListener(this);\n+ public void clusterChanged(ClusterChangedEvent changedEvent) {\n+ final RestoreInProgress.Entry prevEntry = restoreInProgress(changedEvent.previousState(), snapshot);\n+ final RestoreInProgress.Entry newEntry = restoreInProgress(changedEvent.state(), snapshot);\n+ if (prevEntry == null) {\n+ // When there is a master failure after a restore has been started, this listener might not be registered\n+ // on the current master and as such it might miss some intermediary cluster states due to batching.\n+ // Clean up listener in that case and acknowledge completion of restore operation to client.\n+ clusterService.remove(this);\n+ listener.onResponse(new RestoreSnapshotResponse(null));\n+ } else if (newEntry == null) {\n+ clusterService.remove(this);\n+ ImmutableOpenMap<ShardId, RestoreInProgress.ShardRestoreStatus> shards = prevEntry.shards();\n+ assert prevEntry.state().completed() : \"expected completed snapshot state but was \" + prevEntry.state();\n+ assert RestoreService.completed(shards) : \"expected all restore entries to be completed\";\n+ RestoreInfo ri = new RestoreInfo(prevEntry.snapshot().getSnapshotId().getName(),\n+ prevEntry.indices(),\n+ shards.size(),\n+ shards.size() - RestoreService.failedShards(shards));\n+ RestoreSnapshotResponse response = new RestoreSnapshotResponse(ri);\n+ logger.debug(\"restore of [{}] completed\", snapshot);\n+ listener.onResponse(response);\n+ } else {\n+ // restore not completed yet, wait for next cluster state update\n }\n }\n+ };\n \n- @Override\n- public void onFailure(Exception e) {\n- listener.onFailure(e);\n- }\n- });\n+ clusterService.addLast(clusterStateListener);\n } else {\n- listener.onResponse(new RestoreSnapshotResponse(restoreInfo));\n+ listener.onResponse(new RestoreSnapshotResponse(restoreCompletionResponse.getRestoreInfo()));\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java", "status": "modified" }, { "diff": "@@ -23,20 +23,22 @@\n import org.elasticsearch.action.admin.indices.delete.DeleteIndexClusterStateUpdateRequest;\n import org.elasticsearch.cluster.AckedClusterStateUpdateTask;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.RestoreInProgress;\n import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;\n import org.elasticsearch.cluster.block.ClusterBlocks;\n import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Priority;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.component.AbstractComponent;\n import 
org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.index.Index;\n+import org.elasticsearch.snapshots.RestoreService;\n import org.elasticsearch.snapshots.SnapshotsService;\n \n-import java.util.Arrays;\n-import java.util.Collection;\n import java.util.Set;\n \n import static java.util.stream.Collectors.toSet;\n@@ -73,15 +75,15 @@ protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {\n \n @Override\n public ClusterState execute(final ClusterState currentState) {\n- return deleteIndices(currentState, Arrays.asList(request.indices()));\n+ return deleteIndices(currentState, Sets.newHashSet(request.indices()));\n }\n });\n }\n \n /**\n * Delete some indices from the cluster state.\n */\n- public ClusterState deleteIndices(ClusterState currentState, Collection<Index> indices) {\n+ public ClusterState deleteIndices(ClusterState currentState, Set<Index> indices) {\n final MetaData meta = currentState.metaData();\n final Set<IndexMetaData> metaDatas = indices.stream().map(i -> meta.getIndexSafe(i)).collect(toSet());\n // Check if index deletion conflicts with any running snapshots\n@@ -107,11 +109,25 @@ public ClusterState deleteIndices(ClusterState currentState, Collection<Index> i\n \n MetaData newMetaData = metaDataBuilder.build();\n ClusterBlocks blocks = clusterBlocksBuilder.build();\n+\n+ // update snapshot restore entries\n+ ImmutableOpenMap<String, ClusterState.Custom> customs = currentState.getCustoms();\n+ final RestoreInProgress restoreInProgress = currentState.custom(RestoreInProgress.TYPE);\n+ if (restoreInProgress != null) {\n+ RestoreInProgress updatedRestoreInProgress = RestoreService.updateRestoreStateWithDeletedIndices(restoreInProgress, indices);\n+ if (updatedRestoreInProgress != restoreInProgress) {\n+ ImmutableOpenMap.Builder<String, ClusterState.Custom> builder = ImmutableOpenMap.builder(customs);\n+ builder.put(RestoreInProgress.TYPE, updatedRestoreInProgress);\n+ customs = builder.build();\n+ }\n+ }\n+\n return allocationService.reroute(\n ClusterState.builder(currentState)\n .routingTable(routingTableBuilder.build())\n .metaData(newMetaData)\n .blocks(blocks)\n+ .customs(customs)\n .build(),\n \"deleted indices [\" + indices + \"]\");\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataDeleteIndexService.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.cluster.ClusterInfoService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.RestoreInProgress;\n import org.elasticsearch.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.cluster.health.ClusterStateHealth;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -34,6 +35,7 @@\n import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocator;\n import org.elasticsearch.cluster.routing.allocation.command.AllocationCommands;\n import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -94,17 +96,24 @@ public ClusterState applyStartedShards(ClusterState clusterState, List<ShardRout\n }\n \n protected ClusterState buildResultAndLogHealthChange(ClusterState oldState, RoutingAllocation allocation, String reason) 
{\n- return buildResultAndLogHealthChange(oldState, allocation, reason, new RoutingExplanations());\n- }\n-\n- protected ClusterState buildResultAndLogHealthChange(ClusterState oldState, RoutingAllocation allocation, String reason,\n- RoutingExplanations explanations) {\n RoutingTable oldRoutingTable = oldState.routingTable();\n RoutingNodes newRoutingNodes = allocation.routingNodes();\n final RoutingTable newRoutingTable = new RoutingTable.Builder().updateNodes(oldRoutingTable.version(), newRoutingNodes).build();\n MetaData newMetaData = allocation.updateMetaDataWithRoutingChanges(newRoutingTable);\n assert newRoutingTable.validate(newMetaData); // validates the routing table is coherent with the cluster state metadata\n- final ClusterState newState = ClusterState.builder(oldState).routingTable(newRoutingTable).metaData(newMetaData).build();\n+ final ClusterState.Builder newStateBuilder = ClusterState.builder(oldState)\n+ .routingTable(newRoutingTable)\n+ .metaData(newMetaData);\n+ final RestoreInProgress restoreInProgress = allocation.custom(RestoreInProgress.TYPE);\n+ if (restoreInProgress != null) {\n+ RestoreInProgress updatedRestoreInProgress = allocation.updateRestoreInfoWithRoutingChanges(restoreInProgress);\n+ if (updatedRestoreInProgress != restoreInProgress) {\n+ ImmutableOpenMap.Builder<String, ClusterState.Custom> customsBuilder = ImmutableOpenMap.builder(allocation.getCustoms());\n+ customsBuilder.put(RestoreInProgress.TYPE, updatedRestoreInProgress);\n+ newStateBuilder.customs(customsBuilder.build());\n+ }\n+ }\n+ final ClusterState newState = newStateBuilder.build();\n logClusterHealthStateChange(\n new ClusterStateHealth(oldState),\n new ClusterStateHealth(newState),", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.cluster.ClusterInfo;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.RestoreInProgress;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.RoutingChangesObserver;\n@@ -30,6 +31,8 @@\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.snapshots.RestoreService;\n+import org.elasticsearch.snapshots.RestoreService.RestoreInProgressUpdater;\n \n import java.util.HashMap;\n import java.util.HashSet;\n@@ -76,8 +79,9 @@ public class RoutingAllocation {\n \n private final IndexMetaDataUpdater indexMetaDataUpdater = new IndexMetaDataUpdater();\n private final RoutingNodesChangedObserver nodesChangedObserver = new RoutingNodesChangedObserver();\n+ private final RestoreInProgressUpdater restoreInProgressUpdater = new RestoreInProgressUpdater();\n private final RoutingChangesObserver routingChangesObserver = new RoutingChangesObserver.DelegatingRoutingChangesObserver(\n- nodesChangedObserver, indexMetaDataUpdater\n+ nodesChangedObserver, indexMetaDataUpdater, restoreInProgressUpdater\n );\n \n \n@@ -154,6 +158,10 @@ public <T extends ClusterState.Custom> T custom(String key) {\n return (T)customs.get(key);\n }\n \n+ public ImmutableOpenMap<String, ClusterState.Custom> getCustoms() {\n+ return customs;\n+ }\n+\n /**\n * Get explanations of current routing\n * @return explanation of routing\n@@ -234,6 +242,13 @@ public MetaData 
updateMetaDataWithRoutingChanges(RoutingTable newRoutingTable) {\n return indexMetaDataUpdater.applyChanges(metaData, newRoutingTable);\n }\n \n+ /**\n+ * Returns updated {@link RestoreInProgress} based on the changes that were made to the routing nodes\n+ */\n+ public RestoreInProgress updateRestoreInfoWithRoutingChanges(RestoreInProgress restoreInProgress) {\n+ return restoreInProgressUpdater.applyChanges(restoreInProgress);\n+ }\n+\n /**\n * Returns true iff changes were made to the routing nodes\n */", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingAllocation.java", "status": "modified" }, { "diff": "@@ -570,8 +570,7 @@ private void updateShard(DiscoveryNodes nodes, ShardRouting shardRouting, Shard\n \n /**\n * Finds the routing source node for peer recovery, return null if its not found. Note, this method expects the shard\n- * routing to *require* peer recovery, use {@link ShardRouting#recoverySource()} to\n- * check if its needed or not.\n+ * routing to *require* peer recovery, use {@link ShardRouting#recoverySource()} to check if its needed or not.\n */\n private static DiscoveryNode findSourceNodeForPeerRecovery(Logger logger, RoutingTable routingTable, DiscoveryNodes nodes,\n ShardRouting shardRouting) {\n@@ -610,29 +609,12 @@ private RecoveryListener(ShardRouting shardRouting) {\n \n @Override\n public void onRecoveryDone(RecoveryState state) {\n- if (state.getRecoverySource().getType() == Type.SNAPSHOT) {\n- SnapshotRecoverySource snapshotRecoverySource = (SnapshotRecoverySource) state.getRecoverySource();\n- restoreService.indexShardRestoreCompleted(snapshotRecoverySource.snapshot(), shardRouting.shardId());\n- }\n shardStateAction.shardStarted(shardRouting, \"after \" + state.getRecoverySource(), SHARD_STATE_ACTION_LISTENER);\n }\n \n @Override\n public void onRecoveryFailure(RecoveryState state, RecoveryFailedException e, boolean sendShardFailure) {\n- if (state.getRecoverySource().getType() == Type.SNAPSHOT) {\n- try {\n- if (Lucene.isCorruptionException(e.getCause())) {\n- SnapshotRecoverySource snapshotRecoverySource = (SnapshotRecoverySource) state.getRecoverySource();\n- restoreService.failRestore(snapshotRecoverySource.snapshot(), shardRouting.shardId());\n- }\n- } catch (Exception inner) {\n- e.addSuppressed(inner);\n- } finally {\n- handleRecoveryFailure(shardRouting, sendShardFailure, e);\n- }\n- } else {\n- handleRecoveryFailure(shardRouting, sendShardFailure, e);\n- }\n+ handleRecoveryFailure(shardRouting, sendShardFailure, e);\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import com.carrotsearch.hppc.IntSet;\n import com.carrotsearch.hppc.cursors.ObjectCursor;\n import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n+import org.apache.logging.log4j.Logger;\n import org.apache.logging.log4j.message.ParameterizedMessage;\n import org.apache.logging.log4j.util.Supplier;\n import org.elasticsearch.Version;\n@@ -30,6 +31,9 @@\n import org.elasticsearch.cluster.ClusterChangedEvent;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.ClusterStateListener;\n+import org.elasticsearch.cluster.ClusterStateTaskConfig;\n+import org.elasticsearch.cluster.ClusterStateTaskExecutor;\n+import org.elasticsearch.cluster.ClusterStateTaskListener;\n import org.elasticsearch.cluster.ClusterStateUpdateTask;\n import org.elasticsearch.cluster.RestoreInProgress;\n import 
org.elasticsearch.cluster.RestoreInProgress.ShardRestoreStatus;\n@@ -41,54 +45,43 @@\n import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService;\n import org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService;\n import org.elasticsearch.cluster.metadata.RepositoriesMetaData;\n-import org.elasticsearch.cluster.routing.IndexRoutingTable;\n-import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n+import org.elasticsearch.cluster.routing.RecoverySource;\n import org.elasticsearch.cluster.routing.RecoverySource.SnapshotRecoverySource;\n+import org.elasticsearch.cluster.routing.RoutingChangesObserver;\n import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.cluster.routing.UnassignedInfo;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.cluster.service.ClusterService;\n-import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n-import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n-import org.elasticsearch.common.io.stream.StreamInput;\n-import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.ClusterSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n+import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.repositories.RepositoriesService;\n import org.elasticsearch.repositories.Repository;\n import org.elasticsearch.repositories.RepositoryData;\n-import org.elasticsearch.threadpool.ThreadPool;\n-import org.elasticsearch.transport.EmptyTransportResponseHandler;\n-import org.elasticsearch.transport.TransportChannel;\n-import org.elasticsearch.transport.TransportRequest;\n-import org.elasticsearch.transport.TransportRequestHandler;\n-import org.elasticsearch.transport.TransportResponse;\n-import org.elasticsearch.transport.TransportService;\n-\n-import java.io.IOException;\n+\n import java.util.ArrayList;\n import java.util.Collections;\n import java.util.HashMap;\n import java.util.HashSet;\n import java.util.Iterator;\n import java.util.List;\n import java.util.Map;\n-import java.util.Map.Entry;\n import java.util.Objects;\n import java.util.Optional;\n import java.util.Set;\n-import java.util.concurrent.BlockingQueue;\n-import java.util.concurrent.CopyOnWriteArrayList;\n+import java.util.stream.Collectors;\n \n import static java.util.Collections.unmodifiableSet;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS;\n@@ -117,14 +110,12 @@\n * method, which detects that shard should be restored from snapshot rather than recovered from gateway by looking\n * at the {@link ShardRouting#recoverySource()} property.\n * <p>\n- * At the end of the successful restore process {@code IndexShardSnapshotAndRestoreService} calls {@link #indexShardRestoreCompleted(Snapshot, ShardId)},\n- * which updates {@link RestoreInProgress} in cluster state or 
removes it when all shards are completed. In case of\n+ * At the end of the successful restore process {@code RestoreService} calls {@link #cleanupRestoreState(ClusterChangedEvent)},\n+ * which removes {@link RestoreInProgress} when all shards are completed. In case of\n * restore failure a normal recovery fail-over process kicks in.\n */\n public class RestoreService extends AbstractComponent implements ClusterStateListener {\n \n- public static final String UPDATE_RESTORE_ACTION_NAME = \"internal:cluster/snapshot/update_restore\";\n-\n private static final Set<String> UNMODIFIABLE_SETTINGS = unmodifiableSet(newHashSet(\n SETTING_NUMBER_OF_SHARDS,\n SETTING_VERSION_CREATED,\n@@ -148,33 +139,29 @@ public class RestoreService extends AbstractComponent implements ClusterStateLis\n \n private final RepositoriesService repositoriesService;\n \n- private final TransportService transportService;\n-\n private final AllocationService allocationService;\n \n private final MetaDataCreateIndexService createIndexService;\n \n private final MetaDataIndexUpgradeService metaDataIndexUpgradeService;\n \n- private final CopyOnWriteArrayList<ActionListener<RestoreCompletionResponse>> listeners = new CopyOnWriteArrayList<>();\n-\n- private final BlockingQueue<UpdateIndexShardRestoreStatusRequest> updatedSnapshotStateQueue = ConcurrentCollections.newBlockingQueue();\n private final ClusterSettings clusterSettings;\n \n+ private final CleanRestoreStateTaskExecutor cleanRestoreStateTaskExecutor;\n+\n @Inject\n- public RestoreService(Settings settings, ClusterService clusterService, RepositoriesService repositoriesService, TransportService transportService,\n+ public RestoreService(Settings settings, ClusterService clusterService, RepositoriesService repositoriesService,\n AllocationService allocationService, MetaDataCreateIndexService createIndexService,\n MetaDataIndexUpgradeService metaDataIndexUpgradeService, ClusterSettings clusterSettings) {\n super(settings);\n this.clusterService = clusterService;\n this.repositoriesService = repositoriesService;\n- this.transportService = transportService;\n this.allocationService = allocationService;\n this.createIndexService = createIndexService;\n this.metaDataIndexUpgradeService = metaDataIndexUpgradeService;\n- transportService.registerRequestHandler(UPDATE_RESTORE_ACTION_NAME, UpdateIndexShardRestoreStatusRequest::new, ThreadPool.Names.SAME, new UpdateRestoreStateRequestHandler());\n clusterService.add(this);\n this.clusterSettings = clusterSettings;\n+ this.cleanRestoreStateTaskExecutor = new CleanRestoreStateTaskExecutor(logger);\n }\n \n /**\n@@ -183,7 +170,7 @@ public RestoreService(Settings settings, ClusterService clusterService, Reposito\n * @param request restore request\n * @param listener restore listener\n */\n- public void restoreSnapshot(final RestoreRequest request, final ActionListener<RestoreInfo> listener) {\n+ public void restoreSnapshot(final RestoreRequest request, final ActionListener<RestoreCompletionResponse> listener) {\n try {\n // Read snapshot info and metadata from the repository\n Repository repository = repositoriesService.repository(request.repositoryName);\n@@ -314,7 +301,7 @@ public ClusterState execute(ClusterState currentState) {\n }\n \n shards = shardsBuilder.build();\n- RestoreInProgress.Entry restoreEntry = new RestoreInProgress.Entry(snapshot, RestoreInProgress.State.INIT, Collections.unmodifiableList(new ArrayList<>(renamedIndices.keySet())), shards);\n+ RestoreInProgress.Entry restoreEntry = new 
RestoreInProgress.Entry(snapshot, overallState(RestoreInProgress.State.INIT, shards), Collections.unmodifiableList(new ArrayList<>(renamedIndices.keySet())), shards);\n builder.putCustom(RestoreInProgress.TYPE, new RestoreInProgress(restoreEntry));\n } else {\n shards = ImmutableOpenMap.of();\n@@ -469,7 +456,7 @@ public TimeValue timeout() {\n \n @Override\n public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {\n- listener.onResponse(restoreInfo);\n+ listener.onResponse(new RestoreCompletionResponse(snapshot, restoreInfo));\n }\n });\n \n@@ -480,19 +467,33 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n }\n }\n \n- /**\n- * This method is used by {@link IndexShard} to notify\n- * {@code RestoreService} about shard restore completion.\n- *\n- * @param snapshot snapshot\n- * @param shardId shard id\n- */\n- public void indexShardRestoreCompleted(Snapshot snapshot, ShardId shardId) {\n- logger.trace(\"[{}] successfully restored shard [{}]\", snapshot, shardId);\n- UpdateIndexShardRestoreStatusRequest request = new UpdateIndexShardRestoreStatusRequest(snapshot, shardId,\n- new ShardRestoreStatus(clusterService.state().nodes().getLocalNodeId(), RestoreInProgress.State.SUCCESS));\n- transportService.sendRequest(clusterService.state().nodes().getMasterNode(),\n- UPDATE_RESTORE_ACTION_NAME, request, EmptyTransportResponseHandler.INSTANCE_SAME);\n+ public static RestoreInProgress updateRestoreStateWithDeletedIndices(RestoreInProgress oldRestore, Set<Index> deletedIndices) {\n+ boolean changesMade = false;\n+ final List<RestoreInProgress.Entry> entries = new ArrayList<>();\n+ for (RestoreInProgress.Entry entry : oldRestore.entries()) {\n+ ImmutableOpenMap.Builder<ShardId, ShardRestoreStatus> shardsBuilder = null;\n+ for (ObjectObjectCursor<ShardId, ShardRestoreStatus> cursor : entry.shards()) {\n+ ShardId shardId = cursor.key;\n+ if (deletedIndices.contains(shardId.getIndex())) {\n+ changesMade = true;\n+ if (shardsBuilder == null) {\n+ shardsBuilder = ImmutableOpenMap.builder(entry.shards());\n+ }\n+ shardsBuilder.put(shardId, new ShardRestoreStatus(null, RestoreInProgress.State.FAILURE, \"index was deleted\"));\n+ }\n+ }\n+ if (shardsBuilder != null) {\n+ ImmutableOpenMap<ShardId, ShardRestoreStatus> shards = shardsBuilder.build();\n+ entries.add(new RestoreInProgress.Entry(entry.snapshot(), overallState(RestoreInProgress.State.STARTED, shards), entry.indices(), shards));\n+ } else {\n+ entries.add(entry);\n+ }\n+ }\n+ if (changesMade) {\n+ return new RestoreInProgress(entries.toArray(new RestoreInProgress.Entry[entries.size()]));\n+ } else {\n+ return oldRestore;\n+ }\n }\n \n public static final class RestoreCompletionResponse {\n@@ -513,168 +514,201 @@ public RestoreInfo getRestoreInfo() {\n }\n }\n \n- /**\n- * Updates shard restore record in the cluster state.\n- *\n- * @param request update shard status request\n- */\n- private void updateRestoreStateOnMaster(final UpdateIndexShardRestoreStatusRequest request) {\n- logger.trace(\"received updated snapshot restore state [{}]\", request);\n- updatedSnapshotStateQueue.add(request);\n+ public static class RestoreInProgressUpdater extends RoutingChangesObserver.AbstractRoutingChangesObserver {\n+ private final Map<Snapshot, Updates> shardChanges = new HashMap<>();\n \n- clusterService.submitStateUpdateTask(\"update snapshot state\", new ClusterStateUpdateTask() {\n- private final List<UpdateIndexShardRestoreStatusRequest> drainedRequests = new ArrayList<>();\n- 
private Map<Snapshot, Tuple<RestoreInfo, ImmutableOpenMap<ShardId, ShardRestoreStatus>>> batchedRestoreInfo = null;\n-\n- @Override\n- public ClusterState execute(ClusterState currentState) {\n+ @Override\n+ public void shardStarted(ShardRouting initializingShard, ShardRouting startedShard) {\n+ // mark snapshot as completed\n+ if (initializingShard.primary()) {\n+ RecoverySource recoverySource = initializingShard.recoverySource();\n+ if (recoverySource.getType() == RecoverySource.Type.SNAPSHOT) {\n+ Snapshot snapshot = ((SnapshotRecoverySource) recoverySource).snapshot();\n+ changes(snapshot).startedShards.put(initializingShard.shardId(),\n+ new ShardRestoreStatus(initializingShard.currentNodeId(), RestoreInProgress.State.SUCCESS));\n+ }\n+ }\n+ }\n \n- if (request.processed) {\n- return currentState;\n+ @Override\n+ public void shardFailed(ShardRouting failedShard, UnassignedInfo unassignedInfo) {\n+ if (failedShard.primary() && failedShard.initializing()) {\n+ RecoverySource recoverySource = failedShard.recoverySource();\n+ if (recoverySource.getType() == RecoverySource.Type.SNAPSHOT) {\n+ Snapshot snapshot = ((SnapshotRecoverySource) recoverySource).snapshot();\n+ // mark restore entry for this shard as failed when it's due to a file corruption. There is no need wait on retries\n+ // to restore this shard on another node if the snapshot files are corrupt. In case where a node just left or crashed,\n+ // however, we only want to acknowledge the restore operation once it has been successfully restored on another node.\n+ if (unassignedInfo.getFailure() != null && Lucene.isCorruptionException(unassignedInfo.getFailure().getCause())) {\n+ changes(snapshot).failedShards.put(failedShard.shardId(), new ShardRestoreStatus(failedShard.currentNodeId(),\n+ RestoreInProgress.State.FAILURE, unassignedInfo.getFailure().getCause().getMessage()));\n+ }\n }\n+ }\n+ }\n \n- updatedSnapshotStateQueue.drainTo(drainedRequests);\n+ @Override\n+ public void shardInitialized(ShardRouting unassignedShard, ShardRouting initializedShard) {\n+ // if we force an empty primary, we should also fail the restore entry\n+ if (unassignedShard.recoverySource().getType() == RecoverySource.Type.SNAPSHOT &&\n+ initializedShard.recoverySource().getType() != RecoverySource.Type.SNAPSHOT) {\n+ Snapshot snapshot = ((SnapshotRecoverySource) unassignedShard.recoverySource()).snapshot();\n+ changes(snapshot).failedShards.put(unassignedShard.shardId(), new ShardRestoreStatus(null,\n+ RestoreInProgress.State.FAILURE, \"recovery source type changed from snapshot to \" + initializedShard.recoverySource()));\n+ }\n+ }\n \n- final int batchSize = drainedRequests.size();\n+ /**\n+ * Helper method that creates update entry for the given shard id if such an entry does not exist yet.\n+ */\n+ private Updates changes(Snapshot snapshot) {\n+ return shardChanges.computeIfAbsent(snapshot, k -> new Updates());\n+ }\n \n- // nothing to process (a previous event has processed it already)\n- if (batchSize == 0) {\n- return currentState;\n- }\n+ private static class Updates {\n+ private Map<ShardId, ShardRestoreStatus> failedShards = new HashMap<>();\n+ private Map<ShardId, ShardRestoreStatus> startedShards = new HashMap<>();\n+ }\n \n- final RestoreInProgress restore = currentState.custom(RestoreInProgress.TYPE);\n- if (restore != null) {\n- int changedCount = 0;\n- final List<RestoreInProgress.Entry> entries = new ArrayList<>();\n- for (RestoreInProgress.Entry entry : restore.entries()) {\n- ImmutableOpenMap.Builder<ShardId, ShardRestoreStatus> 
shardsBuilder = null;\n-\n- for (int i = 0; i < batchSize; i++) {\n- final UpdateIndexShardRestoreStatusRequest updateSnapshotState = drainedRequests.get(i);\n- updateSnapshotState.processed = true;\n-\n- if (entry.snapshot().equals(updateSnapshotState.snapshot())) {\n- logger.trace(\"[{}] Updating shard [{}] with status [{}]\", updateSnapshotState.snapshot(), updateSnapshotState.shardId(), updateSnapshotState.status().state());\n- if (shardsBuilder == null) {\n- shardsBuilder = ImmutableOpenMap.builder(entry.shards());\n- }\n- shardsBuilder.put(updateSnapshotState.shardId(), updateSnapshotState.status());\n- changedCount++;\n- }\n+ public RestoreInProgress applyChanges(RestoreInProgress oldRestore) {\n+ if (shardChanges.isEmpty() == false) {\n+ final List<RestoreInProgress.Entry> entries = new ArrayList<>();\n+ for (RestoreInProgress.Entry entry : oldRestore.entries()) {\n+ Snapshot snapshot = entry.snapshot();\n+ Updates updates = shardChanges.get(snapshot);\n+ assert Sets.haveEmptyIntersection(updates.startedShards.keySet(), updates.failedShards.keySet());\n+ if (updates.startedShards.isEmpty() == false || updates.failedShards.isEmpty() == false) {\n+ ImmutableOpenMap.Builder<ShardId, ShardRestoreStatus> shardsBuilder = ImmutableOpenMap.builder(entry.shards());\n+ for (Map.Entry<ShardId, ShardRestoreStatus> startedShardEntry : updates.startedShards.entrySet()) {\n+ shardsBuilder.put(startedShardEntry.getKey(), startedShardEntry.getValue());\n }\n-\n- if (shardsBuilder != null) {\n- ImmutableOpenMap<ShardId, ShardRestoreStatus> shards = shardsBuilder.build();\n- if (!completed(shards)) {\n- entries.add(new RestoreInProgress.Entry(entry.snapshot(), RestoreInProgress.State.STARTED, entry.indices(), shards));\n- } else {\n- logger.info(\"restore [{}] is done\", entry.snapshot());\n- if (batchedRestoreInfo == null) {\n- batchedRestoreInfo = new HashMap<>();\n- }\n- assert !batchedRestoreInfo.containsKey(entry.snapshot());\n- batchedRestoreInfo.put(entry.snapshot(),\n- new Tuple<>(\n- new RestoreInfo(entry.snapshot().getSnapshotId().getName(),\n- entry.indices(),\n- shards.size(),\n- shards.size() - failedShards(shards)),\n- shards));\n- }\n- } else {\n- entries.add(entry);\n+ for (Map.Entry<ShardId, ShardRestoreStatus> failedShardEntry : updates.failedShards.entrySet()) {\n+ shardsBuilder.put(failedShardEntry.getKey(), failedShardEntry.getValue());\n }\n+ ImmutableOpenMap<ShardId, ShardRestoreStatus> shards = shardsBuilder.build();\n+ RestoreInProgress.State newState = overallState(RestoreInProgress.State.STARTED, shards);\n+ entries.add(new RestoreInProgress.Entry(entry.snapshot(), newState, entry.indices(), shards));\n+ } else {\n+ entries.add(entry);\n }\n+ }\n+ return new RestoreInProgress(entries.toArray(new RestoreInProgress.Entry[entries.size()]));\n+ } else {\n+ return oldRestore;\n+ }\n+ }\n \n- if (changedCount > 0) {\n- logger.trace(\"changed cluster state triggered by {} snapshot restore state updates\", changedCount);\n+ }\n \n- final RestoreInProgress updatedRestore = new RestoreInProgress(entries.toArray(new RestoreInProgress.Entry[entries.size()]));\n- return ClusterState.builder(currentState).putCustom(RestoreInProgress.TYPE, updatedRestore).build();\n- }\n+ public static RestoreInProgress.Entry restoreInProgress(ClusterState state, Snapshot snapshot) {\n+ final RestoreInProgress restoreInProgress = state.custom(RestoreInProgress.TYPE);\n+ if (restoreInProgress != null) {\n+ for (RestoreInProgress.Entry e : restoreInProgress.entries()) {\n+ if (e.snapshot().equals(snapshot)) 
{\n+ return e;\n }\n- return currentState;\n }\n+ }\n+ return null;\n+ }\n \n- @Override\n- public void onFailure(String source, @Nullable Exception e) {\n- for (UpdateIndexShardRestoreStatusRequest request : drainedRequests) {\n- logger.warn((Supplier<?>) () -> new ParameterizedMessage(\"[{}][{}] failed to update snapshot status to [{}]\", request.snapshot(), request.shardId(), request.status()), e);\n- }\n+ static class CleanRestoreStateTaskExecutor implements ClusterStateTaskExecutor<CleanRestoreStateTaskExecutor.Task>, ClusterStateTaskListener {\n+\n+ static class Task {\n+ final Snapshot snapshot;\n+\n+ Task(Snapshot snapshot) {\n+ this.snapshot = snapshot;\n }\n \n @Override\n- public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {\n- if (batchedRestoreInfo != null) {\n- for (final Entry<Snapshot, Tuple<RestoreInfo, ImmutableOpenMap<ShardId, ShardRestoreStatus>>> entry : batchedRestoreInfo.entrySet()) {\n- final Snapshot snapshot = entry.getKey();\n- final RestoreInfo restoreInfo = entry.getValue().v1();\n- final ImmutableOpenMap<ShardId, ShardRestoreStatus> shards = entry.getValue().v2();\n- RoutingTable routingTable = newState.getRoutingTable();\n- final List<ShardId> waitForStarted = new ArrayList<>();\n- for (ObjectObjectCursor<ShardId, ShardRestoreStatus> shard : shards) {\n- if (shard.value.state() == RestoreInProgress.State.SUCCESS ) {\n- ShardId shardId = shard.key;\n- ShardRouting shardRouting = findPrimaryShard(routingTable, shardId);\n- if (shardRouting != null && !shardRouting.active()) {\n- logger.trace(\"[{}][{}] waiting for the shard to start\", snapshot, shardId);\n- waitForStarted.add(shardId);\n- }\n- }\n- }\n- if (waitForStarted.isEmpty()) {\n- notifyListeners(snapshot, restoreInfo);\n- } else {\n- clusterService.addLast(new ClusterStateListener() {\n- @Override\n- public void clusterChanged(ClusterChangedEvent event) {\n- if (event.routingTableChanged()) {\n- RoutingTable routingTable = event.state().getRoutingTable();\n- for (Iterator<ShardId> iterator = waitForStarted.iterator(); iterator.hasNext();) {\n- ShardId shardId = iterator.next();\n- ShardRouting shardRouting = findPrimaryShard(routingTable, shardId);\n- // Shard disappeared (index deleted) or became active\n- if (shardRouting == null || shardRouting.active()) {\n- iterator.remove();\n- logger.trace(\"[{}][{}] shard disappeared or started - removing\", snapshot, shardId);\n- }\n- }\n- }\n- if (waitForStarted.isEmpty()) {\n- notifyListeners(snapshot, restoreInfo);\n- clusterService.remove(this);\n- }\n- }\n- });\n- }\n- }\n- }\n+ public String toString() {\n+ return \"clean restore state for restoring snapshot \" + snapshot;\n }\n+ }\n+\n+ private final Logger logger;\n+\n+ public CleanRestoreStateTaskExecutor(Logger logger) {\n+ this.logger = logger;\n+ }\n \n- private ShardRouting findPrimaryShard(RoutingTable routingTable, ShardId shardId) {\n- IndexRoutingTable indexRoutingTable = routingTable.index(shardId.getIndex());\n- if (indexRoutingTable != null) {\n- IndexShardRoutingTable indexShardRoutingTable = indexRoutingTable.shard(shardId.id());\n- if (indexShardRoutingTable != null) {\n- return indexShardRoutingTable.primaryShard();\n+ @Override\n+ public BatchResult<Task> execute(final ClusterState currentState, final List<Task> tasks) throws Exception {\n+ final BatchResult.Builder<Task> resultBuilder = BatchResult.<Task>builder().successes(tasks);\n+ Set<Snapshot> completedSnapshots = tasks.stream().map(e -> e.snapshot).collect(Collectors.toSet());\n+ final 
List<RestoreInProgress.Entry> entries = new ArrayList<>();\n+ final RestoreInProgress restoreInProgress = currentState.custom(RestoreInProgress.TYPE);\n+ boolean changed = false;\n+ if (restoreInProgress != null) {\n+ for (RestoreInProgress.Entry entry : restoreInProgress.entries()) {\n+ if (completedSnapshots.contains(entry.snapshot()) == false) {\n+ entries.add(entry);\n+ } else {\n+ changed = true;\n }\n }\n- return null;\n }\n+ if (changed == false) {\n+ return resultBuilder.build(currentState);\n+ }\n+ RestoreInProgress updatedRestoreInProgress = new RestoreInProgress(entries.toArray(new RestoreInProgress.Entry[entries.size()]));\n+ ImmutableOpenMap.Builder<String, ClusterState.Custom> builder = ImmutableOpenMap.builder(currentState.getCustoms());\n+ builder.put(RestoreInProgress.TYPE, updatedRestoreInProgress);\n+ ImmutableOpenMap<String, ClusterState.Custom> customs = builder.build();\n+ return resultBuilder.build(ClusterState.builder(currentState).customs(customs).build());\n+ }\n \n- private void notifyListeners(Snapshot snapshot, RestoreInfo restoreInfo) {\n- for (ActionListener<RestoreCompletionResponse> listener : listeners) {\n- try {\n- listener.onResponse(new RestoreCompletionResponse(snapshot, restoreInfo));\n- } catch (Exception e) {\n- logger.warn((Supplier<?>) () -> new ParameterizedMessage(\"failed to update snapshot status for [{}]\", listener), e);\n- }\n+ @Override\n+ public void onFailure(final String source, final Exception e) {\n+ logger.error((Supplier<?>) () -> new ParameterizedMessage(\"unexpected failure during [{}]\", source), e);\n+ }\n+\n+ @Override\n+ public void onNoLongerMaster(String source) {\n+ logger.debug(\"no longer master while processing restore state update [{}]\", source);\n+ }\n+\n+ }\n+\n+ private void cleanupRestoreState(ClusterChangedEvent event) {\n+ ClusterState state = event.state();\n+\n+ RestoreInProgress restoreInProgress = state.custom(RestoreInProgress.TYPE);\n+ if (restoreInProgress != null) {\n+ for (RestoreInProgress.Entry entry : restoreInProgress.entries()) {\n+ if (entry.state().completed()) {\n+ assert completed(entry.shards()) : \"state says completed but restore entries are not\";\n+ clusterService.submitStateUpdateTask(\n+ \"clean up snapshot restore state\",\n+ new CleanRestoreStateTaskExecutor.Task(entry.snapshot()),\n+ ClusterStateTaskConfig.build(Priority.URGENT),\n+ cleanRestoreStateTaskExecutor,\n+ cleanRestoreStateTaskExecutor);\n }\n }\n- });\n+ }\n+ }\n+\n+ public static RestoreInProgress.State overallState(RestoreInProgress.State nonCompletedState,\n+ ImmutableOpenMap<ShardId, RestoreInProgress.ShardRestoreStatus> shards) {\n+ boolean hasFailed = false;\n+ for (ObjectCursor<RestoreInProgress.ShardRestoreStatus> status : shards.values()) {\n+ if (!status.value.state().completed()) {\n+ return nonCompletedState;\n+ }\n+ if (status.value.state() == RestoreInProgress.State.FAILURE) {\n+ hasFailed = true;\n+ }\n+ }\n+ if (hasFailed) {\n+ return RestoreInProgress.State.FAILURE;\n+ } else {\n+ return RestoreInProgress.State.SUCCESS;\n+ }\n }\n \n- private boolean completed(ImmutableOpenMap<ShardId, RestoreInProgress.ShardRestoreStatus> shards) {\n+ public static boolean completed(ImmutableOpenMap<ShardId, RestoreInProgress.ShardRestoreStatus> shards) {\n for (ObjectCursor<RestoreInProgress.ShardRestoreStatus> status : shards.values()) {\n if (!status.value.state().completed()) {\n return false;\n@@ -683,7 +717,7 @@ private boolean completed(ImmutableOpenMap<ShardId, RestoreInProgress.ShardResto\n return true;\n }\n \n- 
private int failedShards(ImmutableOpenMap<ShardId, RestoreInProgress.ShardRestoreStatus> shards) {\n+ public static int failedShards(ImmutableOpenMap<ShardId, RestoreInProgress.ShardRestoreStatus> shards) {\n int failedShards = 0;\n for (ObjectCursor<RestoreInProgress.ShardRestoreStatus> status : shards.values()) {\n if (status.value.state() == RestoreInProgress.State.FAILURE) {\n@@ -727,53 +761,6 @@ private void validateSnapshotRestorable(final String repository, final SnapshotI\n }\n }\n \n- /**\n- * Checks if any of the deleted indices are still recovering and fails recovery on the shards of these indices\n- *\n- * @param event cluster changed event\n- */\n- private void processDeletedIndices(ClusterChangedEvent event) {\n- RestoreInProgress restore = event.state().custom(RestoreInProgress.TYPE);\n- if (restore == null) {\n- // Not restoring - nothing to do\n- return;\n- }\n-\n- if (!event.indicesDeleted().isEmpty()) {\n- // Some indices were deleted, let's make sure all indices that we are restoring still exist\n- for (RestoreInProgress.Entry entry : restore.entries()) {\n- List<ShardId> shardsToFail = null;\n- for (ObjectObjectCursor<ShardId, ShardRestoreStatus> shard : entry.shards()) {\n- if (!shard.value.state().completed()) {\n- if (!event.state().metaData().hasIndex(shard.key.getIndex().getName())) {\n- if (shardsToFail == null) {\n- shardsToFail = new ArrayList<>();\n- }\n- shardsToFail.add(shard.key);\n- }\n- }\n- }\n- if (shardsToFail != null) {\n- for (ShardId shardId : shardsToFail) {\n- logger.trace(\"[{}] failing running shard restore [{}]\", entry.snapshot(), shardId);\n- updateRestoreStateOnMaster(new UpdateIndexShardRestoreStatusRequest(entry.snapshot(), shardId, new ShardRestoreStatus(null, RestoreInProgress.State.FAILURE, \"index was deleted\")));\n- }\n- }\n- }\n- }\n- }\n-\n- /**\n- * Fails the given snapshot restore operation for the given shard\n- */\n- public void failRestore(Snapshot snapshot, ShardId shardId) {\n- logger.debug(\"[{}] failed to restore shard [{}]\", snapshot, shardId);\n- UpdateIndexShardRestoreStatusRequest request = new UpdateIndexShardRestoreStatusRequest(snapshot, shardId,\n- new ShardRestoreStatus(clusterService.state().nodes().getLocalNodeId(), RestoreInProgress.State.FAILURE));\n- transportService.sendRequest(clusterService.state().nodes().getMasterNode(),\n- UPDATE_RESTORE_ACTION_NAME, request, EmptyTransportResponseHandler.INSTANCE_SAME);\n- }\n-\n private boolean failed(SnapshotInfo snapshot, String index) {\n for (SnapshotShardFailure failure : snapshot.shardFailures()) {\n if (index.equals(failure.index())) {\n@@ -810,34 +797,11 @@ public static void checkIndexClosing(ClusterState currentState, Set<IndexMetaDat\n }\n }\n \n- /**\n- * Adds restore completion listener\n- * <p>\n- * This listener is called for each snapshot that finishes restore operation in the cluster. 
It's responsibility of\n- * the listener to decide if it's called for the appropriate snapshot or not.\n- *\n- * @param listener restore completion listener\n- */\n- public void addListener(ActionListener<RestoreCompletionResponse> listener) {\n- this.listeners.add(listener);\n- }\n-\n- /**\n- * Removes restore completion listener\n- * <p>\n- * This listener is called for each snapshot that finishes restore operation in the cluster.\n- *\n- * @param listener restore completion listener\n- */\n- public void removeListener(ActionListener<RestoreCompletionResponse> listener) {\n- this.listeners.remove(listener);\n- }\n-\n @Override\n public void clusterChanged(ClusterChangedEvent event) {\n try {\n if (event.localNodeMaster()) {\n- processDeletedIndices(event);\n+ cleanupRestoreState(event);\n }\n } catch (Exception t) {\n logger.warn(\"Failed to update restore state \", t);\n@@ -1061,69 +1025,4 @@ public TimeValue masterNodeTimeout() {\n }\n \n }\n-\n- /**\n- * Internal class that is used to send notifications about finished shard restore operations to master node\n- */\n- public static class UpdateIndexShardRestoreStatusRequest extends TransportRequest {\n- private Snapshot snapshot;\n- private ShardId shardId;\n- private ShardRestoreStatus status;\n-\n- volatile boolean processed; // state field, no need to serialize\n-\n- public UpdateIndexShardRestoreStatusRequest() {\n-\n- }\n-\n- private UpdateIndexShardRestoreStatusRequest(Snapshot snapshot, ShardId shardId, ShardRestoreStatus status) {\n- this.snapshot = snapshot;\n- this.shardId = shardId;\n- this.status = status;\n- }\n-\n- @Override\n- public void readFrom(StreamInput in) throws IOException {\n- super.readFrom(in);\n- snapshot = new Snapshot(in);\n- shardId = ShardId.readShardId(in);\n- status = ShardRestoreStatus.readShardRestoreStatus(in);\n- }\n-\n- @Override\n- public void writeTo(StreamOutput out) throws IOException {\n- super.writeTo(out);\n- snapshot.writeTo(out);\n- shardId.writeTo(out);\n- status.writeTo(out);\n- }\n-\n- public Snapshot snapshot() {\n- return snapshot;\n- }\n-\n- public ShardId shardId() {\n- return shardId;\n- }\n-\n- public ShardRestoreStatus status() {\n- return status;\n- }\n-\n- @Override\n- public String toString() {\n- return \"\" + snapshot + \", shardId [\" + shardId + \"], status [\" + status.state() + \"]\";\n- }\n- }\n-\n- /**\n- * Internal class that is used to send notifications about finished shard restore operations to master node\n- */\n- class UpdateRestoreStateRequestHandler implements TransportRequestHandler<UpdateIndexShardRestoreStatusRequest> {\n- @Override\n- public void messageReceived(UpdateIndexShardRestoreStatusRequest request, final TransportChannel channel) throws Exception {\n- updateRestoreStateOnMaster(request);\n- channel.sendResponse(TransportResponse.Empty.INSTANCE);\n- }\n- }\n }", "filename": "core/src/main/java/org/elasticsearch/snapshots/RestoreService.java", "status": "modified" }, { "diff": "@@ -33,7 +33,7 @@\n import static java.util.Collections.singletonList;\n import static org.hamcrest.Matchers.contains;\n import static org.mockito.Matchers.any;\n-import static org.mockito.Matchers.anyCollectionOf;\n+import static org.mockito.Matchers.anySetOf;\n import static org.mockito.Mockito.mock;\n import static org.mockito.Mockito.when;\n \n@@ -45,7 +45,7 @@ public class MetaDataIndexAliasesServiceTests extends ESTestCase {\n \n public MetaDataIndexAliasesServiceTests() {\n // Mock any deletes so we don't need to worry about how MetaDataDeleteIndexService does its 
job\n- when(deleteIndexService.deleteIndices(any(ClusterState.class), anyCollectionOf(Index.class))).then(i -> {\n+ when(deleteIndexService.deleteIndices(any(ClusterState.class), anySetOf(Index.class))).then(i -> {\n ClusterState state = (ClusterState) i.getArguments()[0];\n @SuppressWarnings(\"unchecked\")\n Collection<Index> indices = (Collection<Index>) i.getArguments()[1];", "filename": "core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesServiceTests.java", "status": "modified" }, { "diff": "@@ -686,6 +686,46 @@ public void testDataFileFailureDuringRestore() throws Exception {\n logger.info(\"--> total number of simulated failures during restore: [{}]\", getFailureCount(\"test-repo\"));\n }\n \n+ public void testDataFileCorruptionDuringRestore() throws Exception {\n+ Path repositoryLocation = randomRepoPath();\n+ Client client = client();\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(Settings.builder().put(\"location\", repositoryLocation)));\n+\n+ prepareCreate(\"test-idx\").setSettings(Settings.builder().put(\"index.allocation.max_retries\", Integer.MAX_VALUE)).get();\n+ ensureGreen();\n+\n+ logger.info(\"--> indexing some data\");\n+ for (int i = 0; i < 100; i++) {\n+ index(\"test-idx\", \"doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n+ }\n+ refresh();\n+ assertThat(client.prepareSearch(\"test-idx\").setSize(0).get().getHits().totalHits(), equalTo(100L));\n+\n+ logger.info(\"--> snapshot\");\n+ CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true).setIndices(\"test-idx\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().state(), equalTo(SnapshotState.SUCCESS));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().totalShards(), equalTo(createSnapshotResponse.getSnapshotInfo().successfulShards()));\n+\n+ logger.info(\"--> update repository with mock version\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"mock\").setSettings(\n+ Settings.builder()\n+ .put(\"location\", repositoryLocation)\n+ .put(\"random\", randomAsciiOfLength(10))\n+ .put(\"use_lucene_corruption\", true)\n+ .put(\"random_data_file_io_exception_rate\", 1.0)));\n+\n+ // Test restore after index deletion\n+ logger.info(\"--> delete index\");\n+ cluster().wipeIndices(\"test-idx\");\n+ logger.info(\"--> restore corrupt index\");\n+ RestoreSnapshotResponse restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true).execute().actionGet();\n+ assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n+ assertThat(restoreSnapshotResponse.getRestoreInfo().failedShards(), equalTo(restoreSnapshotResponse.getRestoreInfo().totalShards()));\n+ }\n+\n public void testDeletionOfFailingToRecoverIndexShouldStopRestore() throws Exception {\n Path repositoryLocation = randomRepoPath();\n Client client = client();\n@@ -2199,32 +2239,6 @@ public void testBatchingShardUpdateTask() throws Exception {\n assertFalse(snapshotListener.timedOut());\n // Check that cluster state update task was called only once\n assertEquals(1, snapshotListener.count());\n-\n- logger.info(\"--> close indices\");\n- client.admin().indices().prepareClose(\"test-idx\").get();\n-\n- BlockingClusterStateListener restoreListener = new BlockingClusterStateListener(clusterService, 
\"restore_snapshot[\", \"update snapshot state\", Priority.HIGH);\n-\n- try {\n- clusterService.addFirst(restoreListener);\n- logger.info(\"--> restore snapshot\");\n- ListenableActionFuture<RestoreSnapshotResponse> futureRestore = client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true).execute();\n-\n- // Await until shard updates are in pending state.\n- assertBusyPendingTasks(\"update snapshot state\", numberOfShards);\n- restoreListener.unblock();\n-\n- RestoreSnapshotResponse restoreSnapshotResponse = futureRestore.actionGet();\n- assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), equalTo(numberOfShards));\n-\n- } finally {\n- clusterService.remove(restoreListener);\n- }\n-\n- // Check that we didn't timeout\n- assertFalse(restoreListener.timedOut());\n- // Check that cluster state update task was called only once\n- assertEquals(1, restoreListener.count());\n }\n \n public void testSnapshotName() throws Exception {", "filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import java.util.concurrent.ConcurrentMap;\n import java.util.concurrent.atomic.AtomicLong;\n \n+import org.apache.lucene.index.CorruptIndexException;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.metadata.RepositoryMetaData;\n@@ -81,6 +82,8 @@ public long getFailureCount() {\n \n private final double randomDataFileIOExceptionRate;\n \n+ private final boolean useLuceneCorruptionException;\n+\n private final long maximumNumberOfFailures;\n \n private final long waitAfterUnblock;\n@@ -101,6 +104,7 @@ public MockRepository(RepositoryMetaData metadata, Environment environment) thro\n super(overrideSettings(metadata, environment), environment);\n randomControlIOExceptionRate = metadata.settings().getAsDouble(\"random_control_io_exception_rate\", 0.0);\n randomDataFileIOExceptionRate = metadata.settings().getAsDouble(\"random_data_file_io_exception_rate\", 0.0);\n+ useLuceneCorruptionException = metadata.settings().getAsBoolean(\"use_lucene_corruption\", false);\n maximumNumberOfFailures = metadata.settings().getAsLong(\"max_failure_number\", 100L);\n blockOnControlFiles = metadata.settings().getAsBoolean(\"block_on_control\", false);\n blockOnDataFiles = metadata.settings().getAsBoolean(\"block_on_data\", false);\n@@ -245,7 +249,11 @@ private void maybeIOExceptionOrBlock(String blobName) throws IOException {\n if (blobName.startsWith(\"__\")) {\n if (shouldFail(blobName, randomDataFileIOExceptionRate) && (incrementAndGetFailureCount() < maximumNumberOfFailures)) {\n logger.info(\"throwing random IOException for file [{}] at path [{}]\", blobName, path());\n- throw new IOException(\"Random IOException\");\n+ if (useLuceneCorruptionException) {\n+ throw new CorruptIndexException(\"Random corruption\", \"random file\");\n+ } else {\n+ throw new IOException(\"Random IOException\");\n+ }\n } else if (blockOnDataFiles) {\n logger.info(\"blocking I/O operation for file [{}] at path [{}]\", blobName, path());\n if (blockExecution() && waitAfterUnblock > 0) {", "filename": "core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepository.java", "status": "modified" } ] }
{ "body": "We added a changed to disallow this in #19510, however, existing indices\nmay still be using this, so we should only disallow the setting on\nindices created after 5.0.\n\nResolves #20413\n", "comments": [ { "body": "@rjernst pushed commits following your feedback, thanks for taking a look!\n", "created_at": "2016-10-07T17:21:37Z" }, { "body": "@dakrone this should go back to 2.4.2 as well, no?\n", "created_at": "2016-10-07T17:26:29Z" }, { "body": "@clintongormley I was actually going to revert the change (https://github.com/elastic/elasticsearch/commit/f91e605c769016bd871e36bfc9709b47eacafcc2) that was on the 2.4 branch rather than backport a fix, since it was a backwards-incompatible change.\n", "created_at": "2016-10-07T17:27:45Z" }, { "body": "Rather than revert the 2.4 change, I think it should emit a deprecation warning? And also, be a no-op (it should not actually set the increment gap as it does not do anything since positions are not indexed). Actually the same comment for the current change in 5.0, I think it should warn in the \"lenient\" case?\n", "created_at": "2016-10-07T17:30:47Z" }, { "body": "@rjernst is https://github.com/elastic/elasticsearch/pull/20806/commits/38a9adedf342c852c489653ca4a8ad95adbc98e7 more like what you had in mind?\n", "created_at": "2016-10-07T17:48:59Z" }, { "body": "Yes, great, although you might change the wording of the log message to point out it is ignored.\n", "created_at": "2016-10-07T18:25:56Z" }, { "body": "Alright, I'll adjust the message, merge this, and open a PR to change the 2.4.2 behavior to be a deprecation warning instead of a hard error.\n", "created_at": "2016-10-07T19:50:59Z" } ], "number": 20806, "title": "Allow position_gap_increment for fields in indices created prior to 5.0" }
{ "body": "Instead of throwing a hard error, we should log about this and ignore\r\nthe setting.\r\n\r\nRelates to #20806\r\nRelates to #20413", "number": 20810, "review_comments": [], "title": "Log about deprecated position_increment_gap setting on non-analyzed fields" }
{ "commits": [ { "message": "Log about deprecated position_increment_gap setting on non-analyzed fields\n\nInstead of throwing a hard error, we should log about this and ignore\nthe setting.\n\nRelates to #20806" } ], "files": [ { "diff": "@@ -26,6 +26,8 @@\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n@@ -87,6 +89,8 @@ public static int positionIncrementGap(Version version) {\n \n public static class Builder extends FieldMapper.Builder<Builder, StringFieldMapper> {\n \n+ private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(StringFieldMapper.class));\n+\n protected String nullValue = Defaults.NULL_VALUE;\n \n /**\n@@ -143,12 +147,13 @@ public StringFieldMapper build(BuilderContext context) {\n }\n if (positionIncrementGap != POSITION_INCREMENT_GAP_USE_ANALYZER) {\n if (fieldType.indexOptions().compareTo(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS) < 0) {\n- throw new IllegalArgumentException(\"Cannot set position_increment_gap on field [\"\n- + name + \"] without positions enabled\");\n+ DEPRECATION_LOGGER.deprecated(\"Ignoring deprecated position_increment_gap on field [\"\n+ + name + \"] without positions enabled\");\n+ } else {\n+ fieldType.setIndexAnalyzer(new NamedAnalyzer(fieldType.indexAnalyzer(), positionIncrementGap));\n+ fieldType.setSearchAnalyzer(new NamedAnalyzer(fieldType.searchAnalyzer(), positionIncrementGap));\n+ fieldType.setSearchQuoteAnalyzer(new NamedAnalyzer(fieldType.searchQuoteAnalyzer(), positionIncrementGap));\n }\n- fieldType.setIndexAnalyzer(new NamedAnalyzer(fieldType.indexAnalyzer(), positionIncrementGap));\n- fieldType.setSearchAnalyzer(new NamedAnalyzer(fieldType.searchAnalyzer(), positionIncrementGap));\n- fieldType.setSearchQuoteAnalyzer(new NamedAnalyzer(fieldType.searchQuoteAnalyzer(), positionIncrementGap));\n }\n setupFieldType(context);\n StringFieldMapper fieldMapper = new StringFieldMapper(", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/StringFieldMapper.java", "status": "modified" }, { "diff": "@@ -651,40 +651,4 @@ public void testSearchAnalyzer() throws Exception {\n assertThat(e.getMessage(), equalTo(\"analyzer on field [field1] must be set when search_analyzer is set\"));\n }\n }\n-\n- public void testNonAnalyzedFieldPositionIncrement() throws IOException {\n- for (String index : Arrays.asList(\"no\", \"not_analyzed\")) {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\").startObject(\"field\")\n- .field(\"type\", \"string\")\n- .field(\"index\", index)\n- .field(\"position_increment_gap\", 10)\n- .endObject().endObject().endObject().endObject().string();\n-\n- try {\n- parser.parse(\"type\", new CompressedXContent(mapping));\n- fail(\"expected failure\");\n- } catch (IllegalArgumentException e) {\n- assertEquals(\"Cannot set position_increment_gap on field [field] without positions enabled\", e.getMessage());\n- }\n- }\n- }\n-\n- public void testAnalyzedFieldPositionIncrementWithoutPositions() throws IOException {\n- for (String indexOptions : Arrays.asList(\"docs\", \"freqs\")) {\n- String mapping = 
XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\").startObject(\"field\")\n- .field(\"type\", \"string\")\n- .field(\"index_options\", indexOptions)\n- .field(\"position_increment_gap\", 10)\n- .endObject().endObject().endObject().endObject().string();\n- \n- try {\n- parser.parse(\"type\", new CompressedXContent(mapping));\n- fail(\"expected failure\");\n- } catch (IllegalArgumentException e) {\n- assertEquals(\"Cannot set position_increment_gap on field [field] without positions enabled\", e.getMessage());\n- }\n- }\n- }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/string/SimpleStringMappingTests.java", "status": "modified" } ] }
{ "body": "In #19510 position_increment_gap is no longer allowed on fields where position are not stored.\nIt's a breaking change that prevents some indices running on 2.3.4 to be upgraded to 2.4.\n\nInitially I thought that it could break some use cases with the shingle token filter but it does not appear to be the case.\n\nThe problem here (I think) is that a breaking change has been introduced into the 2.4 branch.\n", "comments": [ { "body": "@nomoa thanks - yeah it wasn't the intention to introduce a breaking change in a minor upgrade.\n\n@rjernst we should add a bwc layer so that we can upgrade these indices. will need to apply to 5.0 as well i think (correct @jpountz ?)\n", "created_at": "2016-09-13T11:44:33Z" }, { "body": "Correct.\n", "created_at": "2016-09-23T13:56:44Z" }, { "body": "@jpountz @clintongormley what's the status on this?\n", "created_at": "2016-10-07T14:16:54Z" }, { "body": "@dakrone ^^\n", "created_at": "2016-10-07T14:17:05Z" }, { "body": "@s1monw I have not started on this, I'll start working on it today.\n", "created_at": "2016-10-07T15:55:52Z" }, { "body": "Resolved by https://github.com/elastic/elasticsearch/pull/20806 and https://github.com/elastic/elasticsearch/pull/20810\n", "created_at": "2016-10-10T16:03:58Z" }, { "body": "Thanks!\n", "created_at": "2016-10-10T16:49:35Z" } ], "number": 20413, "title": "position_increment_gap should be allowed for analyzed fields where positions are not stored" }
{ "body": "Instead of throwing a hard error, we should log about this and ignore\r\nthe setting.\r\n\r\nRelates to #20806\r\nRelates to #20413", "number": 20810, "review_comments": [], "title": "Log about deprecated position_increment_gap setting on non-analyzed fields" }
{ "commits": [ { "message": "Log about deprecated position_increment_gap setting on non-analyzed fields\n\nInstead of throwing a hard error, we should log about this and ignore\nthe setting.\n\nRelates to #20806" } ], "files": [ { "diff": "@@ -26,6 +26,8 @@\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n@@ -87,6 +89,8 @@ public static int positionIncrementGap(Version version) {\n \n public static class Builder extends FieldMapper.Builder<Builder, StringFieldMapper> {\n \n+ private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(StringFieldMapper.class));\n+\n protected String nullValue = Defaults.NULL_VALUE;\n \n /**\n@@ -143,12 +147,13 @@ public StringFieldMapper build(BuilderContext context) {\n }\n if (positionIncrementGap != POSITION_INCREMENT_GAP_USE_ANALYZER) {\n if (fieldType.indexOptions().compareTo(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS) < 0) {\n- throw new IllegalArgumentException(\"Cannot set position_increment_gap on field [\"\n- + name + \"] without positions enabled\");\n+ DEPRECATION_LOGGER.deprecated(\"Ignoring deprecated position_increment_gap on field [\"\n+ + name + \"] without positions enabled\");\n+ } else {\n+ fieldType.setIndexAnalyzer(new NamedAnalyzer(fieldType.indexAnalyzer(), positionIncrementGap));\n+ fieldType.setSearchAnalyzer(new NamedAnalyzer(fieldType.searchAnalyzer(), positionIncrementGap));\n+ fieldType.setSearchQuoteAnalyzer(new NamedAnalyzer(fieldType.searchQuoteAnalyzer(), positionIncrementGap));\n }\n- fieldType.setIndexAnalyzer(new NamedAnalyzer(fieldType.indexAnalyzer(), positionIncrementGap));\n- fieldType.setSearchAnalyzer(new NamedAnalyzer(fieldType.searchAnalyzer(), positionIncrementGap));\n- fieldType.setSearchQuoteAnalyzer(new NamedAnalyzer(fieldType.searchQuoteAnalyzer(), positionIncrementGap));\n }\n setupFieldType(context);\n StringFieldMapper fieldMapper = new StringFieldMapper(", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/StringFieldMapper.java", "status": "modified" }, { "diff": "@@ -651,40 +651,4 @@ public void testSearchAnalyzer() throws Exception {\n assertThat(e.getMessage(), equalTo(\"analyzer on field [field1] must be set when search_analyzer is set\"));\n }\n }\n-\n- public void testNonAnalyzedFieldPositionIncrement() throws IOException {\n- for (String index : Arrays.asList(\"no\", \"not_analyzed\")) {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\").startObject(\"field\")\n- .field(\"type\", \"string\")\n- .field(\"index\", index)\n- .field(\"position_increment_gap\", 10)\n- .endObject().endObject().endObject().endObject().string();\n-\n- try {\n- parser.parse(\"type\", new CompressedXContent(mapping));\n- fail(\"expected failure\");\n- } catch (IllegalArgumentException e) {\n- assertEquals(\"Cannot set position_increment_gap on field [field] without positions enabled\", e.getMessage());\n- }\n- }\n- }\n-\n- public void testAnalyzedFieldPositionIncrementWithoutPositions() throws IOException {\n- for (String indexOptions : Arrays.asList(\"docs\", \"freqs\")) {\n- String mapping = 
XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\").startObject(\"field\")\n- .field(\"type\", \"string\")\n- .field(\"index_options\", indexOptions)\n- .field(\"position_increment_gap\", 10)\n- .endObject().endObject().endObject().endObject().string();\n- \n- try {\n- parser.parse(\"type\", new CompressedXContent(mapping));\n- fail(\"expected failure\");\n- } catch (IllegalArgumentException e) {\n- assertEquals(\"Cannot set position_increment_gap on field [field] without positions enabled\", e.getMessage());\n- }\n- }\n- }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/string/SimpleStringMappingTests.java", "status": "modified" } ] }
{ "body": "In #19510 position_increment_gap is no longer allowed on fields where position are not stored.\nIt's a breaking change that prevents some indices running on 2.3.4 to be upgraded to 2.4.\n\nInitially I thought that it could break some use cases with the shingle token filter but it does not appear to be the case.\n\nThe problem here (I think) is that a breaking change has been introduced into the 2.4 branch.\n", "comments": [ { "body": "@nomoa thanks - yeah it wasn't the intention to introduce a breaking change in a minor upgrade.\n\n@rjernst we should add a bwc layer so that we can upgrade these indices. will need to apply to 5.0 as well i think (correct @jpountz ?)\n", "created_at": "2016-09-13T11:44:33Z" }, { "body": "Correct.\n", "created_at": "2016-09-23T13:56:44Z" }, { "body": "@jpountz @clintongormley what's the status on this?\n", "created_at": "2016-10-07T14:16:54Z" }, { "body": "@dakrone ^^\n", "created_at": "2016-10-07T14:17:05Z" }, { "body": "@s1monw I have not started on this, I'll start working on it today.\n", "created_at": "2016-10-07T15:55:52Z" }, { "body": "Resolved by https://github.com/elastic/elasticsearch/pull/20806 and https://github.com/elastic/elasticsearch/pull/20810\n", "created_at": "2016-10-10T16:03:58Z" }, { "body": "Thanks!\n", "created_at": "2016-10-10T16:49:35Z" } ], "number": 20413, "title": "position_increment_gap should be allowed for analyzed fields where positions are not stored" }
{ "body": "We added a changed to disallow this in #19510, however, existing indices\nmay still be using this, so we should only disallow the setting on\nindices created after 5.0.\n\nResolves #20413\n", "number": 20806, "review_comments": [ { "body": "created version should not be able to be null. We have numerous checks that depend on it in the base FieldMapper.\n", "created_at": "2016-10-07T17:09:11Z" }, { "body": "The point is now just to see there is no error? Then there is no reason to create a local var if there are no assertions done on it?\n", "created_at": "2016-10-07T17:11:12Z" }, { "body": "Text fields are new in 5.0, how would they be created before 5.0?\n", "created_at": "2016-10-07T17:12:06Z" }, { "body": "Alright, I'll remove those checks\n", "created_at": "2016-10-07T17:13:24Z" }, { "body": "It's more of a cautionary tale. We unfortunately currently allow people to change to created version setting on the index in some cases, and in those cases we should throw an error.\n", "created_at": "2016-10-07T17:14:19Z" }, { "body": "Ahh yeah, leftover from other times testing it, I'll remove :)\n", "created_at": "2016-10-07T17:14:43Z" }, { "body": "Hrm, except we don't as of 5.0? Index created version can only be overriden in tests.\n", "created_at": "2016-10-07T17:16:50Z" }, { "body": "And if someone is messing with this, they can deal with a failure. I don't think we should add leniency for hacking the system.\n", "created_at": "2016-10-07T17:17:45Z" }, { "body": "Ah great, I'm glad we don't allow that in 5.0! I'll remove this check then.\n", "created_at": "2016-10-07T17:20:11Z" } ], "title": "Allow position_gap_increment for fields in indices created prior to 5.0" }
{ "commits": [ { "message": "Allow position_gap_increment for fields in indices created prior to 5.0\n\nWe added a changed to disallow this in #19510, however, existing indices\nmay still be using this, so we should only disallow the setting on\nindices created after 5.0.\n\nResolves #20413" } ], "files": [ { "diff": "@@ -88,6 +88,8 @@ public static class Defaults {\n \n public static class Builder extends FieldMapper.Builder<Builder, StringFieldMapper> {\n \n+ private final DeprecationLogger deprecationLogger;\n+\n protected String nullValue = Defaults.NULL_VALUE;\n \n /**\n@@ -102,6 +104,8 @@ public static class Builder extends FieldMapper.Builder<Builder, StringFieldMapp\n public Builder(String name) {\n super(name, Defaults.FIELD_TYPE, Defaults.FIELD_TYPE);\n builder = this;\n+ Logger logger = Loggers.getLogger(getClass());\n+ this.deprecationLogger = new DeprecationLogger(logger);\n }\n \n @Override\n@@ -169,12 +173,18 @@ public StringFieldMapper build(BuilderContext context) {\n }\n if (positionIncrementGap != POSITION_INCREMENT_GAP_USE_ANALYZER) {\n if (fieldType.indexOptions().compareTo(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS) < 0) {\n- throw new IllegalArgumentException(\"Cannot set position_increment_gap on field [\"\n- + name + \"] without positions enabled\");\n+ if (context.indexCreatedVersion().onOrAfter(Version.V_5_0_0_alpha1)) {\n+ throw new IllegalArgumentException(\"Cannot set position_increment_gap on field [\"\n+ + name + \"] without positions enabled\");\n+ } else {\n+ deprecationLogger.deprecated(\"setting position_increment_gap on field [{}] without positions enabled \" +\n+ \"is deprecated and will be ignored\", name);\n+ }\n+ } else {\n+ fieldType.setIndexAnalyzer(new NamedAnalyzer(fieldType.indexAnalyzer(), positionIncrementGap));\n+ fieldType.setSearchAnalyzer(new NamedAnalyzer(fieldType.searchAnalyzer(), positionIncrementGap));\n+ fieldType.setSearchQuoteAnalyzer(new NamedAnalyzer(fieldType.searchQuoteAnalyzer(), positionIncrementGap));\n }\n- fieldType.setIndexAnalyzer(new NamedAnalyzer(fieldType.indexAnalyzer(), positionIncrementGap));\n- fieldType.setSearchAnalyzer(new NamedAnalyzer(fieldType.searchAnalyzer(), positionIncrementGap));\n- fieldType.setSearchQuoteAnalyzer(new NamedAnalyzer(fieldType.searchQuoteAnalyzer(), positionIncrementGap));\n }\n setupFieldType(context);\n return new StringFieldMapper(", "filename": "core/src/main/java/org/elasticsearch/index/mapper/StringFieldMapper.java", "status": "modified" }, { "diff": "@@ -731,9 +731,8 @@ public void testNonAnalyzedFieldPositionIncrement() throws IOException {\n .field(\"position_increment_gap\", 10)\n .endObject().endObject().endObject().endObject().string();\n \n- IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n- () -> parser.parse(\"type\", new CompressedXContent(mapping)));\n- assertEquals(\"Cannot set position_increment_gap on field [field] without positions enabled\", e.getMessage());\n+ // allowed in index created before 5.0\n+ parser.parse(\"type\", new CompressedXContent(mapping));\n }\n }\n \n@@ -746,9 +745,8 @@ public void testAnalyzedFieldPositionIncrementWithoutPositions() throws IOExcept\n .field(\"position_increment_gap\", 10)\n .endObject().endObject().endObject().endObject().string();\n \n- IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n- () -> parser.parse(\"type\", new CompressedXContent(mapping)));\n- assertEquals(\"Cannot set position_increment_gap on field [field] without positions enabled\", e.getMessage());\n+ // allowed 
in index created before 5.0\n+ parser.parse(\"type\", new CompressedXContent(mapping));\n }\n }\n ", "filename": "core/src/test/java/org/elasticsearch/index/mapper/LegacyStringMappingTests.java", "status": "modified" } ] }
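The two rows above (issue 20413 and PR 20806) describe a version-gated fix: reject position_increment_gap on a field without positions only for indices created on 5.0 or later, and merely log a deprecation warning and ignore the setting for indices carried over from 2.x. As a distilled, hypothetical restatement of that pattern (the class and logger below are illustrative and are not Elasticsearch's actual StringFieldMapper code), a sketch could look like this:

```java
import java.util.logging.Logger;

// Hypothetical sketch of version-gated leniency: fail hard for new indices,
// warn and ignore for legacy indices so upgrades keep working.
final class PositionGapValidator {

    private static final Logger DEPRECATION_LOG = Logger.getLogger("deprecation");

    /**
     * Decides what to do with position_increment_gap on a field that does not index positions.
     *
     * @param indexCreatedMajorVersion major version the index was created with (assumed to be tracked per index)
     * @param fieldName                name of the field being mapped
     */
    static void checkGapWithoutPositions(int indexCreatedMajorVersion, String fieldName) {
        if (indexCreatedMajorVersion >= 5) {
            // new indices: reject outright, the setting cannot have any effect without positions
            throw new IllegalArgumentException(
                "Cannot set position_increment_gap on field [" + fieldName + "] without positions enabled");
        }
        // legacy indices: keep accepting the mapping so upgrades do not break, but warn and ignore the setting
        DEPRECATION_LOG.warning("setting position_increment_gap on field [" + fieldName
            + "] without positions enabled is deprecated and will be ignored");
    }
}
```

The point of the gate is that mappings created under 2.x continue to parse after an upgrade, while newly created mappings cannot rely on a setting that has no effect.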
{ "body": "Start two nodes of Elasticsearch, intentionally binding to the same port:\n\n```\n$ bin/elasticsearch -E transport.tcp.port=9300 -E node.max_local_storage_nodes=2\n```\n\nWait for this instance to start, then from another terminal:\n\n```\n$ bin/elasticsearch -E transport.tcp.port=9300 -E node.max_local_storage_nodes=2\n```\n\nThe second node will fail to bind, as expected. However, instead of displaying an already bound exception, Log4j fails in the presence of the security manager and loses the original exception instead producing:\n\n```\n2016-09-02 13:46:33,968 main ERROR An exception occurred processing Appender rolling java.security.AccessControlException: access denied (\"java.lang.RuntimePermission\" \"accessClassInPackage.sun.nio.ch\")\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)\n at java.security.AccessController.checkPermission(AccessController.java:884)\n at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)\n at java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1564)\n at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:311)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:357)\n at java.lang.Class.forName0(Native Method)\n at java.lang.Class.forName(Class.java:264)\n at org.apache.logging.log4j.util.LoaderUtil.loadClass(LoaderUtil.java:122)\n at org.apache.logging.log4j.core.util.Loader.loadClass(Loader.java:228)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.loadClass(ThrowableProxy.java:496)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.toExtendedStackTrace(ThrowableProxy.java:617)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:163)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:138)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:117)\n at org.apache.logging.log4j.core.impl.MutableLogEvent.getThrownProxy(MutableLogEvent.java:314)\n at org.apache.logging.log4j.core.pattern.ExtendedThrowablePatternConverter.format(ExtendedThrowablePatternConverter.java:61)\n at org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:38)\n at org.apache.logging.log4j.core.layout.PatternLayout$PatternSerializer.toSerializable(PatternLayout.java:294)\n at org.apache.logging.log4j.core.layout.PatternLayout.toText(PatternLayout.java:195)\n at org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:180)\n at org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:57)\n at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:120)\n at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:113)\n at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:104)\n at org.apache.logging.log4j.core.appender.RollingFileAppender.append(RollingFileAppender.java:86)\n at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:155)\n at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:128)\n at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:119)\n at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)\n at 
org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:390)\n at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:375)\n at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:359)\n at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:349)\n at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63)\n at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:146)\n at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:1988)\n at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1960)\n at org.apache.logging.log4j.spi.AbstractLogger.error(AbstractLogger.java:733)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:281)\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:100)\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:95)\n at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:88)\n at org.elasticsearch.cli.Command.main(Command.java:54)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:74)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)\n```\n\nThis is due to Log4j attempting to load a class that it does not have the permissions to load and this exception goes uncaught.\n\nThis is not the only place that this occurs. Effectively, it can occur anywhere the native-like code is executing (think networking, filesystem access, etc.) and an exception is thrown (there are reports of four different examples of this already, the above is just the simplest reproduction).\n", "comments": [ { "body": "I will submit a patch to Log4j for this. Depending on their release plans, I will add a hack to get around this.\n", "created_at": "2016-09-02T17:50:13Z" }, { "body": "I submitted https://issues.apache.org/jira/browse/LOG4J2-1560 to Log4j.\n", "created_at": "2016-09-02T18:12:38Z" }, { "body": "@jasontedor do we have a workaround for this? Maybe we can remove the blocker label if that is the case?\n", "created_at": "2016-10-07T14:13:37Z" } ], "number": 20304, "title": "Log4j can lose critical exceptions" }
{ "body": "This commit upgrades the Log4j 2 dependency to version 2.7 and removes\nsome hacks that we had in place to work around bugs in Log4j 2 version\n2.6.2.\n\nCloses #20304\n", "number": 20805, "review_comments": [], "title": "Upgrade Log4j 2 to version 2.7" }
{ "commits": [ { "message": "Upgrade Log4j 2 to version 2.7\n\nThis commit upgrades the Log4j 2 dependency to version 2.7 and removes\nsome hacks that we had in place to work around bugs in Log4j 2 version\n2.6.2." } ], "files": [ { "diff": "@@ -6,7 +6,7 @@ spatial4j = 0.6\n jts = 1.13\n jackson = 2.8.1\n snakeyaml = 1.15\n-log4j = 2.6.2\n+log4j = 2.7\n slf4j = 1.6.2\n jna = 4.2.2\n ", "filename": "buildSrc/version.properties", "status": "modified" }, { "diff": "@@ -158,6 +158,10 @@ thirdPartyAudit.excludes = [\n 'com.fasterxml.jackson.databind.ObjectMapper',\n \n // from log4j\n+ 'com.beust.jcommander.IStringConverter',\n+ 'com.beust.jcommander.JCommander',\n+ 'com.conversantmedia.util.concurrent.DisruptorBlockingQueue',\n+ 'com.conversantmedia.util.concurrent.SpinPolicy',\n 'com.fasterxml.jackson.annotation.JsonInclude$Include',\n 'com.fasterxml.jackson.databind.DeserializationContext',\n 'com.fasterxml.jackson.databind.JsonMappingException',\n@@ -176,6 +180,10 @@ thirdPartyAudit.excludes = [\n 'com.fasterxml.jackson.dataformat.xml.JacksonXmlModule',\n 'com.fasterxml.jackson.dataformat.xml.XmlMapper',\n 'com.fasterxml.jackson.dataformat.xml.util.DefaultXmlPrettyPrinter',\n+ 'com.fasterxml.jackson.databind.node.JsonNodeFactory',\n+ 'com.fasterxml.jackson.databind.node.ObjectNode',\n+ 'org.fusesource.jansi.Ansi',\n+ 'org.fusesource.jansi.AnsiRenderer$Code',\n 'com.lmax.disruptor.BlockingWaitStrategy',\n 'com.lmax.disruptor.BusySpinWaitStrategy',\n 'com.lmax.disruptor.EventFactory',\n@@ -228,6 +236,8 @@ thirdPartyAudit.excludes = [\n 'org.apache.kafka.clients.producer.Producer',\n 'org.apache.kafka.clients.producer.ProducerRecord',\n 'org.codehaus.stax2.XMLStreamWriter2',\n+ 'org.jctools.queues.MessagePassingQueue$Consumer',\n+ 'org.jctools.queues.MpscArrayQueue',\n 'org.osgi.framework.AdaptPermission',\n 'org.osgi.framework.AdminPermission',\n 'org.osgi.framework.Bundle',", "filename": "core/build.gradle", "status": "modified" }, { "diff": "@@ -272,20 +272,6 @@ static void checkClass(Map<String,Path> clazzes, String clazz, Path jarpath) {\n \"class: \" + clazz + System.lineSeparator() +\n \"exists multiple times in jar: \" + jarpath + \" !!!!!!!!!\");\n } else {\n- if (clazz.startsWith(\"org.apache.logging.log4j.core.impl.ThrowableProxy\")) {\n- /*\n- * deliberate to hack around a bug in Log4j\n- * cf. https://github.com/elastic/elasticsearch/issues/20304\n- * cf. https://issues.apache.org/jira/browse/LOG4J2-1560\n- */\n- return;\n- } else if (clazz.startsWith(\"org.apache.logging.log4j.core.jmx.Server\")) {\n- /*\n- * deliberate to hack around a bug in Log4j\n- * cf. 
https://issues.apache.org/jira/browse/LOG4J2-1506\n- */\n- return;\n- }\n throw new IllegalStateException(\"jar hell!\" + System.lineSeparator() +\n \"class: \" + clazz + System.lineSeparator() +\n \"jar1: \" + previous + System.lineSeparator() +", "filename": "core/src/main/java/org/elasticsearch/bootstrap/JarHell.java", "status": "modified" }, { "diff": "@@ -99,7 +99,7 @@ private static void configure(final Settings settings, final Path configsPath, f\n @Override\n public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {\n if (file.getFileName().toString().equals(\"log4j2.properties\")) {\n- configurations.add((PropertiesConfiguration) factory.getConfiguration(file.toString(), file.toUri()));\n+ configurations.add((PropertiesConfiguration) factory.getConfiguration(context, file.toString(), file.toUri()));\n }\n return FileVisitResult.CONTINUE;\n }", "filename": "core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java", "status": "modified" }, { "diff": "@@ -111,18 +111,6 @@ public void testDirAndJar() throws Exception {\n }\n }\n \n- public void testLog4jThrowableProxyLeniency() throws Exception {\n- Path dir = createTempDir();\n- URL[] jars = {makeJar(dir, \"foo.jar\", null, \"org.apache.logging.log4j.core.impl.ThrowableProxy.class\"), makeJar(dir, \"bar.jar\", null, \"org.apache.logging.log4j.core.impl.ThrowableProxy.class\")};\n- JarHell.checkJarHell(jars);\n- }\n-\n- public void testLog4jServerLeniency() throws Exception {\n- Path dir = createTempDir();\n- URL[] jars = {makeJar(dir, \"foo.jar\", null, \"org.apache.logging.log4j.core.jmx.Server.class\"), makeJar(dir, \"bar.jar\", null, \"org.apache.logging.log4j.core.jmx.Server.class\")};\n- JarHell.checkJarHell(jars);\n- }\n-\n public void testWithinSingleJar() throws Exception {\n // the java api for zip file does not allow creating duplicate entries (good!) so\n // this bogus jar had to be constructed with ant", "filename": "core/src/test/java/org/elasticsearch/bootstrap/JarHellTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1 @@\n+39f4e6c2d68d4ef8fd4b0883d165682dedd5be52\n\\ No newline at end of file", "filename": "distribution/licenses/log4j-1.2-api-2.7.jar.sha1", "status": "added" }, { "diff": "@@ -0,0 +1 @@\n+8de00e382a817981b737be84cb8def687d392963\n\\ No newline at end of file", "filename": "distribution/licenses/log4j-api-2.7.jar.sha1", "status": "added" }, { "diff": "@@ -0,0 +1 @@\n+a3f2b4e64c61a7fc1ed8f1e5ba371933404ed98a\n\\ No newline at end of file", "filename": "distribution/licenses/log4j-core-2.7.jar.sha1", "status": "added" } ] }
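For readers unfamiliar with the JarHell check that the removed leniency hack lived in, here is a hypothetical stand-alone sketch of the duplicate-class detection that the upgrade to Log4j 2.7 makes strict again. The jar paths in `main` are placeholders, and `DuplicateClassCheck` is an illustrative class, not the real `JarHell` implementation.

```java
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

/**
 * Hypothetical stand-alone sketch of a "jar hell" duplicate-class check:
 * with Log4j 2.7 no class is allowed to appear in two jars anymore.
 */
public class DuplicateClassCheck {

    static void check(Path... jars) throws IOException {
        Map<String, Path> seen = new HashMap<>(); // class name -> first jar that contained it
        for (Path jar : jars) {
            try (JarFile jarFile = new JarFile(jar.toFile())) {
                jarFile.stream()
                       .map(JarEntry::getName)
                       .filter(name -> name.endsWith(".class"))
                       .forEach(clazz -> {
                           Path previous = seen.putIfAbsent(clazz, jar);
                           if (previous != null && !previous.equals(jar)) {
                               throw new IllegalStateException(
                                   "jar hell! class " + clazz + " in both " + previous + " and " + jar);
                           }
                       });
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Placeholder paths; point these at any two jars you want to compare.
        check(Paths.get("lib/log4j-core-2.7.jar"), Paths.get("lib/log4j-api-2.7.jar"));
    }
}
```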
{ "body": "Today when parsing a request, Elasticsearch silently ignores incorrect\n(including parameters with typos) or unused parameters. This is bad as\nit leads to requests having unintended behavior (e.g., if a user hits\nthe _analyze API and misspell the \"tokenizer\" then Elasticsearch will\njust use the standard analyzer, completely against intentions).\n\nThis commit removes lenient URL parameter parsing. The strategy is\nsimple: when a request is handled and a parameter is touched, we mark it\nas such. Before the request is actually executed, we check to ensure\nthat all parameters have been consumed. If there are remaining\nparameters yet to be consumed, we fail the request with a list of the\nunconsumed parameters. An exception has to be made for parameters that\nformat the response (as opposed to controlling the request); for this\ncase, handlers are able to provide a list of parameters that should be\nexcluded from tripping the unconsumed parameters check because those\nparameters will be used in formatting the response.\n\nAdditionally, some inconsistencies between the parameters in the code\nand in the docs are corrected.\n\nCloses #14719\n", "comments": [ { "body": "I'm as torn as @s1monw on versioning. I suppose getting this into 5.1 violates semver because garbage on the URL would be identified. I'd love to have it in 5.0 but I think it violates the spirit of code freeze to try and do it. It is super tempting though.\n", "created_at": "2016-10-04T12:33:20Z" }, { "body": "After conversation with @nik9000 and @s1monw, we have decided the leniency here is big enough a bug that we want to ship this code as early as possible. The options on the table were:\n- 5.0.0\n- 5.1.0\n- 6.0.0\n\nWe felt that 5.1.0 is not a viable option, it is too breaking of a change during a minor release, and waiting until 6.0.0 is waiting too long to fix this bug so we will ship this in 5.0.0.\n", "created_at": "2016-10-04T16:14:28Z" } ], "number": 20722, "title": "Remove lenient URL parameter parsing" }
{ "body": "This pull request adds notes to the breaking changes docs for #20722 and\n#20786.\n", "number": 20801, "review_comments": [ { "body": "s/accepted/ignored/ ?\n", "created_at": "2016-10-07T12:55:45Z" }, { "body": "I changed this wording, thanks @nik9000.\n", "created_at": "2016-10-07T15:08:40Z" } ], "title": "Add docs for strict REST params parsing" }
{ "commits": [ { "message": "Add breaking docs for strict REST params parsing\n\nThis commit adds a note to the breaking changes docs for the strict REST\nquery string parameter parsing change." }, { "message": "Add breaking docs for secondary preference change\n\nThis commit adds a note to the breaking changes docs for the secondary\npreference separator change." } ], "files": [ { "diff": "@@ -1,7 +1,16 @@\n-\n [[breaking_50_rest_api_changes]]\n === REST API changes\n \n+==== Strict REST query string parameter parsing\n+\n+Previous versions of Elasticsearch ignored unrecognized URL query\n+string parameters. This means that extraneous parameters or parameters\n+containing typographical errors would be silently accepted by\n+Elasticsearch. This is dangerous from an end-user perspective because it\n+means a submitted request will silently execute not as intended. This\n+leniency has been removed and Elasticsearch will now fail any request\n+that contains unrecognized query string parameters.\n+\n ==== id values longer than 512 bytes are rejected\n \n When specifying an `_id` value longer than 512 bytes, the request will be", "filename": "docs/reference/migration/migrate_5_0/rest.asciidoc", "status": "modified" }, { "diff": "@@ -217,6 +217,15 @@ been superseded by `_prefer_nodes`. By specifying a single node,\n `_prefer_nodes` provides the same functionality as `_prefer_node` but\n also supports specifying multiple nodes.\n \n+The <<search-request-preference,search preference>> `_shards` accepts a\n+secondary preference, for example `_primary` to specify the primary copy\n+of the specified shards. The separator previously used to separate the\n+`_shards` portion of the parameter from the secondary preference was\n+`;`. However, this is also an acceptable separator between query string\n+parameters which means that unless the `;` was escaped, the secondary\n+preference was never observed. The separator has been changed to `|` and\n+does not need to be escaped.\n+\n ==== Default similarity\n \n The default similarity has been changed to `BM25`.", "filename": "docs/reference/migration/migrate_5_0/search.asciidoc", "status": "modified" } ] }
{ "body": "ScriptedQuery can't be cached today and we should prevent it from happening. We might be able to relax this when we have #20762 in but for now we should just not cache\n", "comments": [], "number": 20763, "title": "Prevent ScriptedQuery form being cached in the query cache" }
{ "body": "The cache relies on the equals() method so we just need to make sure script\nqueries can never be equal, even to themselves in the case that a weight\nis used to produce a Scorer on the same segment multiple times.\n\nCloses #20763\n", "number": 20799, "review_comments": [ { "body": "I guess you have to `return this == obj` to ensure QueryUtils dont' go wild?\n", "created_at": "2016-10-07T13:18:02Z" }, { "body": "Caching could still happen in that case if you use the same weight create a Scorer on the same segment multiple times.\n", "created_at": "2016-10-07T13:31:15Z" }, { "body": "how can somebody get the SAME query unless it's the SAME query?\n", "created_at": "2016-10-07T14:21:45Z" } ], "title": "Do not cache script queries." }
{ "commits": [ { "message": "Do not cache script queries.\n\nThe cache relies on the equals() method so we just need to make sure script\nqueries can never be equals, even to themselves in the case that a weight\nis used to produce a Scorer on the same segment multiple times.\n\nCloses #20763" }, { "message": "iter" } ], "files": [ { "diff": "@@ -138,9 +138,8 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n \n static class ScriptQuery extends Query {\n \n- private final Script script;\n-\n- private final SearchScript searchScript;\n+ final Script script;\n+ final SearchScript searchScript;\n \n public ScriptQuery(Script script, SearchScript searchScript) {\n this.script = script;\n@@ -158,17 +157,23 @@ public String toString(String field) {\n \n @Override\n public boolean equals(Object obj) {\n- if (this == obj)\n+ // TODO: Do this if/when we can assume scripts are pure functions\n+ // and they have a reliable equals impl\n+ /*if (this == obj)\n return true;\n if (sameClassAs(obj) == false)\n return false;\n ScriptQuery other = (ScriptQuery) obj;\n- return Objects.equals(script, other.script);\n+ return Objects.equals(script, other.script);*/\n+ return this == obj;\n }\n \n @Override\n public int hashCode() {\n- return Objects.hash(classHash(), script);\n+ // TODO: Do this if/when we can assume scripts are pure functions\n+ // and they have a reliable equals impl\n+ // return Objects.hash(classHash(), script);\n+ return System.identityHashCode(this);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/index/query/ScriptQueryBuilder.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.index.query;\n \n import org.apache.lucene.search.Query;\n+import org.elasticsearch.index.query.ScriptQueryBuilder.ScriptQuery;\n import org.elasticsearch.script.MockScriptEngine;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService.ScriptType;\n@@ -41,9 +42,19 @@ protected ScriptQueryBuilder doCreateTestQueryBuilder() {\n return new ScriptQueryBuilder(new Script(script, ScriptType.INLINE, MockScriptEngine.NAME, params));\n }\n \n+ @Override\n+ protected boolean builderGeneratesCacheableQueries() {\n+ return false;\n+ }\n+\n @Override\n protected void doAssertLuceneQuery(ScriptQueryBuilder queryBuilder, Query query, SearchContext context) throws IOException {\n assertThat(query, instanceOf(ScriptQueryBuilder.ScriptQuery.class));\n+ // make sure the query would not get cached\n+ ScriptQuery sQuery = (ScriptQuery) query;\n+ ScriptQuery clone = new ScriptQuery(sQuery.script, sQuery.searchScript);\n+ assertFalse(sQuery.equals(clone));\n+ assertFalse(sQuery.hashCode() == clone.hashCode());\n }\n \n public void testIllegalConstructorArg() {", "filename": "core/src/test/java/org/elasticsearch/index/query/ScriptQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -572,6 +572,13 @@ private static QueryBuilder parseQuery(XContentParser parser, ParseFieldMatcher\n return parseInnerQueryBuilder;\n }\n \n+ /**\n+ * Whether the queries produced by this builder are expected to be cacheable.\n+ */\n+ protected boolean builderGeneratesCacheableQueries() {\n+ return true;\n+ }\n+\n /**\n * Test creates the {@link Query} from the {@link QueryBuilder} under test and delegates the\n * assertions being made on the result to the implementing subclass.\n@@ -618,8 +625,10 @@ public void testToQuery() throws IOException {\n assertNotNull(\"toQuery should not return null\", secondLuceneQuery);\n 
assertLuceneQuery(secondQuery, secondLuceneQuery, searchContext);\n \n- assertEquals(\"two equivalent query builders lead to different lucene queries\",\n- rewrite(secondLuceneQuery), rewrite(firstLuceneQuery));\n+ if (builderGeneratesCacheableQueries()) {\n+ assertEquals(\"two equivalent query builders lead to different lucene queries\",\n+ rewrite(secondLuceneQuery), rewrite(firstLuceneQuery));\n+ }\n \n if (supportsBoostAndQueryName()) {\n secondQuery.boost(firstQuery.boost() + 1f + randomFloat());", "filename": "test/framework/src/main/java/org/elasticsearch/test/AbstractQueryTestCase.java", "status": "modified" } ] }
{ "body": "Today when parsing a request, Elasticsearch silently ignores incorrect\n(including parameters with typos) or unused parameters. This is bad as\nit leads to requests having unintended behavior (e.g., if a user hits\nthe _analyze API and misspell the \"tokenizer\" then Elasticsearch will\njust use the standard analyzer, completely against intentions).\n\nThis commit removes lenient URL parameter parsing. The strategy is\nsimple: when a request is handled and a parameter is touched, we mark it\nas such. Before the request is actually executed, we check to ensure\nthat all parameters have been consumed. If there are remaining\nparameters yet to be consumed, we fail the request with a list of the\nunconsumed parameters. An exception has to be made for parameters that\nformat the response (as opposed to controlling the request); for this\ncase, handlers are able to provide a list of parameters that should be\nexcluded from tripping the unconsumed parameters check because those\nparameters will be used in formatting the response.\n\nAdditionally, some inconsistencies between the parameters in the code\nand in the docs are corrected.\n\nCloses #14719\n", "comments": [ { "body": "I'm as torn as @s1monw on versioning. I suppose getting this into 5.1 violates semver because garbage on the URL would be identified. I'd love to have it in 5.0 but I think it violates the spirit of code freeze to try and do it. It is super tempting though.\n", "created_at": "2016-10-04T12:33:20Z" }, { "body": "After conversation with @nik9000 and @s1monw, we have decided the leniency here is big enough a bug that we want to ship this code as early as possible. The options on the table were:\n- 5.0.0\n- 5.1.0\n- 6.0.0\n\nWe felt that 5.1.0 is not a viable option, it is too breaking of a change during a minor release, and waiting until 6.0.0 is waiting too long to fix this bug so we will ship this in 5.0.0.\n", "created_at": "2016-10-04T16:14:28Z" } ], "number": 20722, "title": "Remove lenient URL parameter parsing" }
{ "body": "The shards preference on a search request enables specifying a list of\nshards to hit, and then a secondary preference (e.g., \"_primary\") can be\nadded. Today, the separator between the shards list and the secondary\npreference is ';'. Unfortunately, this is also a valid separtor for URL\nquery parameters. This means that a preference like \"_shards:0;_primary\"\nwill be parsed into two URL parameters: \"_shards:0\" and \"_primary\". With\nthe recent change to strict URL parsing, the second parameter will be\nrejected, \"_primary\" is not a valid URL parameter on a search\nrequest. This means that this feature has never worked (unless the ';'\nis escaped, but no one does that because our docs do not that, and there\nwas no indication from Elasticsearch that this did not work). This\ncommit changes the separator to '|'.\n\nCloses #20769, relates #20722\n", "number": 20786, "review_comments": [], "title": "Change separator for shards preference" }
{ "commits": [ { "message": "Change separator for shards preference\n\nThe shards preference on a search request enables specifying a list of\nshards to hit, and then a secondary preference (e.g., \"_primary\") can be\nadded. Today, the separator between the shards list and the secondary\npreference is ';'. Unfortunately, this is also a valid separtor for URL\nquery parameters. This means that a preference like \"_shards:0;_primary\"\nwill be parsed into two URL parameters: \"_shards:0\" and \"_primary\". With\nthe recent change to strict URL parsing, the second parameter will be\nrejected, \"_primary\" is not a valid URL parameter on a search\nrequest. This means that this feature has never worked (unless the ';'\nis escaped, but no one does that because our docs do not that, and there\nwas no indication from Elasticsearch that this did not work). This\ncommit changes the separator to '|'." } ], "files": [ { "diff": "@@ -126,7 +126,7 @@ private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable index\n Preference preferenceType = Preference.parse(preference);\n if (preferenceType == Preference.SHARDS) {\n // starts with _shards, so execute on specific ones\n- int index = preference.indexOf(';');\n+ int index = preference.indexOf('|');\n \n String shards;\n if (index == -1) {", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/OperationRouting.java", "status": "modified" }, { "diff": "@@ -385,7 +385,7 @@ public void testShardsAndPreferNodeRouting() {\n assertThat(shardIterators.iterator().next().shardId().id(), equalTo(1));\n \n //check node preference, first without preference to see they switch\n- shardIterators = operationRouting.searchShards(clusterState, new String[]{\"test\"}, null, \"_shards:0;\");\n+ shardIterators = operationRouting.searchShards(clusterState, new String[]{\"test\"}, null, \"_shards:0|\");\n assertThat(shardIterators.size(), equalTo(1));\n assertThat(shardIterators.iterator().next().shardId().id(), equalTo(0));\n String firstRoundNodeId = shardIterators.iterator().next().nextOrNull().currentNodeId();\n@@ -395,12 +395,12 @@ public void testShardsAndPreferNodeRouting() {\n assertThat(shardIterators.iterator().next().shardId().id(), equalTo(0));\n assertThat(shardIterators.iterator().next().nextOrNull().currentNodeId(), not(equalTo(firstRoundNodeId)));\n \n- shardIterators = operationRouting.searchShards(clusterState, new String[]{\"test\"}, null, \"_shards:0;_prefer_nodes:node1\");\n+ shardIterators = operationRouting.searchShards(clusterState, new String[]{\"test\"}, null, \"_shards:0|_prefer_nodes:node1\");\n assertThat(shardIterators.size(), equalTo(1));\n assertThat(shardIterators.iterator().next().shardId().id(), equalTo(0));\n assertThat(shardIterators.iterator().next().nextOrNull().currentNodeId(), equalTo(\"node1\"));\n \n- shardIterators = operationRouting.searchShards(clusterState, new String[]{\"test\"}, null, \"_shards:0;_prefer_nodes:node1,node2\");\n+ shardIterators = operationRouting.searchShards(clusterState, new String[]{\"test\"}, null, \"_shards:0|_prefer_nodes:node1,node2\");\n assertThat(shardIterators.size(), equalTo(1));\n Iterator<ShardIterator> iterator = shardIterators.iterator();\n final ShardIterator it = iterator.next();", "filename": "core/src/test/java/org/elasticsearch/cluster/structure/RoutingIteratorTests.java", "status": "modified" }, { "diff": "@@ -34,7 +34,7 @@ The `preference` is a query string parameter which can be set to:\n `_shards:2,3`:: \n \tRestricts the operation to the specified shards. 
(`2`\n \tand `3` in this case). This preference can be combined with other\n-\tpreferences but it has to appear first: `_shards:2,3;_primary`\n+\tpreferences but it has to appear first: `_shards:2,3|_primary`\n \n `_only_nodes`::\n Restricts the operation to nodes specified in node specification", "filename": "docs/reference/search/request/preference.asciidoc", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: master\n\n**JVM version**:\n\n```\njava version \"1.8.0_77\"\nJava(TM) SE Runtime Environment (build 1.8.0_77-b03)\nJava HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)\n```\n\n**OS version**: OS X 10.11.5\n\n**Description of the problem including expected versus actual behavior**:\nTrying to run `gradle build` but getting an error instead of build output, console output below. It looks like the `org.gradle.logging.progress` package was [added in 2.14](https://github.com/gradle/gradle/commit/a8be591089bbf9df86fcc58fc155b8e1329df524) and moved the `org.gradle.logging.ProgressLogger` class in the process.\n\n**Steps to reproduce**:\n1. Install gradle 2.14\n2. Checkout `master`\n3. Run `gradle build`\n\n**Provide logs (if relevant)**:\n\n``` sh\nelasticsearch [master] $ gradle build\n:buildSrc:clean\n:buildSrc:compileJava\n:buildSrc:compileGroovy\nstartup failed:\n/Users/spalger/dev/es/elasticsearch/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestProgressLogger.groovy: 28: unable to resolve class org.gradle.logging.ProgressLogger\n @ line 28, column 1.\n import org.gradle.logging.ProgressLogger\n ^\n\n/Users/spalger/dev/es/elasticsearch/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/TapLoggerOutputStream.groovy: 25: unable to resolve class org.gradle.logging.ProgressLogger\n @ line 25, column 1.\n import org.gradle.logging.ProgressLogger\n ^\n\n/Users/spalger/dev/es/elasticsearch/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantLoggerOutputStream.groovy: 23: unable to resolve class org.gradle.logging.ProgressLogger\n @ line 23, column 1.\n import org.gradle.logging.ProgressLogger\n ^\n\n3 errors\n\n:buildSrc:compileGroovy FAILED\n\nFAILURE: Build failed with an exception.\n\n* What went wrong:\nExecution failed for task ':compileGroovy'.\n> Compilation failed; see the compiler error output for details.\n\n* Try:\nRun with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.\n\nBUILD FAILED\n\nTotal time: 5.056 secs\n```\n", "comments": [ { "body": "Just confirmed this is not broken in gradle 2.13\n", "created_at": "2016-06-17T03:39:31Z" }, { "body": "I have the save question as you when building with gradle\n", "created_at": "2016-06-17T06:01:21Z" }, { "body": "Related: https://discuss.gradle.org/t/gradle-2-14-breaks-plugins-using-consolerenderer/18045 .\n", "created_at": "2016-06-17T07:31:14Z" }, { "body": "This is actually worse than just the class being moved. Apparently they considered the package org.gradle.logging to be \"internal\", and in 2.14 internal classes are finally not available to plugins (and this class move makes it truly internal). 
So until they add back ProgressLogger as part of the plugin API, all our nice logging would disappear...\n\nI'm going to add a check for now in BuildPlugin init that the gradle version is equal exactly to 2.13...\n", "created_at": "2016-06-18T17:19:08Z" }, { "body": "Ah and of course this is hard to check for 2.13 because the failure happens inside buildSrc before we even get to check the gradle version...\n", "created_at": "2016-06-18T17:21:20Z" }, { "body": "I opened #18955 as a stopgap so at least the error message is clear when trying to use 2.14\n", "created_at": "2016-06-18T17:29:37Z" }, { "body": "Due to gradle core developer Adrian Kelly\n\nhttps://discuss.gradle.org/t/bug-in-gradle-2-14-rc1-no-service-of-type-styledtextoutputfactory/17638/3\n\nthere is no big chance that ProgressLogger will be available (again). So my suggestion is to adapt to Gradle 2.14 (including upcoming Gradle 3) as soon as possible by aligning the Elasticsearch build scripts/plugins to the reduced capabilities in https://docs.gradle.org/current/userguide/logging.html\n", "created_at": "2016-06-24T13:28:15Z" }, { "body": "Any chance to reconsider https://github.com/elastic/elasticsearch/pull/13744 due to this issue here?\nI'm not sure if keeping a 50kb blob out of the repo is worth forcing potential contributors to either downgrade system gradle or start keeping around a bunch of gradle versions that happen to work with ES.\n", "created_at": "2016-07-13T21:48:32Z" }, { "body": "@jprante \n\n> there is no big chance that ProgressLogger will be available (again)\n\nThat is simply not true. I spoke with developers at gradle during Gradle Summit and they understand that progress logger is important. I expect it to come back, in some form, in the future:\nhttps://discuss.gradle.org/t/can-gradle-team-add-progresslogger-and-progressloggerfactory-to-public-api/18176/6\n\n@mfussenegger \n\n> Any chance to reconsider #13744 due to this issue here?\n\nThe size is not the issue there. It is that we do not want _binary_ blobs in our repo. I would be ok with a custom equivalent of the gradle wrapper that depended on java 8 and jjs to download the gradle binary, but I have not investigated the real feasibility of such a solution. In the meantime, you don't need to manage \"a bunch\" of versions, just two, 2.13 and whatever other version you are on. You can add your own gradle wrapper file then that just runs gradle 2.13 wherever it is on your system. I would even be ok with adding this to the gitignore so that you can update the repo without it looking like some outlier file.\n", "created_at": "2016-07-13T22:19:54Z" }, { "body": "> It is that we do not want binary blobs in our repo.\n\nAren't the zip files for bwc testing also binary files?\n\n> In the meantime, you don't need to manage \"a bunch\" of versions, just two, 2.13 \n\nI'm probably being a bit too pessimistic here and exaggerating.\nAnyway, it's not much of a problem for me personally. Just wanted to bring it up because it definetly _is_ a stepping stone.\n", "created_at": "2016-07-13T22:39:38Z" }, { "body": "I think it would be helpful to add the requirement for Gradle 2.13 to the docs for contribution and to make it more explicit that it is required in the main readme. Currently the readme says: \n\n> You’ll need to have a modern version of Gradle installed – 2.13 should do.\n\nWhich makes it sound like 2.13 or upwards is fine. 
\n\nThere's no mention of version on the contribution doc.\n\nIt's only a small issue and the error message makes it very clear what has gone wrong, but it could save the time of people like me, as I just downloaded the latest version of Gradle purely for the sake of contributing to the project.\n\nI'd be happy to make the change myself since I was after something simple first anyway. Is it precisely version 2.13 that works, or can slightly older Gradle versions work too? \n", "created_at": "2016-10-03T17:40:22Z" }, { "body": "@manterfield Please do make a PR! I agree we should update the readme/contrib doc wording given our current limitation.\n", "created_at": "2016-10-03T17:41:57Z" }, { "body": "And it must be 2.13 at this time.\n", "created_at": "2016-10-03T17:42:29Z" }, { "body": "Thanks @rjernst, made a PR (#20776) with doc updates in. \n", "created_at": "2016-10-06T10:20:58Z" }, { "body": "Closed by #22669. The docs will be updated once we have moved our builds to use Gradle 3.x and feel comfortable removing support for 2.13.", "created_at": "2017-01-19T11:20:27Z" }, { "body": "Sorry, this has to be reopened, IntelliJ is unhappy with the change.", "created_at": "2017-01-20T23:18:36Z" }, { "body": "Pushed a fix for IntelliJ.", "created_at": "2017-01-24T13:04:17Z" } ], "number": 18935, "title": "Gradle 2.14 compatibility?" }
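The workaround the build chose was pinning Gradle 2.13 exactly, but for completeness, a reflective lookup over both package names mentioned in the issue is the kind of shim some plugins reached for. This is purely illustrative — the second location is taken from the issue text, the class only resolves inside a Gradle runtime, and it is not what the Elasticsearch build did.

```java
/**
 * Sketch of a reflective fallback for the ProgressLogger package move described above.
 * The Elasticsearch build instead required Gradle 2.13, so treat this only as an illustration.
 */
public class ProgressLoggerLookup {

    static Class<?> findProgressLoggerClass() {
        // Try the pre-2.14 location first, then the post-2.14 location named in the issue.
        String[] candidates = {
            "org.gradle.logging.ProgressLogger",
            "org.gradle.logging.progress.ProgressLogger"
        };
        for (String name : candidates) {
            try {
                return Class.forName(name);
            } catch (ClassNotFoundException expectedOnSomeVersions) {
                // fall through and try the next candidate
            }
        }
        throw new IllegalStateException("ProgressLogger not visible on this Gradle version");
    }

    public static void main(String[] args) {
        // Outside a Gradle runtime neither class is on the classpath, so this will throw.
        System.out.println(findProgressLoggerClass());
    }
}
```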
{ "body": "This is as mentioned in comments for #18935 \n\nI've updated docs on the main readme and the contribution readme to state the precise version of gradle required to build the project.\n", "number": 20776, "review_comments": [], "title": "Updated docs to include precise version of gradle required" }
{ "commits": [ { "message": "Updated documentation to include precise version of gradle currently required for building" } ], "files": [ { "diff": "@@ -88,7 +88,8 @@ Contributing to the Elasticsearch codebase\n **Repository:** [https://github.com/elastic/elasticsearch](https://github.com/elastic/elasticsearch)\n \n Make sure you have [Gradle](http://gradle.org) installed, as\n-Elasticsearch uses it as its build system.\n+Elasticsearch uses it as its build system. Gradle must be version 2.13 _exactly_ in\n+order to build successfully.\n \n Eclipse users can automatically configure their IDE: `gradle eclipse`\n then `File: Import: Existing Projects into Workspace`. Select the", "filename": "CONTRIBUTING.md", "status": "modified" }, { "diff": "@@ -200,7 +200,7 @@ We have just covered a very small portion of what Elasticsearch is all about. Fo\n \n h3. Building from Source\n \n-Elasticsearch uses \"Gradle\":https://gradle.org for its build system. You'll need to have a modern version of Gradle installed - 2.13 should do.\n+Elasticsearch uses \"Gradle\":https://gradle.org for its build system. You'll need to have version 2.13 of Gradle installed.\n \n In order to create a distribution, simply run the @gradle assemble@ command in the cloned directory.\n ", "filename": "README.textile", "status": "modified" } ] }
{ "body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue. Note that whether you're filing a bug report or a\nfeature request, ensure that your submission is for an\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\nBug reports on an OS that we do not support or feature requests\nspecific to an OS that we do not support will be closed.\n-->\n\n<!--\nIf you are filing a bug report, please remove the below feature\nrequest block and provide responses for all of the below items.\n-->\n\n**Elasticsearch version**: 5.0.0-alpha5\n\n**Plugins installed**: [] x-pack\n\n**JVM version**:1.8.0_101\n\n**OS version**: Ubuntu 14.04\n\n**Description of the problem including expected versus actual behavior**:\n\n**Steps to reproduce**:\n\nIt is possible to use special characters, e.g. `*` in aliases which are not allowed in Index names. I think this is a bug as it can lead to very confusing states - below is an example:\n\n```\nPUT /foo\nPUT /bar\n\n# This will fail with invalid_index_name_exception\nPUT /foo*\n\nPUT /foo/bar/1\n{\n \"message\": \"Indexed into /foo/bar/1\"\n}\n\nPUT /bar/bar/1\n{\n \"message\":\"Idexed into /bar/bar/1\"\n}\n\n# This will fail\u0001 with invalid_index_name_exception\nPUT /bar*/bar/2\n{\n \"message\":\"Indexed into /bar*/bar/2\"\n}\n\nPOST /_aliases\n{\n \"actions\":[\n {\"add\": { \"index\": \"foo\", \"alias\":\"bar*\" } }\n ]\n}\n\n# Now this will be successful - confusing!\nPUT /bar*/bar/2\n{\n \"message\":\"Indexed into /bar*/bar/2\"\n}\n\n# A search on indexes /bar* responds with results both from the index `bar` and from `foo` as a result of the alias.\nGET /bar*/_search\n{\n \"took\": 3,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 10,\n \"successful\": 10,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 3,\n \"max_score\": 1,\n \"hits\": [\n {\n \"_index\": \"foo\",\n \"_type\": \"bar\",\n \"_id\": \"2\",\n \"_score\": 1,\n \"_source\": {\n \"message\": \"Indexed into /bar*/bar/2\"\n }\n },\n {\n \"_index\": \"bar\",\n \"_type\": \"bar\",\n \"_id\": \"1\",\n \"_score\": 1,\n \"_source\": {\n \"message\": \"Idexed into /bar/bar/1\"\n }\n },\n {\n \"_index\": \"foo\",\n \"_type\": \"bar\",\n \"_id\": \"1\",\n \"_score\": 1,\n \"_source\": {\n \"message\": \"Indexed into /foo/bar/1\"\n }\n }\n ]\n }\n}\n```\n", "comments": [], "number": 20748, "title": "Special characters allowed for aliases which are not allowed in index names" }
{ "body": "Applied (almost) the same rules we use to validate index names\nto new alias names. The only rule not applies it \"must be lowercase\".\nWe have tests that don't follow that rule and I except there are lots\nof examples of camelCase alias names in the wild. We can add that\nvalidation but I'm not sure it is worth it.\n\nCloses #20748\n", "number": 20771, "review_comments": [ { "body": "do we have to use `Version#fromString`? Can't we reference some version constant?\n", "created_at": "2016-10-24T07:53:56Z" }, { "body": "shall we check that we get some results back? Just making sure that we actually search against the alias. If it doesn't exist we could get back empty results without any error...\n", "created_at": "2016-10-24T07:55:30Z" }, { "body": "add a comment about the fact that we are not requiring aliases to be lowercase? Maybe open an issue to remove that too in tue future? It's really bad to apply different rules for aliases and indices.\n", "created_at": "2016-10-24T07:56:22Z" }, { "body": "so we can assert _that_ we can still use it\n", "created_at": "2016-10-24T07:56:58Z" }, { "body": "It asserts the hit count now. Is that enough?\n", "created_at": "2016-10-25T12:36:25Z" }, { "body": "Did you mean to push some commit? I don't see any change here\n", "created_at": "2016-10-25T13:39:50Z" }, { "body": "I add a comment and can open a followup discuss issue, sure!\n", "created_at": "2016-10-25T17:37:44Z" }, { "body": "The version isn't released. I'll push a patch to make it more clear what is up.\n", "created_at": "2016-10-25T19:01:32Z" }, { "body": "It already asserted it.\n", "created_at": "2016-10-28T21:11:05Z" }, { "body": "you are asserting that the two hit counts returned are the same. Couldn't they be both `0` in theory?\n", "created_at": "2016-11-01T12:58:18Z" } ], "title": "Validate alias names the same as index names" }
{ "commits": [ { "message": "Validate alias names the same as index names\n\nApplied (almost) the same rules we use to validate index names\nto new alias names. The only rule not applies it \"must be lowercase\".\nWe have tests that don't follow that rule and I except there are lots\nof examples of camelCase alias names in the wild. We can add that\nvalidation but I'm not sure it is worth it.\n\nCloses #20748\n\nAdds an alias that starts with `#` to the BWC index and validates\nthat you can read from it and remove it. Starting with `#` isn't\nallowed after 5.1.0/6.0.0 so we don't create the alias or check it\nafter those versions." } ], "files": [ { "diff": "@@ -33,6 +33,7 @@ dependency-reduced-pom.xml\n # testing stuff\n **/.local*\n .vagrant/\n+/logs/\n \n # osx stuff\n .DS_Store", "filename": ".gitignore", "status": "modified" }, { "diff": "@@ -99,10 +99,11 @@ public void validateAlias(String alias, String index, @Nullable String indexRout\n }\n }\n \n- private void validateAliasStandalone(String alias, String indexRouting) {\n+ void validateAliasStandalone(String alias, String indexRouting) {\n if (!Strings.hasText(alias)) {\n throw new IllegalArgumentException(\"alias name is required\");\n }\n+ MetaDataCreateIndexService.validateIndexOrAliasName(alias, InvalidAliasNameException::new);\n if (indexRouting != null && indexRouting.indexOf(',') != -1) {\n throw new IllegalArgumentException(\"alias [\" + alias + \"] has several index routing values associated with it\");\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/AliasValidator.java", "status": "modified" }, { "diff": "@@ -88,6 +88,7 @@\n import java.util.Map;\n import java.util.Set;\n import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.function.BiFunction;\n import java.util.function.Predicate;\n \n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS;\n@@ -127,24 +128,37 @@ public MetaDataCreateIndexService(Settings settings, ClusterService clusterServi\n this.activeShardsObserver = new ActiveShardsObserver(settings, clusterService, threadPool);\n }\n \n+ /**\n+ * Validate the name for an index against some static rules and a cluster state.\n+ */\n public static void validateIndexName(String index, ClusterState state) {\n+ validateIndexOrAliasName(index, InvalidIndexNameException::new);\n+ if (!index.toLowerCase(Locale.ROOT).equals(index)) {\n+ throw new InvalidIndexNameException(index, \"must be lowercase\");\n+ }\n if (state.routingTable().hasIndex(index)) {\n throw new IndexAlreadyExistsException(state.routingTable().index(index).getIndex());\n }\n if (state.metaData().hasIndex(index)) {\n throw new IndexAlreadyExistsException(state.metaData().index(index).getIndex());\n }\n+ if (state.metaData().hasAlias(index)) {\n+ throw new InvalidIndexNameException(index, \"already exists as alias\");\n+ }\n+ }\n+\n+ /**\n+ * Validate the name for an index or alias against some static rules.\n+ */\n+ public static void validateIndexOrAliasName(String index, BiFunction<String, String, ? 
extends RuntimeException> exceptionCtor) {\n if (!Strings.validFileName(index)) {\n- throw new InvalidIndexNameException(index, \"must not contain the following characters \" + Strings.INVALID_FILENAME_CHARS);\n+ throw exceptionCtor.apply(index, \"must not contain the following characters \" + Strings.INVALID_FILENAME_CHARS);\n }\n if (index.contains(\"#\")) {\n- throw new InvalidIndexNameException(index, \"must not contain '#'\");\n+ throw exceptionCtor.apply(index, \"must not contain '#'\");\n }\n if (index.charAt(0) == '_' || index.charAt(0) == '-' || index.charAt(0) == '+') {\n- throw new InvalidIndexNameException(index, \"must not start with '_', '-', or '+'\");\n- }\n- if (!index.toLowerCase(Locale.ROOT).equals(index)) {\n- throw new InvalidIndexNameException(index, \"must be lowercase\");\n+ throw exceptionCtor.apply(index, \"must not start with '_', '-', or '+'\");\n }\n int byteCount = 0;\n try {\n@@ -154,15 +168,10 @@ public static void validateIndexName(String index, ClusterState state) {\n throw new ElasticsearchException(\"Unable to determine length of index name\", e);\n }\n if (byteCount > MAX_INDEX_NAME_BYTES) {\n- throw new InvalidIndexNameException(index,\n- \"index name is too long, (\" + byteCount +\n- \" > \" + MAX_INDEX_NAME_BYTES + \")\");\n- }\n- if (state.metaData().hasAlias(index)) {\n- throw new InvalidIndexNameException(index, \"already exists as alias\");\n+ throw exceptionCtor.apply(index, \"index name is too long, (\" + byteCount + \" > \" + MAX_INDEX_NAME_BYTES + \")\");\n }\n if (index.equals(\".\") || index.equals(\"..\")) {\n- throw new InvalidIndexNameException(index, \"must not be '.' or '..'\");\n+ throw exceptionCtor.apply(index, \"must not be '.' or '..'\");\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java", "status": "modified" }, { "diff": "@@ -33,6 +33,10 @@ public InvalidAliasNameException(Index index, String name, String desc) {\n setIndex(index);\n }\n \n+ public InvalidAliasNameException(String name, String description) {\n+ super(\"Invalid alias name [{}]: {}\", name, description);\n+ }\n+\n public InvalidAliasNameException(StreamInput in) throws IOException{\n super(in);\n }", "filename": "core/src/main/java/org/elasticsearch/indices/InvalidAliasNameException.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.util.LuceneTestCase;\n import org.apache.lucene.util.TestUtil;\n import org.elasticsearch.Version;\n+import org.elasticsearch.VersionTests;\n import org.elasticsearch.action.admin.indices.get.GetIndexResponse;\n import org.elasticsearch.action.admin.indices.recovery.RecoveryResponse;\n import org.elasticsearch.action.admin.indices.segments.IndexSegments;\n@@ -84,6 +85,7 @@\n import static org.elasticsearch.test.OldIndexUtils.assertUpgradeWorks;\n import static org.elasticsearch.test.OldIndexUtils.getIndexDir;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n \n // needs at least 2 nodes since it bumps replicas to 1\n@@ -244,6 +246,7 @@ void assertOldIndexWorks(String index) throws Exception {\n assertUpgradeWorks(client(), indexName, version);\n assertDeleteByQueryWorked(indexName, version);\n assertPositionIncrementGapDefaults(indexName, version);\n+ assertAliasWithBadName(indexName, version);\n unloadIndex(indexName);\n }\n \n@@ -429,6 +432,31 @@ void 
assertPositionIncrementGapDefaults(String indexName, Version version) throw\n }\n }\n \n+ private static final Version VERSION_5_1_0_UNRELEASED = Version.fromString(\"5.1.0\");\n+\n+ public void testUnreleasedVersion() {\n+ VersionTests.assertUnknownVersion(VERSION_5_1_0_UNRELEASED);\n+ }\n+\n+ /**\n+ * Search on an alias that contains illegal characters that would prevent it from being created after 5.1.0. It should still be\n+ * search-able though.\n+ */\n+ void assertAliasWithBadName(String indexName, Version version) throws Exception {\n+ if (version.onOrAfter(VERSION_5_1_0_UNRELEASED)) {\n+ return;\n+ }\n+ // We can read from the alias just like we can read from the index.\n+ String aliasName = \"#\" + indexName;\n+ long totalDocs = client().prepareSearch(indexName).setSize(0).get().getHits().totalHits();\n+ assertHitCount(client().prepareSearch(aliasName).setSize(0).get(), totalDocs);\n+ assertThat(totalDocs, greaterThanOrEqualTo(2000L));\n+\n+ // We can remove the alias.\n+ assertAcked(client().admin().indices().prepareAliases().removeAlias(indexName, aliasName).get());\n+ assertFalse(client().admin().indices().prepareAliasesExist(aliasName).get().exists());\n+ }\n+\n private Path getNodeDir(String indexFile) throws IOException {\n Path unzipDir = createTempDir();\n Path unzipDataDir = unzipDir.resolve(\"data\");", "filename": "core/src/test/java/org/elasticsearch/bwcompat/OldIndexBackwardsCompatibilityIT.java", "status": "modified" }, { "diff": "@@ -0,0 +1,47 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster.metadata;\n+\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.indices.InvalidAliasNameException;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import static org.hamcrest.Matchers.startsWith;\n+\n+public class AliasValidatorTests extends ESTestCase {\n+ public void testValidatesAliasNames() {\n+ AliasValidator validator = new AliasValidator(Settings.EMPTY);\n+ Exception e = expectThrows(InvalidAliasNameException.class, () -> validator.validateAliasStandalone(\".\", null));\n+ assertEquals(\"Invalid alias name [.]: must not be '.' or '..'\", e.getMessage());\n+ e = expectThrows(InvalidAliasNameException.class, () -> validator.validateAliasStandalone(\"..\", null));\n+ assertEquals(\"Invalid alias name [..]: must not be '.' 
or '..'\", e.getMessage());\n+ e = expectThrows(InvalidAliasNameException.class, () -> validator.validateAliasStandalone(\"_cat\", null));\n+ assertEquals(\"Invalid alias name [_cat]: must not start with '_', '-', or '+'\", e.getMessage());\n+ e = expectThrows(InvalidAliasNameException.class, () -> validator.validateAliasStandalone(\"-cat\", null));\n+ assertEquals(\"Invalid alias name [-cat]: must not start with '_', '-', or '+'\", e.getMessage());\n+ e = expectThrows(InvalidAliasNameException.class, () -> validator.validateAliasStandalone(\"+cat\", null));\n+ assertEquals(\"Invalid alias name [+cat]: must not start with '_', '-', or '+'\", e.getMessage());\n+ e = expectThrows(InvalidAliasNameException.class, () -> validator.validateAliasStandalone(\"c*t\", null));\n+ assertThat(e.getMessage(), startsWith(\"Invalid alias name [c*t]: must not contain the following characters \"));\n+\n+ // Doesn't throw an exception because we allow upper case alias names\n+ validator.validateAliasStandalone(\"CAT\", null);\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/cluster/metadata/AliasValidatorTests.java", "status": "added" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.0.0-beta1.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.0.0-beta2.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.0.0-rc1.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.0.0.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.0.1.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.0.2.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.1.0.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.1.1.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.1.2.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.2.0.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.2.1.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.2.2.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.3.0.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.3.1.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.3.2.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.3.3.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.3.4.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.3.5.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.4.0.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-2.4.1.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/index-5.0.0.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/repo-2.0.0-beta1.zip", "status": "modified" }, { "diff": "", "filename": 
"core/src/test/resources/indices/bwc/repo-2.0.0-beta2.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/indices/bwc/repo-2.0.0-rc1.zip", "status": "modified" } ] }
{ "body": "Updating a stored script doesn't invalidate terms aggs that are using this script. As a result these aggs can produce stale data. To reproduce, start a single elasticsearch node (5.0.0-beta1 or above) with the following settings: \n\n```\nindices.queries.cache.all_segments: true\nscript.engine.painless.stored: true\n```\n\nthen run the following script:\n\n```\nDELETE test\n\nPUT _scripts/painless/test\n{\n \"script\": \"1\"\n}\n\nPUT test/doc/1?refresh\n{\n \"foo\": 10\n}\n\nGET test/doc/_search\n{\n \"size\": 0,\n \"aggs\": {\n \"test\": {\n \"terms\": {\n \"script\": {\n \"id\": \"test\",\n \"lang\": \"painless\"\n }\n }\n }\n }\n}\n\nPUT _scripts/painless/test\n{\n \"script\": \"2\"\n}\n\nGET test/doc/_search\n{\n \"size\": 0,\n \"aggs\": {\n \"test\": {\n \"terms\": {\n \"script\": {\n \"id\": \"test\",\n \"lang\": \"painless\"\n }\n }\n }\n }\n}\n```\n\nthe expected result of the last command:\n\n```\n...\n \"buckets\": [\n {\n \"key\": \"2\",\n \"doc_count\": 1\n }\n ]\n...\n```\n\nthe actual result of the last command:\n\n```\n...\n \"buckets\": [\n {\n \"key\": \"1\",\n \"doc_count\": 1\n }\n ]\n...\n```\n\nYou can get the expected result if you run `POST test/_cache/clear` after before running the aggregation. \n\nConsidering that we cannot always guarantee that scripts don't depend on external factors such as time or external data sources, I am not sure if we should cache results of aggregations that are using scripts by default. \n", "comments": [ { "body": "Agreed, we shouldn't cache this at all. There are two caches we need to take into account here, the request cache (which cached the search request in this case) and the query cache (if the `script` query is used).\n\nWe could in `IndicesService#canCache(...)` introspect the aggregation and query builder and decide to not cache if we encounter a file (the file can still be updated after node startup) or stored script.\n\nAlternatively we could also during query builder rewrite resolve file and stored scripts and replace them with inline scripts. I think this is nicer as it fixes this problem for both request and query cache. Also ScriptService can then become simpler as it no longer has to worry where the content of the script comes from.\n", "created_at": "2016-09-24T08:13:41Z" }, { "body": "> Alternatively we could also during query builder rewrite resolve file and stored scripts and replace them with inline scripts.\n\nYes, but caching of inline scripts only works for scripts that are pure functions, which doesn't necessary true even for painless scripts that have access to `java.time.Instant.now()` for example. \n", "created_at": "2016-09-24T10:13:36Z" }, { "body": "Right, I didn't think of that. I think we have then no other alternative then completely opting out of caching if scripts or being used in a search request. \n", "created_at": "2016-09-24T10:44:24Z" }, { "body": "_technically_ we can detect that sort of thing with painless.... Should we\ndisable caching for any request with a script and then slowly drag things\nback? Like expressions can be cached, painless without `now()`. Maybe we\ncan't ever cache groovy.\n\nOr do we tell users to opt requests out of the cache manually if they have\na script?\n\nOn Sat, Sep 24, 2016 at 6:44 AM, Martijn van Groningen <\nnotifications@github.com> wrote:\n\n> Right, I didn't think of that. 
I think we have then no other alternative\n> then completely opting out of caching if scripts or being used in a search\n> request.\n> \n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> https://github.com/elastic/elasticsearch/issues/20645#issuecomment-249358333,\n> or mute the thread\n> https://github.com/notifications/unsubscribe-auth/AANLog-dx25jvTPVpje3WodSBENWbiNWks5qtP8MgaJpZM4KFPE_\n> .\n", "created_at": "2016-09-24T18:22:13Z" }, { "body": "The simplest solution might be to disable caching queries with scripts by default, but give users an ability to opt-in for caching if they know that it is acceptable for their use-case. A more sophisticated solution would be to ask script engine service to return some sort of a flag for a script that would indicate if the script is a pure function or not and then we can decide if we want to cache it or not based on that.\n", "created_at": "2016-09-24T18:55:18Z" }, { "body": "I think for now we should prevent caching if scripts are involved here. We even should not do it if users ask for it explicitly. Down the road we should require our scripts to be functions. There should not be access to APIs that are not idempotent. We can still allow access to `NOW` since we detect that or at least can detect it. Once we can ensure that we can allow caching for certain scripts (painless only?)\n", "created_at": "2016-09-26T08:47:45Z" }, { "body": "> Once we can ensure that we can allow caching for certain scripts (painless only?)\n\nI think allowing script engine to identify if a script is a pure function or not is not going to be much more complex than implementing painless-only solution, but it will also benefit plugins like PMML which always generates and executes pure function scripts. \n", "created_at": "2016-09-26T12:55:55Z" }, { "body": "I don't understand why PMML will benefit from a generic solution? \nWe need a solution for this for 5.0 and I am leaning towards just disabling cache if any script is involved. The only safe exception is expressions. I am curious if @jdconrad or @rjernst has ideas here.\n", "created_at": "2016-09-29T07:42:35Z" }, { "body": "I agree that the most sensible option for 5.0 is to disable cache if any scripts are involved. My comment was about your proposal to re-enable caching for some scripts in the future. My understanding was that you are proposing to enable this only for painless. My proposal was to allow any script engine to let the caching system know if scripts are pure functions or not. I envision it as a method of a script engine that in case of painless will return `false` if access to `now` is detected and `true` otherwise. In case of script engines like PMML it will also return `true` because all PMML scripts are pure functions by definition.\n", "created_at": "2016-09-29T07:51:46Z" }, { "body": "I think we have different understandings when it comes to PMML I don't see why it needs to be it's own script engine. Can't it just produce scripts using painless iff it really needs a script? \n", "created_at": "2016-09-29T07:55:51Z" }, { "body": "Yes, it doesn't have to be its own script engine (if we could add some ML-specific math functions to painless and if we are doing PMML at all). I am just proposing a clean generic solution that will work not only for painless but also for any script engine that can guarantee that its scripts are pure functions as in the case of the current implementation of PMML. 
\n", "created_at": "2016-09-29T08:03:08Z" }, { "body": "So, just to make sure I understand, there are two caching issues here -- one is related to queries being cached with scripts that are potentially updated in the background (stored/file), the other is related to non-idempotent whitelisted methods. For the first we need to check to make sure the stored/file script is the same as the one in the cached request, I don't know who the responsibility should belong to for invalidation, but I guess it would be a callback to a listener coming out of the ScriptService every time a new stored/file script is cached. For the second one, I need to go through the whitelist for Painless and make sure that it is only the now methods of Instant that aren't idempotent. Currently, 'now' is already broken since the script will be calling it at slightly different times for each document. @rjernst can probably better explain a possible solution at least for this last part.\n\nEdit: For 5.0, I completely agree with turning off support for caching involving scripts. I think other solutions are longer term and need to be fully thought through.\n\nSecond Edit: @imotov pointed out that I don't mean idempotent, but rather a pure function.\n", "created_at": "2016-09-29T17:32:02Z" }, { "body": "your understanding is correct @jdconrad \n", "created_at": "2016-09-29T18:16:11Z" }, { "body": "A thought I had while reading along (for the easiest problems of those above) - should we remove support for `now()` from painless and only support it as a script parameter? seems like it will give people what they want and be consistent (across shards?)/cacheable?\n", "created_at": "2016-10-03T12:22:28Z" }, { "body": "@bleskes This is the correct way to do this. We can even leave the method in the whitelist and just specialize on that specific method as it seems more intuitive than pulling 'now' out of a params map.\n", "created_at": "2016-10-03T16:39:17Z" }, { "body": "So, like, for painless we detect that someone wants `now` and if they do then we stick it in the params map? That sounds good, though we'll need to do some work to make sure that this properly dodges the request cache.\n", "created_at": "2016-10-03T16:42:31Z" }, { "body": "I actually like the idea more of having a special variable, instead of calling `now()`, as it makes it clear the value will not change within the request (for example, using in two different clauses within a script).\n\nThe only caveat is this will require some distributed changes, as we need to create this timestamp on coordinating node, and send it along with shard request, and then allow scripting engines to pull it in when a script uses `now`.\n", "created_at": "2016-10-03T16:44:19Z" }, { "body": "One upside of a parameter is that it can be set on the coordinating node which means now is the same across shard. Not sure if we can do that with painless magic. Another upside is that it will be available in expressions as well.\n\n> On 3 Oct 2016, at 6:42 PM, Nik Everett notifications@github.com wrote:\n> \n> So, like, for painless we detect that someone wants now and if they do then we stick it in the params map? That sounds good, though we'll need to do some work to make sure that this properly dodges the request cache.\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "created_at": "2016-10-03T16:47:36Z" }, { "body": "@rjernst Fair enough about the special variable. 
Do you care if it's a variable or a method?\n", "created_at": "2016-10-03T17:17:54Z" }, { "body": "I think a variable implies more strongly that the value will not change between uses, whereas a method, especially one called `now()`, might be confusing. Perhaps we could at least change to a variable to start, with painless setting it equal to a `now()` call during parameter setup (so it would not be per document, but once when binding the script for a shard), and then follow up in a future change to make this value consistent across shards.\n", "created_at": "2016-10-03T17:29:08Z" }, { "body": "Yeah, sounds good.\n", "created_at": "2016-10-03T17:30:28Z" }, { "body": "Another question I had is the following - it seems to me that we should resolve stored scripts on the coordinating node as well. If that's the case, and we take care of now(), is there anything else that can render a script not cachable? I think we're good there?\n", "created_at": "2016-10-03T17:54:07Z" }, { "body": "> it seems to me that we should resolve stored scripts on the coordinating node as well\n\nNot sure about this; it would mean passing along the source of the stored script to shards, which effectively becomes an inline script. I think that would be hard to manage, given the separate security settings for disabling inline scripts vs stored scripts, and there would be no way for shards to understand the difference (that could not also be used by someone trying to circumvent the disabling of inline scripts).\n\n> is there anything else that can render a script not cachable?\n\nAnything that is not idempotent, but I don't know of anything else at this time.\n", "created_at": "2016-10-03T17:56:32Z" }, { "body": "I think the security aspect is the same, regardless if it's done on shard or the coordinating node?\n\nRe transfer of the whole script - I agree it's a shame if the script is large. On the other hand it also provides consistency. We have no guarantee that all shards will have the same script if it is concurrently changed. Seems like all in all it is the easiest to do? \n", "created_at": "2016-10-03T18:14:47Z" }, { "body": "> I think the security aspect is the same, regardless if it's done on shard or the coordinating node?\n\nI don't think so? 
Inline scripting could be disabled, with a block on putting stored scripts (for example, putting them into cluster state while \"closed off\" from the outside world, and then restarting with a block), so you effectively have read-only stored scripts.\n", "created_at": "2016-10-03T18:18:10Z" }, { "body": "I spoke with @rjernst and it seems that the concern with enforcing the script setting on the coordinating node is that someone will be able to bypass it by making/faking a shard-level transport request directly to the data nodes. While that would indeed work, it falls into a much bigger list of evil things people could do if they use direct transport messages - these range from deleting data to replacing any stored script with another. I don't think it should influence our decision here. \n", "created_at": "2016-10-03T19:12:45Z" }, { "body": "Security absolutely should influence our decision here. Just because something else is already insecure doesn't mean we should feel free to add more insecurities. We should be working toward making the other things more secure.\n", "created_at": "2016-10-03T19:21:23Z" }, { "body": "Spoke with @bleskes just now. I better understand his arguments, but need some more time to process. We'll hop on zoom tomorrow to discuss a solution.\n", "created_at": "2016-10-03T19:48:16Z" }, { "body": "Whatever we decide here, I think this must be fixed for 5.0 GA, so if anybody is working on this please assign the issue. I will pick it up tomorrow if nobody has done so.\n", "created_at": "2016-10-04T07:36:49Z" }, { "body": "> I think this must be fixed for 5.0 GA\n\n++ . I think we are all in alignment on the fix for 5.0 (disable caching for scripts). The discussion is about future work.\n", "created_at": "2016-10-04T07:40:03Z" } ], "number": 20645, "title": "Caching of terms aggregation with stored script can cause stale results" }
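The issue thread above converges on two ideas: disable the request cache whenever a script is involved, and longer term let each script engine report whether a script is a pure function so caching can be re-enabled selectively. As a rough illustration of that second idea, here is a minimal Java sketch; the interface and class names are invented for illustration and are not part of the Elasticsearch API.

```java
import java.util.List;

// Hypothetical sketch of the "pure function" flag discussed above; names are
// invented and this is not the actual Elasticsearch API.
interface CompiledScriptInfo {
    /**
     * True only if the script is a pure function of its inputs, i.e. it does not
     * read now(), random values, or any other non-deterministic state.
     */
    boolean isPureFunction();
}

final class RequestCachePolicy {
    /** A request may be cached only if every script it uses is a pure function. */
    boolean canCache(List<CompiledScriptInfo> scriptsInRequest) {
        return scriptsInRequest.stream().allMatch(CompiledScriptInfo::isPureFunction);
    }
}
```

A script engine such as painless could return false as soon as it detects access to `now`, while an engine whose scripts are pure by construction could always return true.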
{ "body": "We already have some mechanism in place that prevents requests from being cached in the request cache if they use now(). Yet we don't have any streamlined way to assert that we are not accessing it later. We found some issues in #20645 that relate to stored scripts or scripts that are not pure functions. The immediate fix is to disable caching for these scripts. Unfortunately the script access wasn't streamlined in aggregations nor in query parsing / creation. This change adds a contained API that allows up to make cachabiltily decisions in a single place and causes requests to fail if they access scripts or `now()` after we cache the request.\n\nRelates to #20645\n", "number": 20750, "review_comments": [ { "body": "This doesn't actually execute right? It is the binding that happens immediately (vs lazy below is binding to params later).\n", "created_at": "2016-10-05T19:40:59Z" }, { "body": "yeah gotta fix this\n", "created_at": "2016-10-05T20:37:52Z" }, { "body": "Can you add some minimal javadocs?\n", "created_at": "2016-10-05T22:43:16Z" }, { "body": "since it's a test case, should it use the junit assertions instead of the assert keyword?\n", "created_at": "2016-10-05T22:58:32Z" }, { "body": "fixed\n", "created_at": "2016-10-06T08:06:02Z" }, { "body": "fixed\n", "created_at": "2016-10-06T08:06:08Z" }, { "body": "fixed\n", "created_at": "2016-10-06T08:06:12Z" } ], "title": "Prevent requests that use scripts or now() from being cached" }
{ "commits": [ { "message": "Add a #markAsNotCachable() method to context to mark requests as not cachable" }, { "message": "Tests to make sure markAsNotCacheable() works when scripts are used" }, { "message": "Merge branch 'master' into dont_cache_scripts" }, { "message": "add extra safety when accessing scripts or now and reqeusts are cached" }, { "message": "move extended bounds parse and validate to date hitso factory" }, { "message": "move extended bounds rounding to date histo agg builder" }, { "message": "fix check style errors" }, { "message": "Make getter for bulk shard requests items visible (#20743)" }, { "message": "Clarify wording for the strict REST params message\n\nThis commit changes the strict REST parameters message to say that\r\nunconsumed parameters are unrecognized rather than unused. Additionally,\r\nthe test is beefed up to include two unused parameters.\r\n\r\nRelates #20745" }, { "message": "Add did you mean to strict REST params\n\nThis commit adds a did you mean feature to the strict REST params error\r\nmessage. This works by comparing any unconsumed parameters to all of the\r\nconsumer parameters, comparing the Levenstein distance between those\r\nparameters, and taking any consumed parameters that are close to an\r\nunconsumed parameter as candiates for the did you mean.\r\n\r\n* Fix pluralization in strict REST params message\r\n\r\nThis commit fixes the pluralization in the strict REST parameters error\r\nmessage so that the word \"parameter\" is not unconditionally written as\r\n\"parameters\" even when there is only one unrecognized parameter.\r\n\r\n* Strength strict REST params did you mean test\r\n\r\nThis commit adds an unconsumed parameter that is too far from every\r\nconsumed parameter to have any candidate suggestions.\r\n\r\nRelates #20747" }, { "message": "ingest: Upgrade geoip2 dependency\n\nCloses #20563" }, { "message": "Fix date_range aggregation to not cache if now is used\n\nBefore this change the processing of the ranges in the date range (and\nother range type) aggregations was done when the Aggregator was created.\nThis meant that the SearchContext did not know that now had been used in\na range until after the decision to cache was made.\n\nThis change moves the processing of the ranges to the aggregation builders\nso that the search context is made aware that now has been used before\nit decides if the request should be cached" }, { "message": "fix random score function builder to deal with empty seeds" }, { "message": "use a private rewrite context to prevent exposing isCachable" }, { "message": "Merge branch 'master' into dont_cache_scripts" }, { "message": "clone the entire serach context for rewriting" }, { "message": "cleanup freeze methods and move them down to QueryShardContext" }, { "message": "Fix percolator queries to not be cacheable" }, { "message": "PercolateQuery is never cacheable" }, { "message": "add percolate with script query test" }, { "message": "Merge branch 'master' into dont_cache_scripts" }, { "message": "move test to a single node test" }, { "message": "Review comments" }, { "message": "Merge branch 'master' into dont_cache_scripts" } ], "files": [ { "diff": "@@ -89,6 +89,7 @@\n import java.util.concurrent.ScheduledFuture;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.function.LongSupplier;\n \n import static java.util.Collections.emptyMap;\n import static java.util.Collections.unmodifiableMap;\n@@ -448,13 +449,13 @@ public IndexSettings 
getIndexSettings() {\n * Creates a new QueryShardContext. The context has not types set yet, if types are required set them via\n * {@link QueryShardContext#setTypes(String...)}\n */\n- public QueryShardContext newQueryShardContext(IndexReader indexReader) {\n+ public QueryShardContext newQueryShardContext(IndexReader indexReader, LongSupplier nowInMillis) {\n return new QueryShardContext(\n indexSettings, indexCache.bitsetFilterCache(), indexFieldData, mapperService(),\n similarityService(), nodeServicesProvider.getScriptService(), nodeServicesProvider.getIndicesQueriesRegistry(),\n nodeServicesProvider.getClient(), indexReader,\n- nodeServicesProvider.getClusterService().state()\n- );\n+ nodeServicesProvider.getClusterService().state(),\n+ nowInMillis);\n }\n \n /**\n@@ -463,7 +464,7 @@ public QueryShardContext newQueryShardContext(IndexReader indexReader) {\n * used for rewriting since it does not know about the current {@link IndexReader}.\n */\n public QueryShardContext newQueryShardContext() {\n- return newQueryShardContext(null);\n+ return newQueryShardContext(null, threadPool::estimatedTimeInMillis);\n }\n \n public ThreadPool getThreadPool() {", "filename": "core/src/main/java/org/elasticsearch/index/IndexService.java", "status": "modified" }, { "diff": "@@ -366,7 +366,7 @@ private static Callable<Long> now() {\n return () -> {\n final SearchContext context = SearchContext.current();\n return context != null\n- ? context.nowInMillis()\n+ ? context.getQueryShardContext().nowInMillis()\n : System.currentTimeMillis();\n };\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java", "status": "modified" }, { "diff": "@@ -480,7 +480,7 @@ private static Callable<Long> now() {\n public Long call() {\n final SearchContext context = SearchContext.current();\n return context != null\n- ? context.nowInMillis()\n+ ? 
context.getQueryShardContext().nowInMillis()\n : System.currentTimeMillis();\n }\n };", "filename": "core/src/main/java/org/elasticsearch/index/mapper/LegacyDateFieldMapper.java", "status": "modified" }, { "diff": "@@ -143,7 +143,7 @@ public Object valueForSearch(Object value) {\n long now;\n SearchContext searchContext = SearchContext.current();\n if (searchContext != null) {\n- now = searchContext.nowInMillis();\n+ now = searchContext.getQueryShardContext().nowInMillis();\n } else {\n now = System.currentTimeMillis();\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/TTLFieldMapper.java", "status": "modified" }, { "diff": "@@ -575,8 +575,8 @@ private void setupInnerHitsContext(QueryShardContext context, InnerHitsContext.B\n }\n if (scriptFields != null) {\n for (ScriptField field : scriptFields) {\n- SearchScript searchScript = innerHitsContext.scriptService().search(innerHitsContext.lookup(), field.script(),\n- ScriptContext.Standard.SEARCH, Collections.emptyMap());\n+ SearchScript searchScript = innerHitsContext.getQueryShardContext().getSearchScript(field.script(),\n+ ScriptContext.Standard.SEARCH, Collections.emptyMap());\n innerHitsContext.scriptFields().add(new org.elasticsearch.search.fetch.subphase.ScriptFieldsContext.ScriptField(\n field.fieldName(), searchScript, field.ignoreFailure()));\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/InnerHitBuilder.java", "status": "modified" }, { "diff": "@@ -19,17 +19,24 @@\n package org.elasticsearch.index.query;\n \n import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.util.SetOnce;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.ParseFieldMatcherSupplier;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.indices.query.IndicesQueriesRegistry;\n+import org.elasticsearch.script.ExecutableScript;\n+import org.elasticsearch.script.Script;\n+import org.elasticsearch.script.ScriptContext;\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.script.ScriptSettings;\n \n+import java.util.Collections;\n+\n /**\n * Context object used to rewrite {@link QueryBuilder} instances into simplified version.\n */\n@@ -69,13 +76,6 @@ public final IndexSettings getIndexSettings() {\n return indexSettings;\n }\n \n- /**\n- * Returns a script service to fetch scripts.\n- */\n- public final ScriptService getScriptService() {\n- return scriptService;\n- }\n-\n /**\n * Return the MapperService.\n */\n@@ -116,4 +116,12 @@ public QueryParseContext newParseContextWithLegacyScriptLanguage(XContentParser\n String defaultScriptLanguage = ScriptSettings.getLegacyDefaultLang(indexSettings.getNodeSettings());\n return new QueryParseContext(defaultScriptLanguage, indicesQueriesRegistry, parser, indexSettings.getParseFieldMatcher());\n }\n+\n+ public BytesReference getTemplateBytes(Script template) {\n+ ExecutableScript executable = scriptService.executable(template,\n+ ScriptContext.Standard.SEARCH, Collections.emptyMap());\n+ return (BytesReference) executable.run();\n+ }\n+\n+\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/QueryRewriteContext.java", "status": "modified" }, { "diff": "@@ -26,18 +26,23 @@\n import java.util.Collection;\n import 
java.util.HashMap;\n import java.util.Map;\n+import java.util.function.Function;\n+import java.util.function.LongSupplier;\n+\n import org.apache.lucene.analysis.Analyzer;\n import org.apache.lucene.index.IndexReader;\n import org.apache.lucene.queryparser.classic.MapperQueryParser;\n import org.apache.lucene.queryparser.classic.QueryParserSettings;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.join.BitSetProducer;\n import org.apache.lucene.search.similarities.Similarity;\n+import org.apache.lucene.util.SetOnce;\n import org.elasticsearch.Version;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexSettings;\n@@ -54,7 +59,12 @@\n import org.elasticsearch.index.query.support.NestedScope;\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.indices.query.IndicesQueriesRegistry;\n+import org.elasticsearch.script.CompiledScript;\n+import org.elasticsearch.script.ExecutableScript;\n+import org.elasticsearch.script.Script;\n+import org.elasticsearch.script.ScriptContext;\n import org.elasticsearch.script.ScriptService;\n+import org.elasticsearch.script.SearchScript;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.lookup.SearchLookup;\n \n@@ -69,6 +79,8 @@ public class QueryShardContext extends QueryRewriteContext {\n private final IndexFieldDataService indexFieldDataService;\n private final IndexSettings indexSettings;\n private String[] types = Strings.EMPTY_ARRAY;\n+ private boolean cachable = true;\n+ private final SetOnce<Boolean> frozen = new SetOnce<>();\n \n public void setTypes(String... 
types) {\n this.types = types;\n@@ -85,11 +97,12 @@ public String[] getTypes() {\n private boolean mapUnmappedFieldAsString;\n private NestedScope nestedScope;\n private boolean isFilter;\n+ private final LongSupplier nowInMillis;\n \n public QueryShardContext(IndexSettings indexSettings, BitsetFilterCache bitsetFilterCache, IndexFieldDataService indexFieldDataService,\n MapperService mapperService, SimilarityService similarityService, ScriptService scriptService,\n final IndicesQueriesRegistry indicesQueriesRegistry, Client client,\n- IndexReader reader, ClusterState clusterState) {\n+ IndexReader reader, ClusterState clusterState, LongSupplier nowInMillis) {\n super(indexSettings, mapperService, scriptService, indicesQueriesRegistry, client, reader, clusterState);\n this.indexSettings = indexSettings;\n this.similarityService = similarityService;\n@@ -99,12 +112,13 @@ public QueryShardContext(IndexSettings indexSettings, BitsetFilterCache bitsetFi\n this.allowUnmappedFields = indexSettings.isDefaultAllowUnmappedFields();\n this.indicesQueriesRegistry = indicesQueriesRegistry;\n this.nestedScope = new NestedScope();\n+ this.nowInMillis = nowInMillis;\n }\n \n public QueryShardContext(QueryShardContext source) {\n this(source.indexSettings, source.bitsetFilterCache, source.indexFieldDataService, source.mapperService,\n source.similarityService, source.scriptService, source.indicesQueriesRegistry, source.client,\n- source.reader, source.clusterState);\n+ source.reader, source.clusterState, source.nowInMillis);\n this.types = source.getTypes();\n }\n \n@@ -261,11 +275,8 @@ public SearchLookup lookup() {\n }\n \n public long nowInMillis() {\n- SearchContext current = SearchContext.current();\n- if (current != null) {\n- return current.nowInMillis();\n- }\n- return System.currentTimeMillis();\n+ failIfFrozen();\n+ return nowInMillis.getAsLong();\n }\n \n public NestedScope nestedScope() {\n@@ -327,4 +338,77 @@ private ParsedQuery toQuery(QueryBuilder queryBuilder, CheckedFunction<QueryBuil\n public final Index index() {\n return indexSettings.getIndex();\n }\n+\n+ /**\n+ * Compiles (or retrieves from cache) and binds the parameters to the\n+ * provided script\n+ */\n+ public SearchScript getSearchScript(Script script, ScriptContext context, Map<String, String> params) {\n+ failIfFrozen();\n+ return scriptService.search(lookup(), script, context, params);\n+ }\n+ /**\n+ * Returns a lazily created {@link SearchScript} that is compiled immediately but can be pulled later once all\n+ * parameters are available.\n+ */\n+ public Function<Map<String, Object>, SearchScript> getLazySearchScript(Script script, ScriptContext context,\n+ Map<String, String> params) {\n+ failIfFrozen();\n+ CompiledScript compile = scriptService.compile(script, context, params);\n+ return (p) -> scriptService.search(lookup(), compile, p);\n+ }\n+\n+ /**\n+ * Compiles (or retrieves from cache) and binds the parameters to the\n+ * provided script\n+ */\n+ public ExecutableScript getExecutableScript(Script script, ScriptContext context, Map<String, String> params) {\n+ failIfFrozen();\n+ return scriptService.executable(script, context, params);\n+ }\n+\n+ /**\n+ * Returns a lazily created {@link ExecutableScript} that is compiled immediately but can be pulled later once all\n+ * parameters are available.\n+ */\n+ public Function<Map<String, Object>, ExecutableScript> getLazyExecutableScript(Script script, ScriptContext context,\n+ Map<String, String> params) {\n+ failIfFrozen();\n+ CompiledScript executable = 
scriptService.compile(script, context, params);\n+ return (p) -> scriptService.executable(executable, p);\n+ }\n+\n+ /**\n+ * if this method is called the query context will throw exception if methods are accessed\n+ * that could yield different results across executions like {@link #getTemplateBytes(Script)}\n+ */\n+ public void freezeContext() {\n+ this.frozen.set(Boolean.TRUE);\n+ }\n+\n+ /**\n+ * This method fails if {@link #freezeContext()} is called before on this context.\n+ * This is used to <i>seal</i>\n+ */\n+ protected void failIfFrozen() {\n+ this.cachable = false;\n+ if (frozen.get() == Boolean.TRUE) {\n+ throw new IllegalArgumentException(\"features that prevent cachability are disabled on this context\");\n+ } else {\n+ assert frozen.get() == null : frozen.get();\n+ }\n+ }\n+\n+ @Override\n+ public BytesReference getTemplateBytes(Script template) {\n+ failIfFrozen();\n+ return super.getTemplateBytes(template);\n+ }\n+\n+ /**\n+ * Returns <code>true</code> iff the result of the processed search request is cachable. Otherwise <code>false</code>\n+ */\n+ public boolean isCachable() {\n+ return cachable;\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java", "status": "modified" }, { "diff": "@@ -41,7 +41,6 @@\n \n import java.io.IOException;\n import java.util.Collections;\n-import java.util.Map;\n import java.util.Objects;\n import java.util.Optional;\n \n@@ -134,7 +133,7 @@ public static Optional<ScriptQueryBuilder> fromXContent(QueryParseContext parseC\n \n @Override\n protected Query doToQuery(QueryShardContext context) throws IOException {\n- return new ScriptQuery(script, context.getScriptService(), context.lookup());\n+ return new ScriptQuery(script, context.getSearchScript(script, ScriptContext.Standard.SEARCH, Collections.emptyMap()));\n }\n \n static class ScriptQuery extends Query {\n@@ -143,9 +142,9 @@ static class ScriptQuery extends Query {\n \n private final SearchScript searchScript;\n \n- public ScriptQuery(Script script, ScriptService scriptService, SearchLookup searchLookup) {\n+ public ScriptQuery(Script script, SearchScript searchScript) {\n this.script = script;\n- this.searchScript = scriptService.search(searchLookup, script, ScriptContext.Standard.SEARCH, Collections.emptyMap());\n+ this.searchScript = searchScript;\n }\n \n @Override\n@@ -216,4 +215,6 @@ protected int doHashCode() {\n protected boolean doEquals(ScriptQueryBuilder other) {\n return Objects.equals(script, other.script);\n }\n+\n+\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/ScriptQueryBuilder.java", "status": "modified" }, { "diff": "@@ -49,12 +49,19 @@ public RandomScoreFunctionBuilder() {\n */\n public RandomScoreFunctionBuilder(StreamInput in) throws IOException {\n super(in);\n- seed = in.readInt();\n+ if (in.readBoolean()) {\n+ seed = in.readInt();\n+ }\n }\n \n @Override\n protected void doWriteTo(StreamOutput out) throws IOException {\n- out.writeInt(seed);\n+ if (seed != null) {\n+ out.writeBoolean(true);\n+ out.writeInt(seed);\n+ } else {\n+ out.writeBoolean(false);\n+ }\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/index/query/functionscore/RandomScoreFunctionBuilder.java", "status": "modified" }, { "diff": "@@ -96,8 +96,7 @@ protected int doHashCode() {\n @Override\n protected ScoreFunction doToFunction(QueryShardContext context) {\n try {\n- SearchScript searchScript = context.getScriptService().search(context.lookup(), script, ScriptContext.Standard.SEARCH,\n- 
Collections.emptyMap());\n+ SearchScript searchScript = context.getSearchScript(script, ScriptContext.Standard.SEARCH, Collections.emptyMap());\n return new ScriptScoreFunction(script, searchScript);\n } catch (Exception e) {\n throw new QueryShardException(context, \"script_score: the script could not be loaded\", e);", "filename": "core/src/main/java/org/elasticsearch/index/query/functionscore/ScriptScoreFunctionBuilder.java", "status": "modified" }, { "diff": "@@ -1082,7 +1082,7 @@ public boolean canCache(ShardSearchRequest request, SearchContext context) {\n }\n // if now in millis is used (or in the future, a more generic \"isDeterministic\" flag\n // then we can't cache based on \"now\" key within the search request, as it is not deterministic\n- if (context.nowInMillisUsed()) {\n+ if (context.getQueryShardContext().isCachable() == false) {\n return false;\n }\n return true;", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java", "status": "modified" }, { "diff": "@@ -43,7 +43,6 @@\n */\n public final class Script implements ToXContent, Writeable {\n \n- public static final ScriptType DEFAULT_TYPE = ScriptType.INLINE;\n public static final String DEFAULT_SCRIPT_LANG = \"painless\";\n \n private String script;", "filename": "core/src/main/java/org/elasticsearch/script/Script.java", "status": "modified" }, { "diff": "@@ -249,7 +249,7 @@ void checkCompilationLimit() {\n long timePassed = now - lastInlineCompileTime;\n lastInlineCompileTime = now;\n \n- scriptsPerMinCounter += ((double) timePassed) * compilesAllowedPerNano;\n+ scriptsPerMinCounter += (timePassed) * compilesAllowedPerNano;\n \n // It's been over the time limit anyway, readjust the bucket to be level\n if (scriptsPerMinCounter > totalCompilesPerMinute) {\n@@ -488,7 +488,15 @@ public ExecutableScript executable(CompiledScript compiledScript, Map<String, Ob\n */\n public SearchScript search(SearchLookup lookup, Script script, ScriptContext scriptContext, Map<String, String> params) {\n CompiledScript compiledScript = compile(script, scriptContext, params);\n- return getScriptEngineServiceForLang(compiledScript.lang()).search(compiledScript, lookup, script.getParams());\n+ return search(lookup, compiledScript, script.getParams());\n+ }\n+\n+ /**\n+ * Binds provided parameters to a compiled script returning a\n+ * {@link SearchScript} ready for execution\n+ */\n+ public SearchScript search(SearchLookup lookup, CompiledScript compiledScript, Map<String, Object> params) {\n+ return getScriptEngineServiceForLang(compiledScript.lang()).search(compiledScript, lookup, params);\n }\n \n private boolean isAnyScriptContextEnabled(String lang, ScriptType scriptType) {", "filename": "core/src/main/java/org/elasticsearch/script/ScriptService.java", "status": "modified" }, { "diff": "@@ -51,7 +51,6 @@\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.similarity.SimilarityService;\n-import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.search.aggregations.SearchContextAggregations;\n import org.elasticsearch.search.dfs.DfsSearchResult;\n import org.elasticsearch.search.fetch.FetchPhase;\n@@ -89,7 +88,6 @@ final class DefaultSearchContext extends SearchContext {\n private final Counter timeEstimateCounter;\n private SearchType searchType;\n private final Engine.Searcher engineSearcher;\n- private final ScriptService scriptService;\n private final BigArrays bigArrays;\n private final IndexShard 
indexShard;\n private final IndexService indexService;\n@@ -150,17 +148,16 @@ final class DefaultSearchContext extends SearchContext {\n private FetchPhase fetchPhase;\n \n DefaultSearchContext(long id, ShardSearchRequest request, SearchShardTarget shardTarget, Engine.Searcher engineSearcher,\n- IndexService indexService, IndexShard indexShard, ScriptService scriptService,\n- BigArrays bigArrays, Counter timeEstimateCounter, ParseFieldMatcher parseFieldMatcher, TimeValue timeout,\n- FetchPhase fetchPhase) {\n+ IndexService indexService, IndexShard indexShard,\n+ BigArrays bigArrays, Counter timeEstimateCounter, ParseFieldMatcher parseFieldMatcher, TimeValue timeout,\n+ FetchPhase fetchPhase) {\n super(parseFieldMatcher);\n this.id = id;\n this.request = request;\n this.fetchPhase = fetchPhase;\n this.searchType = request.searchType();\n this.shardTarget = shardTarget;\n this.engineSearcher = engineSearcher;\n- this.scriptService = scriptService;\n // SearchContexts use a BigArrays that can circuit break\n this.bigArrays = bigArrays.withCircuitBreaking();\n this.dfsResult = new DfsSearchResult(id, shardTarget);\n@@ -171,10 +168,17 @@ final class DefaultSearchContext extends SearchContext {\n this.searcher = new ContextIndexSearcher(engineSearcher, indexService.cache().query(), indexShard.getQueryCachingPolicy());\n this.timeEstimateCounter = timeEstimateCounter;\n this.timeout = timeout;\n- queryShardContext = indexService.newQueryShardContext(searcher.getIndexReader());\n+ queryShardContext = indexService.newQueryShardContext(searcher.getIndexReader(), request::nowInMillis);\n queryShardContext.setTypes(request.types());\n }\n \n+ DefaultSearchContext(DefaultSearchContext source) {\n+ this(source.id(), source.request(), source.shardTarget(), source.engineSearcher, source.indexService, source.indexShard(),\n+ source.bigArrays(), source.timeEstimateCounter(), source.parseFieldMatcher(), source.timeout(), source.fetchPhase());\n+ }\n+\n+\n+\n @Override\n public void doClose() {\n // clear and scope phase we have\n@@ -358,11 +362,6 @@ public long getOriginNanoTime() {\n return originNanoTime;\n }\n \n- @Override\n- protected long nowInMillisImpl() {\n- return request.nowInMillis();\n- }\n-\n @Override\n public ScrollContext scrollContext() {\n return this.scrollContext;\n@@ -501,11 +500,6 @@ public SimilarityService similarityService() {\n return indexService.similarityService();\n }\n \n- @Override\n- public ScriptService scriptService() {\n- return scriptService;\n- }\n-\n @Override\n public BigArrays bigArrays() {\n return bigArrays;", "filename": "core/src/main/java/org/elasticsearch/search/DefaultSearchContext.java", "status": "modified" }, { "diff": "@@ -231,6 +231,7 @@ public DfsSearchResult executeDfsPhase(ShardSearchRequest request) throws IOExce\n */\n private void loadOrExecuteQueryPhase(final ShardSearchRequest request, final SearchContext context) throws Exception {\n final boolean canCache = indicesService.canCache(request, context);\n+ context.getQueryShardContext().freezeContext();\n if (canCache) {\n indicesService.loadIntoContext(request, context, queryPhase);\n } else {\n@@ -516,17 +517,18 @@ final SearchContext createAndPutContext(ShardSearchRequest request) throws IOExc\n }\n \n final SearchContext createContext(ShardSearchRequest request, @Nullable Engine.Searcher searcher) throws IOException {\n-\n- DefaultSearchContext context = createSearchContext(request, defaultSearchTimeout, searcher);\n- SearchContext.setCurrent(context);\n+ final DefaultSearchContext context = 
createSearchContext(request, defaultSearchTimeout, searcher);\n try {\n- request.rewrite(context.getQueryShardContext());\n- // reset that we have used nowInMillis from the context since it may\n- // have been rewritten so its no longer in the query and the request can\n- // be cached. If it is still present in the request (e.g. in a range\n- // aggregation) it will still be caught when the aggregation is\n- // evaluated.\n- context.resetNowInMillisUsed();\n+ // we clone the search context here just for rewriting otherwise we\n+ // might end up with incorrect state since we are using now() or script services\n+ // during rewrite and normalized / evaluate templates etc.\n+ // NOTE this context doesn't need to be closed - the outer context will\n+ // take care of this.\n+ DefaultSearchContext rewriteContext = new DefaultSearchContext(context);\n+ SearchContext.setCurrent(rewriteContext);\n+ request.rewrite(rewriteContext.getQueryShardContext());\n+ SearchContext.setCurrent(context);\n+ assert context.getQueryShardContext().isCachable();\n if (request.scroll() != null) {\n context.scrollContext(new ScrollContext());\n context.scrollContext().scroll = request.scroll();\n@@ -568,7 +570,7 @@ public DefaultSearchContext createSearchContext(ShardSearchRequest request, Time\n \n return new DefaultSearchContext(idGenerator.incrementAndGet(), request, shardTarget, engineSearcher,\n indexService,\n- indexShard, scriptService, bigArrays, threadPool.estimatedTimeInMillisCounter(), parseFieldMatcher,\n+ indexShard, bigArrays, threadPool.estimatedTimeInMillisCounter(), parseFieldMatcher,\n timeout, fetchPhase);\n }\n \n@@ -735,7 +737,7 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc\n }\n if (source.scriptFields() != null) {\n for (org.elasticsearch.search.builder.SearchSourceBuilder.ScriptField field : source.scriptFields()) {\n- SearchScript searchScript = context.scriptService().search(context.lookup(), field.script(), ScriptContext.Standard.SEARCH,\n+ SearchScript searchScript = scriptService.search(context.lookup(), field.script(), ScriptContext.Standard.SEARCH,\n Collections.emptyMap());\n context.scriptFields().add(new ScriptField(field.fieldName(), searchScript, field.ignoreFailure()));\n }", "filename": "core/src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -21,6 +21,8 @@\n \n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.rounding.DateTimeUnit;\n+import org.elasticsearch.common.rounding.Rounding;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n@@ -35,15 +37,42 @@\n import org.elasticsearch.search.aggregations.support.ValuesSourceType;\n \n import java.io.IOException;\n+import java.util.HashMap;\n+import java.util.Map;\n import java.util.Objects;\n \n+import static java.util.Collections.unmodifiableMap;\n+\n /**\n * A builder for histograms on date fields.\n */\n public class DateHistogramAggregationBuilder\n extends ValuesSourceAggregationBuilder<ValuesSource.Numeric, DateHistogramAggregationBuilder> {\n public static final String NAME = InternalDateHistogram.TYPE.name();\n \n+ public static final Map<String, DateTimeUnit> DATE_FIELD_UNITS;\n+\n+ static {\n+ Map<String, DateTimeUnit> dateFieldUnits = new HashMap<>();\n+ dateFieldUnits.put(\"year\", 
DateTimeUnit.YEAR_OF_CENTURY);\n+ dateFieldUnits.put(\"1y\", DateTimeUnit.YEAR_OF_CENTURY);\n+ dateFieldUnits.put(\"quarter\", DateTimeUnit.QUARTER);\n+ dateFieldUnits.put(\"1q\", DateTimeUnit.QUARTER);\n+ dateFieldUnits.put(\"month\", DateTimeUnit.MONTH_OF_YEAR);\n+ dateFieldUnits.put(\"1M\", DateTimeUnit.MONTH_OF_YEAR);\n+ dateFieldUnits.put(\"week\", DateTimeUnit.WEEK_OF_WEEKYEAR);\n+ dateFieldUnits.put(\"1w\", DateTimeUnit.WEEK_OF_WEEKYEAR);\n+ dateFieldUnits.put(\"day\", DateTimeUnit.DAY_OF_MONTH);\n+ dateFieldUnits.put(\"1d\", DateTimeUnit.DAY_OF_MONTH);\n+ dateFieldUnits.put(\"hour\", DateTimeUnit.HOUR_OF_DAY);\n+ dateFieldUnits.put(\"1h\", DateTimeUnit.HOUR_OF_DAY);\n+ dateFieldUnits.put(\"minute\", DateTimeUnit.MINUTES_OF_HOUR);\n+ dateFieldUnits.put(\"1m\", DateTimeUnit.MINUTES_OF_HOUR);\n+ dateFieldUnits.put(\"second\", DateTimeUnit.SECOND_OF_MINUTE);\n+ dateFieldUnits.put(\"1s\", DateTimeUnit.SECOND_OF_MINUTE);\n+ DATE_FIELD_UNITS = unmodifiableMap(dateFieldUnits);\n+ }\n+\n private long interval;\n private DateHistogramInterval dateHistogramInterval;\n private long offset = 0;\n@@ -245,8 +274,36 @@ public String getWriteableName() {\n @Override\n protected ValuesSourceAggregatorFactory<Numeric, ?> innerBuild(AggregationContext context, ValuesSourceConfig<Numeric> config,\n AggregatorFactory<?> parent, Builder subFactoriesBuilder) throws IOException {\n+ Rounding rounding = createRounding();\n+ ExtendedBounds roundedBounds = null;\n+ if (this.extendedBounds != null) {\n+ // parse any string bounds to longs and round\n+ roundedBounds = this.extendedBounds.parseAndValidate(name, context.searchContext(), config.format()).round(rounding);\n+ }\n return new DateHistogramAggregatorFactory(name, type, config, interval, dateHistogramInterval, offset, order, keyed, minDocCount,\n- extendedBounds, context, parent, subFactoriesBuilder, metaData);\n+ rounding, roundedBounds, context, parent, subFactoriesBuilder, metaData);\n+ }\n+\n+ private Rounding createRounding() {\n+ Rounding.Builder tzRoundingBuilder;\n+ if (dateHistogramInterval != null) {\n+ DateTimeUnit dateTimeUnit = DATE_FIELD_UNITS.get(dateHistogramInterval.toString());\n+ if (dateTimeUnit != null) {\n+ tzRoundingBuilder = Rounding.builder(dateTimeUnit);\n+ } else {\n+ // the interval is a time value?\n+ tzRoundingBuilder = Rounding.builder(\n+ TimeValue.parseTimeValue(dateHistogramInterval.toString(), null, getClass().getSimpleName() + \".interval\"));\n+ }\n+ } else {\n+ // the interval is an integer time value in millis?\n+ tzRoundingBuilder = Rounding.builder(TimeValue.timeValueMillis(interval));\n+ }\n+ if (timeZone() != null) {\n+ tzRoundingBuilder.timeZone(timeZone());\n+ }\n+ Rounding rounding = tzRoundingBuilder.build();\n+ return rounding;\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -19,9 +19,7 @@\n \n package org.elasticsearch.search.aggregations.bucket.histogram;\n \n-import org.elasticsearch.common.rounding.DateTimeUnit;\n import org.elasticsearch.common.rounding.Rounding;\n-import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n@@ -30,12 +28,9 @@\n import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric;\n \n import java.io.IOException;\n-import java.util.HashMap;\n import 
java.util.List;\n import java.util.Map;\n \n-import static java.util.Collections.unmodifiableMap;\n-\n import org.elasticsearch.search.aggregations.support.AggregationContext;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory;\n@@ -44,40 +39,18 @@\n public final class DateHistogramAggregatorFactory\n extends ValuesSourceAggregatorFactory<ValuesSource.Numeric, DateHistogramAggregatorFactory> {\n \n- public static final Map<String, DateTimeUnit> DATE_FIELD_UNITS;\n-\n- static {\n- Map<String, DateTimeUnit> dateFieldUnits = new HashMap<>();\n- dateFieldUnits.put(\"year\", DateTimeUnit.YEAR_OF_CENTURY);\n- dateFieldUnits.put(\"1y\", DateTimeUnit.YEAR_OF_CENTURY);\n- dateFieldUnits.put(\"quarter\", DateTimeUnit.QUARTER);\n- dateFieldUnits.put(\"1q\", DateTimeUnit.QUARTER);\n- dateFieldUnits.put(\"month\", DateTimeUnit.MONTH_OF_YEAR);\n- dateFieldUnits.put(\"1M\", DateTimeUnit.MONTH_OF_YEAR);\n- dateFieldUnits.put(\"week\", DateTimeUnit.WEEK_OF_WEEKYEAR);\n- dateFieldUnits.put(\"1w\", DateTimeUnit.WEEK_OF_WEEKYEAR);\n- dateFieldUnits.put(\"day\", DateTimeUnit.DAY_OF_MONTH);\n- dateFieldUnits.put(\"1d\", DateTimeUnit.DAY_OF_MONTH);\n- dateFieldUnits.put(\"hour\", DateTimeUnit.HOUR_OF_DAY);\n- dateFieldUnits.put(\"1h\", DateTimeUnit.HOUR_OF_DAY);\n- dateFieldUnits.put(\"minute\", DateTimeUnit.MINUTES_OF_HOUR);\n- dateFieldUnits.put(\"1m\", DateTimeUnit.MINUTES_OF_HOUR);\n- dateFieldUnits.put(\"second\", DateTimeUnit.SECOND_OF_MINUTE);\n- dateFieldUnits.put(\"1s\", DateTimeUnit.SECOND_OF_MINUTE);\n- DATE_FIELD_UNITS = unmodifiableMap(dateFieldUnits);\n- }\n-\n private final DateHistogramInterval dateHistogramInterval;\n private final long interval;\n private final long offset;\n private final InternalOrder order;\n private final boolean keyed;\n private final long minDocCount;\n private final ExtendedBounds extendedBounds;\n+ private Rounding rounding;\n \n public DateHistogramAggregatorFactory(String name, Type type, ValuesSourceConfig<Numeric> config, long interval,\n DateHistogramInterval dateHistogramInterval, long offset, InternalOrder order, boolean keyed, long minDocCount,\n- ExtendedBounds extendedBounds, AggregationContext context, AggregatorFactory<?> parent,\n+ Rounding rounding, ExtendedBounds extendedBounds, AggregationContext context, AggregatorFactory<?> parent,\n AggregatorFactories.Builder subFactoriesBuilder, Map<String, Object> metaData) throws IOException {\n super(name, type, config, context, parent, subFactoriesBuilder, metaData);\n this.interval = interval;\n@@ -87,34 +60,13 @@ public DateHistogramAggregatorFactory(String name, Type type, ValuesSourceConfig\n this.keyed = keyed;\n this.minDocCount = minDocCount;\n this.extendedBounds = extendedBounds;\n+ this.rounding = rounding;\n }\n \n public long minDocCount() {\n return minDocCount;\n }\n \n- private Rounding createRounding() {\n- Rounding.Builder tzRoundingBuilder;\n- if (dateHistogramInterval != null) {\n- DateTimeUnit dateTimeUnit = DATE_FIELD_UNITS.get(dateHistogramInterval.toString());\n- if (dateTimeUnit != null) {\n- tzRoundingBuilder = Rounding.builder(dateTimeUnit);\n- } else {\n- // the interval is a time value?\n- tzRoundingBuilder = Rounding.builder(\n- TimeValue.parseTimeValue(dateHistogramInterval.toString(), null, getClass().getSimpleName() + \".interval\"));\n- }\n- } else {\n- // the interval is an integer time value in millis?\n- tzRoundingBuilder = 
Rounding.builder(TimeValue.timeValueMillis(interval));\n- }\n- if (timeZone() != null) {\n- tzRoundingBuilder.timeZone(timeZone());\n- }\n- Rounding rounding = tzRoundingBuilder.build();\n- return rounding;\n- }\n-\n @Override\n protected Aggregator doCreateInternal(ValuesSource.Numeric valuesSource, Aggregator parent, boolean collectsFromSingleBucket,\n List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData) throws IOException {\n@@ -126,18 +78,7 @@ protected Aggregator doCreateInternal(ValuesSource.Numeric valuesSource, Aggrega\n \n private Aggregator createAggregator(ValuesSource.Numeric valuesSource, Aggregator parent, List<PipelineAggregator> pipelineAggregators,\n Map<String, Object> metaData) throws IOException {\n- Rounding rounding = createRounding();\n- // we need to round the bounds given by the user and we have to do it\n- // for every aggregator we create\n- // as the rounding is not necessarily an idempotent operation.\n- // todo we need to think of a better structure to the factory/agtor\n- // code so we won't need to do that\n- ExtendedBounds roundedBounds = null;\n- if (extendedBounds != null) {\n- // parse any string bounds to longs and round them\n- roundedBounds = extendedBounds.parseAndValidate(name, context.searchContext(), config.format()).round(rounding);\n- }\n- return new DateHistogramAggregator(name, factories, rounding, offset, order, keyed, minDocCount, roundedBounds, valuesSource,\n+ return new DateHistogramAggregator(name, factories, rounding, offset, order, keyed, minDocCount, extendedBounds, valuesSource,\n config.format(), context, parent, pipelineAggregators, metaData);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregatorFactory.java", "status": "modified" }, { "diff": "@@ -153,11 +153,11 @@ ExtendedBounds parseAndValidate(String aggName, SearchContext context, DocValueF\n Long max = this.max;\n assert format != null;\n if (minAsStr != null) {\n- min = format.parseLong(minAsStr, false, context::nowInMillis);\n+ min = format.parseLong(minAsStr, false, context.getQueryShardContext()::nowInMillis);\n }\n if (maxAsStr != null) {\n // TODO: Should we rather pass roundUp=true?\n- max = format.parseLong(maxAsStr, false, context::nowInMillis);\n+ max = format.parseLong(maxAsStr, false, context.getQueryShardContext()::nowInMillis);\n }\n if (min != null && max != null && min.compareTo(max) > 0) {\n throw new SearchParseException(context, \"[extended_bounds.min][\" + min + \"] cannot be greater than \" +", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/ExtendedBounds.java", "status": "modified" }, { "diff": "@@ -118,10 +118,10 @@ public Range process(DocValueFormat parser, SearchContext context) {\n Double from = this.from;\n Double to = this.to;\n if (fromAsStr != null) {\n- from = parser.parseDouble(fromAsStr, false, context::nowInMillis);\n+ from = parser.parseDouble(fromAsStr, false, context.getQueryShardContext()::nowInMillis);\n }\n if (toAsStr != null) {\n- to = parser.parseDouble(toAsStr, false, context::nowInMillis);\n+ to = parser.parseDouble(toAsStr, false, context.getQueryShardContext()::nowInMillis);\n }\n return new Range(key, from, fromAsStr, to, toAsStr);\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregator.java", "status": "modified" }, { "diff": "@@ -220,6 +220,7 @@ public SignificanceHeuristic significanceHeuristic() {\n @Override\n protected 
ValuesSourceAggregatorFactory<ValuesSource, ?> innerBuild(AggregationContext context, ValuesSourceConfig<ValuesSource> config,\n AggregatorFactory<?> parent, Builder subFactoriesBuilder) throws IOException {\n+ this.significanceHeuristic.initialize(context.searchContext());\n return new SignificantTermsAggregatorFactory(name, type, config, includeExclude, executionHint, filterBuilder,\n bucketCountThresholds, significanceHeuristic, context, parent, subFactoriesBuilder, metaData);\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -91,7 +91,6 @@ public SignificantTermsAggregatorFactory(String name, Type type, ValuesSourceCon\n : searcher.count(filter);\n this.bucketCountThresholds = bucketCountThresholds;\n this.significanceHeuristic = significanceHeuristic;\n- this.significanceHeuristic.initialize(context.searchContext());\n setFieldInfo();\n \n }\n@@ -211,13 +210,13 @@ protected Aggregator doCreateInternal(ValuesSource valuesSource, Aggregator pare\n }\n }\n assert execution != null;\n- \n+\n DocValueFormat format = config.format();\n if ((includeExclude != null) && (includeExclude.isRegexBased()) && format != DocValueFormat.RAW) {\n throw new AggregationExecutionException(\"Aggregation [\" + name + \"] cannot support regular expression style include/exclude \"\n + \"settings as they can only be applied to string fields. Use an array of values for include/exclude clauses\");\n }\n- \n+\n return execution.create(name, factories, valuesSource, format, bucketCountThresholds, includeExclude, context, parent,\n significanceHeuristic, this, pipelineAggregators, metaData);\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregatorFactory.java", "status": "modified" }, { "diff": "@@ -32,7 +32,6 @@\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.Script.ScriptField;\n import org.elasticsearch.script.ScriptContext;\n-import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.support.XContentParseContext;\n import org.elasticsearch.search.internal.SearchContext;\n@@ -49,7 +48,7 @@ public class ScriptHeuristic extends SignificanceHeuristic {\n private final LongAccessor subsetDfHolder;\n private final LongAccessor supersetDfHolder;\n private final Script script;\n- ExecutableScript searchScript = null;\n+ ExecutableScript executableScript = null;\n \n public ScriptHeuristic(Script script) {\n subsetSizeHolder = new LongAccessor();\n@@ -73,20 +72,20 @@ public void writeTo(StreamOutput out) throws IOException {\n \n @Override\n public void initialize(InternalAggregation.ReduceContext context) {\n- initialize(context.scriptService());\n+ initialize(context.scriptService().executable(script, ScriptContext.Standard.AGGS, Collections.emptyMap()));\n }\n \n @Override\n public void initialize(SearchContext context) {\n- initialize(context.scriptService());\n+ initialize(context.getQueryShardContext().getExecutableScript(script, ScriptContext.Standard.AGGS, Collections.emptyMap()));\n }\n \n- public void initialize(ScriptService scriptService) {\n- searchScript = scriptService.executable(script, ScriptContext.Standard.AGGS, Collections.emptyMap());\n- searchScript.setNextVar(\"_subset_freq\", subsetDfHolder);\n- searchScript.setNextVar(\"_subset_size\", subsetSizeHolder);\n- 
searchScript.setNextVar(\"_superset_freq\", supersetDfHolder);\n- searchScript.setNextVar(\"_superset_size\", supersetSizeHolder);\n+ public void initialize(ExecutableScript executableScript) {\n+ this.executableScript = executableScript;\n+ this.executableScript.setNextVar(\"_subset_freq\", subsetDfHolder);\n+ this.executableScript.setNextVar(\"_subset_size\", subsetSizeHolder);\n+ this.executableScript.setNextVar(\"_superset_freq\", supersetDfHolder);\n+ this.executableScript.setNextVar(\"_superset_size\", supersetSizeHolder);\n }\n \n /**\n@@ -100,7 +99,7 @@ public void initialize(ScriptService scriptService) {\n */\n @Override\n public double getScore(long subsetFreq, long subsetSize, long supersetFreq, long supersetSize) {\n- if (searchScript == null) {\n+ if (executableScript == null) {\n //In tests, wehn calling assertSearchResponse(..) the response is streamed one additional time with an arbitrary version, see assertVersionSerializable(..).\n // Now, for version before 1.5.0 the score is computed after streaming the response but for scripts the script does not exists yet.\n // assertSearchResponse() might therefore fail although there is no problem.\n@@ -112,7 +111,7 @@ public double getScore(long subsetFreq, long subsetSize, long supersetFreq, long\n supersetSizeHolder.value = supersetSize;\n subsetDfHolder.value = subsetFreq;\n supersetDfHolder.value = supersetFreq;\n- return ((Number) searchScript.run()).doubleValue();\n+ return ((Number) executableScript.run()).doubleValue();\n }\n \n @Override\n@@ -171,26 +170,6 @@ public static SignificanceHeuristic parse(XContentParseContext context)\n return new ScriptHeuristic(script);\n }\n \n- public static class ScriptHeuristicBuilder implements SignificanceHeuristicBuilder {\n-\n- private Script script = null;\n-\n- public ScriptHeuristicBuilder setScript(Script script) {\n- this.script = script;\n- return this;\n- }\n-\n- @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params builderParams) throws IOException {\n- builder.startObject(NAME);\n- builder.field(ScriptField.SCRIPT.getPreferredName());\n- script.toXContent(builder, builderParams);\n- builder.endObject();\n- return builder;\n- }\n-\n- }\n-\n public final class LongAccessor extends Number {\n public long value;\n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/ScriptHeuristic.java", "status": "modified" }, { "diff": "@@ -26,18 +26,23 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.QueryParseContext;\n+import org.elasticsearch.index.query.QueryShardContext;\n+import org.elasticsearch.script.ExecutableScript;\n import org.elasticsearch.script.Script;\n+import org.elasticsearch.script.ScriptContext;\n+import org.elasticsearch.script.SearchScript;\n import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.InternalAggregation.Type;\n import org.elasticsearch.search.aggregations.support.AggregationContext;\n-\n import java.io.IOException;\n+import java.util.Collections;\n import java.util.HashSet;\n import java.util.Map;\n import java.util.Objects;\n import java.util.Set;\n+import java.util.function.Function;\n \n public class ScriptedMetricAggregationBuilder extends 
AbstractAggregationBuilder<ScriptedMetricAggregationBuilder> {\n \n@@ -182,10 +187,29 @@ public Map<String, Object> params() {\n @Override\n protected ScriptedMetricAggregatorFactory doBuild(AggregationContext context, AggregatorFactory<?> parent,\n Builder subfactoriesBuilder) throws IOException {\n- return new ScriptedMetricAggregatorFactory(name, type, initScript, mapScript, combineScript, reduceScript, params, context,\n- parent, subfactoriesBuilder, metaData);\n+\n+ QueryShardContext queryShardContext = context.searchContext().getQueryShardContext();\n+ Function<Map<String, Object>, ExecutableScript> executableInitScript;\n+ if (initScript != null) {\n+ executableInitScript = queryShardContext.getLazyExecutableScript(initScript, ScriptContext.Standard.AGGS,\n+ Collections.emptyMap());\n+ } else {\n+ executableInitScript = (p) -> null;;\n+ }\n+ Function<Map<String, Object>, SearchScript> searchMapScript = queryShardContext.getLazySearchScript(mapScript,\n+ ScriptContext.Standard.AGGS, Collections.emptyMap());\n+ Function<Map<String, Object>, ExecutableScript> executableCombineScript;\n+ if (combineScript != null) {\n+ executableCombineScript = queryShardContext.getLazyExecutableScript(combineScript, ScriptContext.Standard.AGGS,\n+ Collections.emptyMap());\n+ } else {\n+ executableCombineScript = (p) -> null;\n+ }\n+ return new ScriptedMetricAggregatorFactory(name, type, searchMapScript, executableInitScript, executableCombineScript, reduceScript,\n+ params, context, parent, subfactoriesBuilder, metaData);\n }\n \n+\n @Override\n protected XContentBuilder internalXContent(XContentBuilder builder, Params builderParams) throws IOException {\n builder.startObject();", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.search.aggregations.metrics.scripted;\n \n import org.apache.lucene.index.LeafReaderContext;\n+import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.script.ExecutableScript;\n import org.elasticsearch.script.LeafSearchScript;\n import org.elasticsearch.script.Script;\n@@ -46,21 +47,14 @@ public class ScriptedMetricAggregator extends MetricsAggregator {\n private final Script reduceScript;\n private Map<String, Object> params;\n \n- protected ScriptedMetricAggregator(String name, Script initScript, Script mapScript, Script combineScript, Script reduceScript,\n+ protected ScriptedMetricAggregator(String name, SearchScript mapScript, ExecutableScript combineScript,\n+ Script reduceScript,\n Map<String, Object> params, AggregationContext context, Aggregator parent, List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData)\n throws IOException {\n super(name, context, parent, pipelineAggregators, metaData);\n this.params = params;\n- ScriptService scriptService = context.searchContext().scriptService();\n- if (initScript != null) {\n- scriptService.executable(initScript, ScriptContext.Standard.AGGS, Collections.emptyMap()).run();\n- }\n- this.mapScript = scriptService.search(context.searchContext().lookup(), mapScript, ScriptContext.Standard.AGGS, Collections.emptyMap());\n- if (combineScript != null) {\n- this.combineScript = scriptService.executable(combineScript, ScriptContext.Standard.AGGS, Collections.emptyMap());\n- } else {\n- this.combineScript = null;\n- }\n+ this.mapScript = mapScript;\n+ this.combineScript = combineScript;\n this.reduceScript = reduceScript;\n }\n ", 
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregator.java", "status": "modified" }, { "diff": "@@ -19,7 +19,9 @@\n \n package org.elasticsearch.search.aggregations.metrics.scripted;\n \n+import org.elasticsearch.script.ExecutableScript;\n import org.elasticsearch.script.Script;\n+import org.elasticsearch.script.SearchScript;\n import org.elasticsearch.search.SearchParseException;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n@@ -34,22 +36,23 @@\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n-import java.util.Map.Entry;\n+import java.util.function.Function;\n \n public class ScriptedMetricAggregatorFactory extends AggregatorFactory<ScriptedMetricAggregatorFactory> {\n \n- private final Script initScript;\n- private final Script mapScript;\n- private final Script combineScript;\n+ private final Function<Map<String, Object>, SearchScript> mapScript;\n+ private final Function<Map<String, Object>, ExecutableScript> combineScript;\n private final Script reduceScript;\n private final Map<String, Object> params;\n+ private final Function<Map<String, Object>, ExecutableScript> initScript;\n \n- public ScriptedMetricAggregatorFactory(String name, Type type, Script initScript, Script mapScript, Script combineScript,\n+ public ScriptedMetricAggregatorFactory(String name, Type type, Function<Map<String, Object>, SearchScript> mapScript,\n+ Function<Map<String, Object>, ExecutableScript> initScript, Function<Map<String, Object>, ExecutableScript> combineScript,\n Script reduceScript, Map<String, Object> params, AggregationContext context, AggregatorFactory<?> parent,\n AggregatorFactories.Builder subFactories, Map<String, Object> metaData) throws IOException {\n super(name, type, context, parent, subFactories, metaData);\n- this.initScript = initScript;\n this.mapScript = mapScript;\n+ this.initScript = initScript;\n this.combineScript = combineScript;\n this.reduceScript = reduceScript;\n this.params = params;\n@@ -68,16 +71,18 @@ public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBu\n params = new HashMap<>();\n params.put(\"_agg\", new HashMap<String, Object>());\n }\n- return new ScriptedMetricAggregator(name, insertParams(initScript, params), insertParams(mapScript, params),\n- insertParams(combineScript, params), deepCopyScript(reduceScript, context.searchContext()), params, context, parent,\n- pipelineAggregators, metaData);\n- }\n \n- private static Script insertParams(Script script, Map<String, Object> params) {\n- if (script == null) {\n- return null;\n+ final ExecutableScript initScript = this.initScript.apply(params);\n+ final SearchScript mapScript = this.mapScript.apply(params);\n+ final ExecutableScript combineScript = this.combineScript.apply(params);\n+\n+ final Script reduceScript = deepCopyScript(this.reduceScript, context.searchContext());\n+ if (initScript != null) {\n+ initScript.run();\n }\n- return new Script(script.getScript(), script.getType(), script.getLang(), params);\n+ return new ScriptedMetricAggregator(name, mapScript,\n+ combineScript, reduceScript, params, context, parent,\n+ pipelineAggregators, metaData);\n }\n \n private static Script deepCopyScript(Script script, SearchContext context) {\n@@ -98,26 +103,27 @@ private static <T> T deepCopyParams(T original, SearchContext context) {\n if (original instanceof Map) {\n Map<?, ?> originalMap = (Map<?, ?>) original;\n 
Map<Object, Object> clonedMap = new HashMap<>();\n- for (Entry<?, ?> e : originalMap.entrySet()) {\n+ for (Map.Entry<?, ?> e : originalMap.entrySet()) {\n clonedMap.put(deepCopyParams(e.getKey(), context), deepCopyParams(e.getValue(), context));\n }\n clone = (T) clonedMap;\n } else if (original instanceof List) {\n List<?> originalList = (List<?>) original;\n- List<Object> clonedList = new ArrayList<Object>();\n+ List<Object> clonedList = new ArrayList<>();\n for (Object o : originalList) {\n clonedList.add(deepCopyParams(o, context));\n }\n clone = (T) clonedList;\n } else if (original instanceof String || original instanceof Integer || original instanceof Long || original instanceof Short\n- || original instanceof Byte || original instanceof Float || original instanceof Double || original instanceof Character\n- || original instanceof Boolean) {\n+ || original instanceof Byte || original instanceof Float || original instanceof Double || original instanceof Character\n+ || original instanceof Boolean) {\n clone = original;\n } else {\n throw new SearchParseException(context,\n- \"Can only clone primitives, String, ArrayList, and HashMap. Found: \" + original.getClass().getCanonicalName(), null);\n+ \"Can only clone primitives, String, ArrayList, and HashMap. Found: \" + original.getClass().getCanonicalName(), null);\n }\n return clone;\n }\n \n+\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregatorFactory.java", "status": "modified" }, { "diff": "@@ -530,20 +530,17 @@ public TopHitsAggregationBuilder subAggregations(Builder subFactories) {\n @Override\n protected TopHitsAggregatorFactory doBuild(AggregationContext context, AggregatorFactory<?> parent, Builder subfactoriesBuilder)\n throws IOException {\n- List<ScriptFieldsContext.ScriptField> scriptFields = null;\n- if (this.scriptFields != null) {\n- scriptFields = new ArrayList<>();\n- for (ScriptField field : this.scriptFields) {\n- SearchScript searchScript = context.searchContext().scriptService().search(\n- context.searchContext().lookup(), field.script(), ScriptContext.Standard.SEARCH, Collections.emptyMap());\n- scriptFields.add(new ScriptFieldsContext.ScriptField(\n- field.fieldName(), searchScript, field.ignoreFailure()));\n+ List<ScriptFieldsContext.ScriptField> fields = new ArrayList<>();\n+ if (scriptFields != null) {\n+ for (ScriptField field : scriptFields) {\n+ SearchScript searchScript = context.searchContext().getQueryShardContext().getSearchScript(field.script(),\n+ ScriptContext.Standard.SEARCH, Collections.emptyMap());\n+ fields.add(new org.elasticsearch.search.fetch.subphase.ScriptFieldsContext.ScriptField(\n+ field.fieldName(), searchScript, field.ignoreFailure()));\n }\n- } else {\n- scriptFields = Collections.emptyList();\n }\n return new TopHitsAggregatorFactory(name, type, from, size, explain, version, trackScores, sorts, highlightBuilder,\n- storedFieldsContext, fieldDataFields, scriptFields, fetchSourceContext, context,\n+ storedFieldsContext, fieldDataFields, fields, fetchSourceContext, context,\n parent, subfactoriesBuilder, metaData);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -56,8 +56,8 @@ public class TopHitsAggregatorFactory extends AggregatorFactory<TopHitsAggregato\n public TopHitsAggregatorFactory(String name, Type type, int from, int size, boolean explain, boolean version, boolean trackScores,\n 
List<SortBuilder<?>> sorts, HighlightBuilder highlightBuilder, StoredFieldsContext storedFieldsContext,\n List<String> docValueFields, List<ScriptFieldsContext.ScriptField> scriptFields, FetchSourceContext fetchSourceContext,\n- AggregationContext context, AggregatorFactory<?> parent, AggregatorFactories.Builder subFactories,\n- Map<String, Object> metaData) throws IOException {\n+ AggregationContext context, AggregatorFactory<?> parent, AggregatorFactories.Builder subFactories, Map<String, Object> metaData)\n+ throws IOException {\n super(name, type, context, parent, subFactories, metaData);\n this.from = from;\n this.size = size;\n@@ -96,7 +96,7 @@ public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBu\n }\n for (ScriptFieldsContext.ScriptField field : scriptFields) {\n subSearchContext.scriptFields().add(field);\n- }\n+ }\n if (fetchSourceContext != null) {\n subSearchContext.fetchSourceContext(fetchSourceContext);\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorFactory.java", "status": "modified" }, { "diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.PipelineAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregatorFactory;\n+import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;\n import org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregatorFactory;\n import org.elasticsearch.search.aggregations.pipeline.AbstractPipelineAggregationBuilder;\n@@ -141,7 +142,7 @@ protected PipelineAggregator createInternal(Map<String, Object> metaData) throws\n }\n Long xAxisUnits = null;\n if (units != null) {\n- DateTimeUnit dateTimeUnit = DateHistogramAggregatorFactory.DATE_FIELD_UNITS.get(units);\n+ DateTimeUnit dateTimeUnit = DateHistogramAggregationBuilder.DATE_FIELD_UNITS.get(units);\n if (dateTimeUnit != null) {\n xAxisUnits = dateTimeUnit.field(DateTimeZone.UTC).getDurationField().getUnitMillis();\n } else {", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/DerivativePipelineAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -97,7 +97,7 @@ public <VS extends ValuesSource> VS valuesSource(ValuesSourceConfig<VS> config,\n } else {\n if (config.fieldContext() != null && config.fieldContext().fieldType() != null) {\n missing = config.fieldContext().fieldType().docValueFormat(null, DateTimeZone.UTC)\n- .parseDouble(config.missing().toString(), false, context::nowInMillis);\n+ .parseDouble(config.missing().toString(), false, context.getQueryShardContext()::nowInMillis);\n } else {\n missing = Double.parseDouble(config.missing().toString());\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/support/AggregationContext.java", "status": "modified" }, { "diff": "@@ -376,8 +376,11 @@ public ValuesSourceConfig<VS> config(AggregationContext context) {\n }\n \n private SearchScript createScript(Script script, SearchContext context) {\n- return script == null ? 
null\n- : context.scriptService().search(context.lookup(), script, ScriptContext.Standard.AGGS, Collections.emptyMap());\n+ if (script == null) {\n+ return null;\n+ } else {\n+ return context.getQueryShardContext().getSearchScript(script, ScriptContext.Standard.AGGS, Collections.emptyMap());\n+ }\n }\n \n private static DocValueFormat resolveFormat(@Nullable String format, @Nullable ValueType valueType) {", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceAggregationBuilder.java", "status": "modified" } ] }
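The diffs above move script compilation out of the aggregator constructor: the builder asks the query shard context for a `Function<Map<String, Object>, ExecutableScript>` (or a constant `(p) -> null` when no script was supplied) and the factory applies that function later with the per-request params map. Below is a minimal, hypothetical Java sketch of that deferral pattern using stand-in types, not the real Elasticsearch script classes:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class LazyScriptSketch {

    // Hypothetical stand-in for a compiled script bound to its parameters.
    interface ExecutableScript {
        Object run();
    }

    // "Compile" once up front, bind the per-request params later.
    static Function<Map<String, Object>, ExecutableScript> lazy(String source) {
        return params -> () -> "ran [" + source + "] with params " + params;
    }

    // Stands in for the (p) -> null branches used when no script was supplied.
    static final Function<Map<String, Object>, ExecutableScript> ABSENT = params -> null;

    public static void main(String[] args) {
        Map<String, Object> params = new HashMap<>();
        params.put("_agg", new HashMap<String, Object>());

        // The factory applies the function with the request-level params map...
        ExecutableScript init = lazy("_agg.docs = []").apply(params);
        if (init != null) {
            System.out.println(init.run());
        }
        // ...and a missing combine script simply yields null, mirroring the diff.
        System.out.println(ABSENT.apply(params));
    }
}
```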
{ "body": "Today when parsing a request, Elasticsearch silently ignores incorrect\n(including parameters with typos) or unused parameters. This is bad as\nit leads to requests having unintended behavior (e.g., if a user hits\nthe _analyze API and misspell the \"tokenizer\" then Elasticsearch will\njust use the standard analyzer, completely against intentions).\n\nThis commit removes lenient URL parameter parsing. The strategy is\nsimple: when a request is handled and a parameter is touched, we mark it\nas such. Before the request is actually executed, we check to ensure\nthat all parameters have been consumed. If there are remaining\nparameters yet to be consumed, we fail the request with a list of the\nunconsumed parameters. An exception has to be made for parameters that\nformat the response (as opposed to controlling the request); for this\ncase, handlers are able to provide a list of parameters that should be\nexcluded from tripping the unconsumed parameters check because those\nparameters will be used in formatting the response.\n\nAdditionally, some inconsistencies between the parameters in the code\nand in the docs are corrected.\n\nCloses #14719\n", "comments": [ { "body": "I'm as torn as @s1monw on versioning. I suppose getting this into 5.1 violates semver because garbage on the URL would be identified. I'd love to have it in 5.0 but I think it violates the spirit of code freeze to try and do it. It is super tempting though.\n", "created_at": "2016-10-04T12:33:20Z" }, { "body": "After conversation with @nik9000 and @s1monw, we have decided the leniency here is big enough a bug that we want to ship this code as early as possible. The options on the table were:\n- 5.0.0\n- 5.1.0\n- 6.0.0\n\nWe felt that 5.1.0 is not a viable option, it is too breaking of a change during a minor release, and waiting until 6.0.0 is waiting too long to fix this bug so we will ship this in 5.0.0.\n", "created_at": "2016-10-04T16:14:28Z" } ], "number": 20722, "title": "Remove lenient URL parameter parsing" }
{ "body": "This commit adds a did you mean feature to the strict REST params error\nmessage. This works by comparing any unconsumed parameters to all of the\nconsumer parameters, comparing the Levenstein distance between those\nparameters, and taking any consumed parameters that are close to an\nunconsumed parameter as candiates for the did you mean.\n\nRelates #20722\n", "number": 20747, "review_comments": [ { "body": "We use `0.7f` here for the settings \"did you mean\": https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java#L258\n\nShould we use the same here? Or even consider using the same code from both places?\n", "created_at": "2016-10-05T00:33:08Z" }, { "body": "I don't think so. The URL parameters tend to be shorter than settings keys. If we use the same distance, we will miss candidates for typos like \"flied\" for \"field\" because even though there is only one transposition, the distance in this case is 0.6.\n", "created_at": "2016-10-05T00:38:13Z" }, { "body": "Also, as far as using the same code, I really don't want to try to make an abstraction that captures both uses, they are slightly different. Abstracting them is possible, but it just makes it messy and we only have two uses to base the abstraction on. If we ever hit a third, we can maybe consider it. For now, I strongly prefer keeping it simple and as-is.\n", "created_at": "2016-10-05T00:39:21Z" }, { "body": "good to know, makes sense\n", "created_at": "2016-10-05T00:39:23Z" } ], "title": "Add did you mean to strict REST params" }
{ "commits": [ { "message": "Add did you mean to strict REST params\n\nThis commit adds a did you mean feature to the strict REST params error\nmessage. This works by comparing any unconsumed parameters to all of the\nconsumer parameters, comparing the Levenstein distance between those\nparameters, and taking any consumed parameters that are close to an\nunconsumed parameter as candiates for the did you mean." }, { "message": "Fix pluralization in strict REST params message\n\nThis commit fixes the pluralization in the strict REST parameters error\nmessage so that the word \"parameter\" is not unconditionally written as\n\"parameters\" even when there is only one unrecognized parameter." }, { "message": "Strength strict REST params did you mean test\n\nThis commit adds an unconsumed parameter that is too far from every\nconsumed parameter to have any candidate suggestions." } ], "files": [ { "diff": "@@ -19,19 +19,25 @@\n \n package org.elasticsearch.rest;\n \n+import org.apache.lucene.search.spell.LevensteinDistance;\n+import org.apache.lucene.util.CollectionUtil;\n import org.elasticsearch.client.node.NodeClient;\n import org.elasticsearch.common.ParseFieldMatcher;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Setting.Property;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.plugins.ActionPlugin;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n import java.util.Collections;\n import java.util.List;\n import java.util.Locale;\n import java.util.Set;\n+import java.util.SortedSet;\n+import java.util.TreeSet;\n import java.util.stream.Collectors;\n \n /**\n@@ -59,16 +65,44 @@ public final void handleRequest(RestRequest request, RestChannel channel, NodeCl\n final RestChannelConsumer action = prepareRequest(request, client);\n \n // validate unconsumed params, but we must exclude params used to format the response\n- final List<String> unconsumedParams =\n- request.unconsumedParams().stream().filter(p -> !responseParams().contains(p)).collect(Collectors.toList());\n+ // use a sorted set so the unconsumed parameters appear in a reliable sorted order\n+ final SortedSet<String> unconsumedParams =\n+ request.unconsumedParams().stream().filter(p -> !responseParams().contains(p)).collect(Collectors.toCollection(TreeSet::new));\n \n // validate the non-response params\n if (!unconsumedParams.isEmpty()) {\n- final String message = String.format(\n+ String message = String.format(\n Locale.ROOT,\n- \"request [%s] contains unrecognized parameters: %s\",\n+ \"request [%s] contains unrecognized parameter%s: \",\n request.path(),\n- unconsumedParams.toString());\n+ unconsumedParams.size() > 1 ? 
\"s\" : \"\");\n+ boolean first = true;\n+ for (final String unconsumedParam : unconsumedParams) {\n+ final LevensteinDistance ld = new LevensteinDistance();\n+ final List<Tuple<Float, String>> scoredParams = new ArrayList<>();\n+ for (String consumedParam : request.consumedParams()) {\n+ final float distance = ld.getDistance(unconsumedParam, consumedParam);\n+ if (distance > 0.5f) {\n+ scoredParams.add(new Tuple<>(distance, consumedParam));\n+ }\n+ }\n+ CollectionUtil.timSort(scoredParams, (a, b) -> {\n+ // sort by distance in reverse order, then parameter name for equal distances\n+ int compare = a.v1().compareTo(b.v1());\n+ if (compare != 0) return -compare;\n+ else return a.v2().compareTo(b.v2());\n+ });\n+ if (first == false) {\n+ message += \", \";\n+ }\n+ message += \"[\" + unconsumedParam + \"]\";\n+ final List<String> keys = scoredParams.stream().map(Tuple::v2).collect(Collectors.toList());\n+ if (keys.isEmpty() == false) {\n+ message += \" -> did you mean \" + (keys.size() == 1 ? \"[\" + keys.get(0) + \"]\": \"any of \" + keys.toString()) + \"?\";\n+ }\n+ first = false;\n+ }\n+\n throw new IllegalArgumentException(message);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/rest/BaseRestHandler.java", "status": "modified" }, { "diff": "@@ -28,7 +28,6 @@\n import org.elasticsearch.common.xcontent.ToXContent;\n \n import java.net.SocketAddress;\n-import java.util.Collections;\n import java.util.HashMap;\n import java.util.HashSet;\n import java.util.List;\n@@ -129,6 +128,16 @@ public Map<String, String> params() {\n return params;\n }\n \n+ /**\n+ * Returns a list of parameters that have been consumed. This method returns a copy, callers\n+ * are free to modify the returned list.\n+ *\n+ * @return the list of currently consumed parameters.\n+ */\n+ List<String> consumedParams() {\n+ return consumedParams.stream().collect(Collectors.toList());\n+ }\n+\n /**\n * Returns a list of parameters that have not yet been consumed. 
This method returns a copy,\n * callers are free to modify the returned list.", "filename": "core/src/main/java/org/elasticsearch/rest/RestRequest.java", "status": "modified" }, { "diff": "@@ -73,8 +73,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC\n analyzeRequest.text(texts);\n analyzeRequest.analyzer(request.param(\"analyzer\"));\n analyzeRequest.field(request.param(\"field\"));\n- if (request.hasParam(\"tokenizer\")) {\n- analyzeRequest.tokenizer(request.param(\"tokenizer\"));\n+ final String tokenizer = request.param(\"tokenizer\");\n+ if (tokenizer != null) {\n+ analyzeRequest.tokenizer(tokenizer);\n }\n for (String filter : request.paramAsStringArray(\"filter\", Strings.EMPTY_ARRAY)) {\n analyzeRequest.addTokenFilter(filter);", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestAnalyzeAction.java", "status": "modified" }, { "diff": "@@ -33,14 +33,34 @@\n import java.util.Set;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n-import static org.hamcrest.core.AnyOf.anyOf;\n import static org.hamcrest.core.StringContains.containsString;\n import static org.hamcrest.object.HasToString.hasToString;\n import static org.mockito.Mockito.mock;\n \n public class BaseRestHandlerTests extends ESTestCase {\n \n- public void testUnconsumedParameters() throws Exception {\n+ public void testOneUnconsumedParameters() throws Exception {\n+ final AtomicBoolean executed = new AtomicBoolean();\n+ BaseRestHandler handler = new BaseRestHandler(Settings.EMPTY) {\n+ @Override\n+ protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException {\n+ request.param(\"consumed\");\n+ return channel -> executed.set(true);\n+ }\n+ };\n+\n+ final HashMap<String, String> params = new HashMap<>();\n+ params.put(\"consumed\", randomAsciiOfLength(8));\n+ params.put(\"unconsumed\", randomAsciiOfLength(8));\n+ RestRequest request = new FakeRestRequest.Builder().withParams(params).build();\n+ RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1);\n+ final IllegalArgumentException e =\n+ expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mock(NodeClient.class)));\n+ assertThat(e, hasToString(containsString(\"request [/] contains unrecognized parameter: [unconsumed]\")));\n+ assertFalse(executed.get());\n+ }\n+\n+ public void testMultipleUnconsumedParameters() throws Exception {\n final AtomicBoolean executed = new AtomicBoolean();\n BaseRestHandler handler = new BaseRestHandler(Settings.EMPTY) {\n @Override\n@@ -56,14 +76,44 @@ protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient cli\n params.put(\"unconsumed-second\", randomAsciiOfLength(8));\n RestRequest request = new FakeRestRequest.Builder().withParams(params).build();\n RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1);\n+ final IllegalArgumentException e =\n+ expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mock(NodeClient.class)));\n+ assertThat(e, hasToString(containsString(\"request [/] contains unrecognized parameters: [unconsumed-first], [unconsumed-second]\")));\n+ assertFalse(executed.get());\n+ }\n+\n+ public void testUnconsumedParametersDidYouMean() throws Exception {\n+ final AtomicBoolean executed = new AtomicBoolean();\n+ BaseRestHandler handler = new BaseRestHandler(Settings.EMPTY) {\n+ @Override\n+ protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException 
{\n+ request.param(\"consumed\");\n+ request.param(\"field\");\n+ request.param(\"tokenizer\");\n+ request.param(\"very_close_to_parameter_1\");\n+ request.param(\"very_close_to_parameter_2\");\n+ return channel -> executed.set(true);\n+ }\n+ };\n+\n+ final HashMap<String, String> params = new HashMap<>();\n+ params.put(\"consumed\", randomAsciiOfLength(8));\n+ params.put(\"flied\", randomAsciiOfLength(8));\n+ params.put(\"tokenzier\", randomAsciiOfLength(8));\n+ params.put(\"very_close_to_parametre\", randomAsciiOfLength(8));\n+ params.put(\"very_far_from_every_consumed_parameter\", randomAsciiOfLength(8));\n+ RestRequest request = new FakeRestRequest.Builder().withParams(params).build();\n+ RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1);\n final IllegalArgumentException e =\n expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mock(NodeClient.class)));\n assertThat(\n e,\n- // we can not rely on ordering of the unconsumed parameters here\n- anyOf(\n- hasToString(containsString(\"request [/] contains unrecognized parameters: [unconsumed-first, unconsumed-second]\")),\n- hasToString(containsString(\"request [/] contains unrecognized parameters: [unconsumed-second, unconsumed-first]\"))));\n+ hasToString(containsString(\n+ \"request [/] contains unrecognized parameters: \" +\n+ \"[flied] -> did you mean [field]?, \" +\n+ \"[tokenzier] -> did you mean [tokenizer]?, \" +\n+ \"[very_close_to_parametre] -> did you mean any of [very_close_to_parameter_1, very_close_to_parameter_2]?, \" +\n+ \"[very_far_from_every_consumed_parameter]\")));\n assertFalse(executed.get());\n }\n ", "filename": "core/src/test/java/org/elasticsearch/rest/BaseRestHandlerTests.java", "status": "modified" } ] }
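A standalone sketch of the suggestion logic described in the PR body above: compare each unrecognized parameter against the parameters the handler actually consumed using Lucene's `LevensteinDistance` (a similarity in [0, 1], higher means closer) and keep matches above a threshold. The class name and the 0.5 threshold mirror the diff, but this is an illustrative snippet, not the actual `BaseRestHandler` code:

```java
import org.apache.lucene.search.spell.LevensteinDistance;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DidYouMeanSketch {

    // Returns the consumed parameter names whose similarity to the unknown
    // parameter exceeds the threshold; these become the "did you mean" candidates.
    static List<String> suggest(String unknown, List<String> consumed, float threshold) {
        LevensteinDistance ld = new LevensteinDistance();
        List<String> candidates = new ArrayList<>();
        for (String param : consumed) {
            if (ld.getDistance(unknown, param) > threshold) {
                candidates.add(param);
            }
        }
        return candidates;
    }

    public static void main(String[] args) {
        // "flied" scores 0.6 against "field" (as noted in the review comments above),
        // so with the 0.5 threshold it is suggested while unrelated names are filtered out.
        List<String> consumed = Arrays.asList("field", "tokenizer", "analyzer");
        System.out.println(suggest("flied", consumed, 0.5f));      // [field]
        System.out.println(suggest("tokenzier", consumed, 0.5f));  // [tokenizer]
    }
}
```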
{ "body": "Today when parsing a request, Elasticsearch silently ignores incorrect\n(including parameters with typos) or unused parameters. This is bad as\nit leads to requests having unintended behavior (e.g., if a user hits\nthe _analyze API and misspell the \"tokenizer\" then Elasticsearch will\njust use the standard analyzer, completely against intentions).\n\nThis commit removes lenient URL parameter parsing. The strategy is\nsimple: when a request is handled and a parameter is touched, we mark it\nas such. Before the request is actually executed, we check to ensure\nthat all parameters have been consumed. If there are remaining\nparameters yet to be consumed, we fail the request with a list of the\nunconsumed parameters. An exception has to be made for parameters that\nformat the response (as opposed to controlling the request); for this\ncase, handlers are able to provide a list of parameters that should be\nexcluded from tripping the unconsumed parameters check because those\nparameters will be used in formatting the response.\n\nAdditionally, some inconsistencies between the parameters in the code\nand in the docs are corrected.\n\nCloses #14719\n", "comments": [ { "body": "I'm as torn as @s1monw on versioning. I suppose getting this into 5.1 violates semver because garbage on the URL would be identified. I'd love to have it in 5.0 but I think it violates the spirit of code freeze to try and do it. It is super tempting though.\n", "created_at": "2016-10-04T12:33:20Z" }, { "body": "After conversation with @nik9000 and @s1monw, we have decided the leniency here is big enough a bug that we want to ship this code as early as possible. The options on the table were:\n- 5.0.0\n- 5.1.0\n- 6.0.0\n\nWe felt that 5.1.0 is not a viable option, it is too breaking of a change during a minor release, and waiting until 6.0.0 is waiting too long to fix this bug so we will ship this in 5.0.0.\n", "created_at": "2016-10-04T16:14:28Z" } ], "number": 20722, "title": "Remove lenient URL parameter parsing" }
{ "body": "This commit changes the strict REST parameters message to say that\nunconsumed parameters are unrecognized rather than unused. Additionally,\nthe test is beefed up to include two unused parameters.\n\nRelates #20722\n", "number": 20745, "review_comments": [], "title": "Clarify wording for the strict REST params message" }
{ "commits": [ { "message": "Clarify wording for the strict REST params message\n\nThis commit changes the strict REST parameters message to say that\nunconsumed parameters are unrecognized rather than unused. Additionally,\nthe test is beefed up to include two unused parameters." } ], "files": [ { "diff": "@@ -30,6 +30,7 @@\n import java.io.IOException;\n import java.util.Collections;\n import java.util.List;\n+import java.util.Locale;\n import java.util.Set;\n import java.util.stream.Collectors;\n \n@@ -63,7 +64,12 @@ public final void handleRequest(RestRequest request, RestChannel channel, NodeCl\n \n // validate the non-response params\n if (!unconsumedParams.isEmpty()) {\n- throw new IllegalArgumentException(\"request [\" + request.path() + \"] contains unused params: \" + unconsumedParams.toString());\n+ final String message = String.format(\n+ Locale.ROOT,\n+ \"request [%s] contains unrecognized parameters: %s\",\n+ request.path(),\n+ unconsumedParams.toString());\n+ throw new IllegalArgumentException(message);\n }\n \n // execute the action", "filename": "core/src/main/java/org/elasticsearch/rest/BaseRestHandler.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import java.util.Set;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n+import static org.hamcrest.core.AnyOf.anyOf;\n import static org.hamcrest.core.StringContains.containsString;\n import static org.hamcrest.object.HasToString.hasToString;\n import static org.mockito.Mockito.mock;\n@@ -51,12 +52,18 @@ protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient cli\n \n final HashMap<String, String> params = new HashMap<>();\n params.put(\"consumed\", randomAsciiOfLength(8));\n- params.put(\"unconsumed\", randomAsciiOfLength(8));\n+ params.put(\"unconsumed-first\", randomAsciiOfLength(8));\n+ params.put(\"unconsumed-second\", randomAsciiOfLength(8));\n RestRequest request = new FakeRestRequest.Builder().withParams(params).build();\n RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1);\n final IllegalArgumentException e =\n expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mock(NodeClient.class)));\n- assertThat(e, hasToString(containsString(\"request [/] contains unused params: [unconsumed]\")));\n+ assertThat(\n+ e,\n+ // we can not rely on ordering of the unconsumed parameters here\n+ anyOf(\n+ hasToString(containsString(\"request [/] contains unrecognized parameters: [unconsumed-first, unconsumed-second]\")),\n+ hasToString(containsString(\"request [/] contains unrecognized parameters: [unconsumed-second, unconsumed-first]\"))));\n assertFalse(executed.get());\n }\n ", "filename": "core/src/test/java/org/elasticsearch/rest/BaseRestHandlerTests.java", "status": "modified" } ] }
{ "body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue. Note that whether you're filing a bug report or a\nfeature request, ensure that your submission is for an\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\nBug reports on an OS that we do not support or feature requests\nspecific to an OS that we do not support will be closed.\n-->\n\n<!--\nIf you are filing a bug report, please remove the below feature\nrequest block and provide responses for all of the below items.\n-->\n\n**Elasticsearch version**: 5.0.0-beta1\n\n**Plugins installed**: [none]\n\n**JVM version**: jdk1.8.0_65\n\n**OS version**: Linux 2.6.32-042stab108.8 #1 SMP Wed Jul 22 17:23:23 MSK 2015 x86_64 x86_64 x86_64 GNU/Linux\n\n**Description of the problem including expected versus actual behavior**: On startup of elasticsearch master only node: starts, generates 1263 identical errors in the log (provided below) and crashes. Expected the clean start.\n**Steps to reproduce**:\n1. Download and install rpm version elasticsearch-5.0.0-beta1.rpm\n2. Make sure 'service elasticsearch start' command starts the elasticsearch fine. I had to modify JAVA location in /etc/sysconfig/elasticsearch for it to start correctly.\n3. Replace /etc/elasticsearch/elasticsearch.yml with attached elasticsearch.yml file. In that file none of the hosts that are specified in unicast list exists \n4. Run 'service elasticsearch stop'\n5. Run 'service elasticsearch start'\n6. Wait for a few seconds. Servers prints 1263 errors into the /var/log/elasticsearch.log and crashes\n\n**Provide logs (if relevant)**:\nProvided configuration file and log file\n[elasticsearch.log.gz](https://github.com/elastic/elasticsearch/files/494356/elasticsearch.log.gz)\n[elasticsearch.yml.gz](https://github.com/elastic/elasticsearch/files/494355/elasticsearch.yml.gz)\n", "comments": [ { "body": "Thanks for reporting.\n\nI suspect the `discovery.zen.ping_timeout: 5s` is used to calculate the `join_timeout` setting which is `ping_timeout x 20` by default. Then the resulting `100s` value is rendered as a fractional time value string by the setting framework before being parsed again... and failed.\n", "created_at": "2016-09-27T08:43:11Z" } ], "number": 20662, "title": "1263) Error injecting constructor, java.lang.IllegalArgumentException: failed to parse [1.6m], fractional time values are not supported" }
{ "body": " The Setting.timeValue() method uses TimeValue.toString() which can produce fractional time values. These fractional time values cannot be parsed again by the settings framework.\n\nThis commit fix a method that still use the .toString() method and replaces it with .getStringRep(). It also changes a second method so that it's not up to the caller to decide which stringify method to call.\n\ncloses #20662\n", "number": 20696, "review_comments": [], "title": "Fix Setting.timeValue() method" }
{ "commits": [ { "message": "Fix Setting.timeValue() methods\n\nThe Setting.timeValue() method uses TimeValue.toString() which can produce fractional time values. These fractional time values cannot be parsed again by the settings framework.\n\nThis commit fix a method that still use the .toString() method and replaces it with .getStringRep(). It also changes a second method so that it's not up to the caller to decide which stringify method to call.\n\ncloses #20662" } ], "files": [ { "diff": "@@ -636,10 +636,6 @@ public static Setting<ByteSizeValue> memorySizeSetting(String key, String defaul\n return new Setting<>(key, (s) -> defaultPercentage, (s) -> MemorySizeValue.parseBytesSizeValueOrHeapRatio(s, key), properties);\n }\n \n- public static Setting<TimeValue> positiveTimeSetting(String key, TimeValue defaultValue, Property... properties) {\n- return timeSetting(key, defaultValue, TimeValue.timeValueMillis(0), properties);\n- }\n-\n public static <T> Setting<List<T>> listSetting(String key, List<String> defaultStringValue, Function<String, T> singleValueParser,\n Property... properties) {\n return listSetting(key, (s) -> defaultStringValue, singleValueParser, properties);\n@@ -795,9 +791,9 @@ public String toString() {\n };\n }\n \n- public static Setting<TimeValue> timeSetting(String key, Function<Settings, String> defaultValue, TimeValue minValue,\n+ public static Setting<TimeValue> timeSetting(String key, Function<Settings, TimeValue> defaultValue, TimeValue minValue,\n Property... properties) {\n- return new Setting<>(key, defaultValue, (s) -> {\n+ return new Setting<>(key, (s) -> defaultValue.apply(s).getStringRep(), (s) -> {\n TimeValue timeValue = TimeValue.parseTimeValue(s, null, key);\n if (timeValue.millis() < minValue.millis()) {\n throw new IllegalArgumentException(\"Failed to parse value [\" + s + \"] for setting [\" + key + \"] must be >= \" + minValue);\n@@ -807,17 +803,21 @@ public static Setting<TimeValue> timeSetting(String key, Function<Settings, Stri\n }\n \n public static Setting<TimeValue> timeSetting(String key, TimeValue defaultValue, TimeValue minValue, Property... properties) {\n- return timeSetting(key, (s) -> defaultValue.getStringRep(), minValue, properties);\n+ return timeSetting(key, (s) -> defaultValue, minValue, properties);\n }\n \n public static Setting<TimeValue> timeSetting(String key, TimeValue defaultValue, Property... properties) {\n- return new Setting<>(key, (s) -> defaultValue.toString(), (s) -> TimeValue.parseTimeValue(s, key), properties);\n+ return new Setting<>(key, (s) -> defaultValue.getStringRep(), (s) -> TimeValue.parseTimeValue(s, key), properties);\n }\n \n public static Setting<TimeValue> timeSetting(String key, Setting<TimeValue> fallbackSetting, Property... properties) {\n return new Setting<>(key, fallbackSetting, (s) -> TimeValue.parseTimeValue(s, key), properties);\n }\n \n+ public static Setting<TimeValue> positiveTimeSetting(String key, TimeValue defaultValue, Property... properties) {\n+ return timeSetting(key, defaultValue, TimeValue.timeValueMillis(0), properties);\n+ }\n+\n public static Setting<Double> doubleSetting(String key, double defaultValue, double minValue, Property... 
properties) {\n return new Setting<>(key, (s) -> Double.toString(defaultValue), (s) -> {\n final double d = Double.parseDouble(s);", "filename": "core/src/main/java/org/elasticsearch/common/settings/Setting.java", "status": "modified" }, { "diff": "@@ -249,6 +249,12 @@ public String format(PeriodType type) {\n return PeriodFormat.getDefault().withParseType(type).print(period);\n }\n \n+ /**\n+ * Returns a {@link String} representation of the current {@link TimeValue}.\n+ *\n+ * Note that this method might produce fractional time values (ex 1.6m) which cannot be\n+ * parsed by method like {@link TimeValue#parse(String, String, int)}.\n+ */\n @Override\n public String toString() {\n if (duration < 0) {", "filename": "core/src/main/java/org/elasticsearch/common/unit/TimeValue.java", "status": "modified" }, { "diff": "@@ -89,7 +89,7 @@ public class ZenDiscovery extends AbstractLifecycleComponent implements Discover\n Setting.positiveTimeSetting(\"discovery.zen.ping_timeout\", timeValueSeconds(3), Property.NodeScope);\n public static final Setting<TimeValue> JOIN_TIMEOUT_SETTING =\n Setting.timeSetting(\"discovery.zen.join_timeout\",\n- settings -> TimeValue.timeValueMillis(PING_TIMEOUT_SETTING.get(settings).millis() * 20).toString(),\n+ settings -> TimeValue.timeValueMillis(PING_TIMEOUT_SETTING.get(settings).millis() * 20),\n TimeValue.timeValueMillis(0), Property.NodeScope);\n public static final Setting<Integer> JOIN_RETRY_ATTEMPTS_SETTING =\n Setting.intSetting(\"discovery.zen.join_retry_attempts\", 3, 1, Property.NodeScope);\n@@ -101,7 +101,7 @@ public class ZenDiscovery extends AbstractLifecycleComponent implements Discover\n Setting.boolSetting(\"discovery.zen.send_leave_request\", true, Property.NodeScope);\n public static final Setting<TimeValue> MASTER_ELECTION_WAIT_FOR_JOINS_TIMEOUT_SETTING =\n Setting.timeSetting(\"discovery.zen.master_election.wait_for_joins_timeout\",\n- settings -> TimeValue.timeValueMillis(JOIN_TIMEOUT_SETTING.get(settings).millis() / 2).toString(), TimeValue.timeValueMillis(0),\n+ settings -> TimeValue.timeValueMillis(JOIN_TIMEOUT_SETTING.get(settings).millis() / 2), TimeValue.timeValueMillis(0),\n Property.NodeScope);\n public static final Setting<Boolean> MASTER_ELECTION_IGNORE_NON_MASTER_PINGS_SETTING =\n Setting.boolSetting(\"discovery.zen.master_election.ignore_non_master_pings\", false, Property.NodeScope);", "filename": "core/src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java", "status": "modified" }, { "diff": "@@ -61,7 +61,7 @@ public class RecoverySettings extends AbstractComponent {\n */\n public static final Setting<TimeValue> INDICES_RECOVERY_INTERNAL_LONG_ACTION_TIMEOUT_SETTING =\n Setting.timeSetting(\"indices.recovery.internal_action_long_timeout\",\n- (s) -> TimeValue.timeValueMillis(INDICES_RECOVERY_INTERNAL_ACTION_TIMEOUT_SETTING.get(s).millis() * 2).toString(),\n+ (s) -> TimeValue.timeValueMillis(INDICES_RECOVERY_INTERNAL_ACTION_TIMEOUT_SETTING.get(s).millis() * 2),\n TimeValue.timeValueSeconds(0), Property.Dynamic, Property.NodeScope);\n \n /**\n@@ -70,7 +70,7 @@ public class RecoverySettings extends AbstractComponent {\n */\n public static final Setting<TimeValue> INDICES_RECOVERY_ACTIVITY_TIMEOUT_SETTING =\n Setting.timeSetting(\"indices.recovery.recovery_activity_timeout\",\n- (s) -> INDICES_RECOVERY_INTERNAL_LONG_ACTION_TIMEOUT_SETTING.getRaw(s) , TimeValue.timeValueSeconds(0),\n+ INDICES_RECOVERY_INTERNAL_LONG_ACTION_TIMEOUT_SETTING::get, TimeValue.timeValueSeconds(0),\n Property.Dynamic, Property.NodeScope);\n \n public 
static final ByteSizeValue DEFAULT_CHUNK_SIZE = new ByteSizeValue(512, ByteSizeUnit.KB);", "filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoverySettings.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import java.util.function.Function;\n \n import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.is;\n \n public class SettingTests extends ESTestCase {\n@@ -517,4 +518,16 @@ public void testRejectNullProperties() {\n assertThat(ex.getMessage(), containsString(\"properties cannot be null for setting\"));\n }\n }\n+\n+ public void testTimeValue() {\n+ final TimeValue random = TimeValue.parseTimeValue(randomTimeValue(), \"test\");\n+\n+ Setting<TimeValue> setting = Setting.timeSetting(\"foo\", random);\n+ assertThat(setting.get(Settings.EMPTY), equalTo(random));\n+\n+ final int factor = randomIntBetween(1, 10);\n+ setting = Setting.timeSetting(\"foo\", (s) -> TimeValue.timeValueMillis(random.getMillis() * factor), TimeValue.ZERO);\n+ assertThat(setting.get(Settings.builder().put(\"foo\", \"12h\").build()), equalTo(TimeValue.timeValueHours(12)));\n+ assertThat(setting.get(Settings.EMPTY).getMillis(), equalTo(random.getMillis() * factor));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/common/settings/SettingTests.java", "status": "modified" } ] }
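A hedged illustration of the round-trip bug behind #20662 that the diff above fixes: `TimeValue.toString()` can render a value with a fractional unit that the settings parser rejects, while `getStringRep()` preserves a parseable form. Both methods exist on Elasticsearch's `TimeValue`; the exact printed strings follow the error reported in the issue and are shown as comments rather than asserted:

```java
import org.elasticsearch.common.unit.TimeValue;

public class TimeValueRoundTripSketch {
    public static void main(String[] args) {
        // The default join_timeout is derived from ping_timeout: 5s * 20 = 100s.
        TimeValue joinTimeout = TimeValue.timeValueMillis(TimeValue.timeValueSeconds(5).millis() * 20);

        // toString() picks the largest unit and may print a fractional value.
        System.out.println(joinTimeout.toString());     // e.g. "1.6m" -- not parseable by the settings framework
        // getStringRep() keeps a representation that round-trips through parseTimeValue().
        System.out.println(joinTimeout.getStringRep()); // e.g. "100s"

        // Parsing the fractional form is what failed at node startup in #20662:
        // TimeValue.parseTimeValue("1.6m", "discovery.zen.join_timeout") -> IllegalArgumentException
    }
}
```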
{ "body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue.\n-->\n\n<!--\nIf you are filing a bug report, please remove the below feature\nrequest block and provide responses for all of the below items.\n-->\n\n**Elasticsearch version**: master\n\n**JVM version**: 1.8\n\n**OS version**: OSX\n\n**Describe the feature**: \n\nWhen we using `us-east-1`, s3 plugin will first get the endpoint by using the `region`, and then use the endpoint to create the client, this is fine. \n\nBut when creating the bucket, s3 plugin will will the `bucket` name to build a `CreateBucketRequest` ([link](https://github.com/elastic/elasticsearch/blob/mase/plugins/repository-s3/src/main/java/org/elasticsearch/cloud/aws/blobstore/S3BlobStore.java#L96)), `us-east-1` is not valid for aws s3 sdk.\n\nReproduce:\n\n```\nPUT _snapshot/test-1\n{\n \"type\": \"s3\",\n \"settings\": {\n \"bucket\": \"test-us-east-1\",\n \"region\": \"us-east-1\"\n }\n}\n```\n\n```\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"repository_exception\",\n \"reason\": \"[test-6] failed to create repository\"\n }\n ],\n \"type\": \"repository_exception\",\n \"reason\": \"[test-6] failed to create repository\",\n \"caused_by\": {\n \"type\": \"creation_exception\",\n \"reason\": \"Guice creation errors:\\n\\n1) Error injecting constructor, com.amazonaws.services.s3.model.AmazonS3Exception: The specified location-constraint is not valid (Service: Amazon S3; Status Code: 400; Error Code: InvalidLocationConstraint; Request ID: 85CFF34E01878232), S3 Extended Request ID: Ob5XZJsy8IH7HaZy/moMNAgvaH3ZIrHN9fxyimecIp+xtMZI8nE/sc2YVIoTuf2SuEXyoiQP1wE=\\n at org.elasticsearch.repositories.s3.S3Repository.<init>(Unknown Source)\\n while locating org.elasticsearch.repositories.s3.S3Repository\\n while locating org.elasticsearch.repositories.Repository\\n\\n1 error\",\n \"caused_by\": {\n \"type\": \"amazon_s3_exception\",\n \"reason\": \"The specified location-constraint is not valid (Service: Amazon S3; Status Code: 400; Error Code: InvalidLocationConstraint; Request ID: 85CFF34E01878232)\"\n }\n }\n },\n \"status\": 500\n}\n```\n\nSucceed after removed the region\n\n```\nPUT _snapshot/test-1\n{\n \"type\": \"s3\",\n \"settings\": {\n \"bucket\": \"test-us-east-1\"\n }\n}\n\n```\n\nSame thing happened if we used `us-west` | `ap-southeast` | `eu-central`... \nI think if we could get the endpoint, we should using the valid region name and let user be able to create buckets.\n", "comments": [ { "body": "As per http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html: \"If you are creating a bucket on the US East (N. Virginia) region (us-east-1), you do not need to specify the location constraint\".\n\nOther regions work fine. I can take this up.\n", "created_at": "2016-09-28T21:43:25Z" }, { "body": "Knowing that we deprecated `region` in #22848 and removed it in #22853, I think we should close this issue as it won't be fixed.\r\n\r\n@clintongormley WDYT?", "created_at": "2017-02-24T15:52:32Z" }, { "body": "This issue wasn't even about region really, it was about auto bucket creation (failing because of specifying region). 
Since auto bucket creation is now gone, we can close.", "created_at": "2017-02-24T17:26:32Z" }, { "body": "Thanks @rjernst ", "created_at": "2017-02-24T17:27:14Z" }, { "body": "@dadoonet and @rjernst thanks.", "created_at": "2017-02-27T12:58:17Z" } ], "number": 16978, "title": "'us-east-1` is not a valid region when creating s3 bucket" }
{ "body": "Fix for #16978\n", "number": 20689, "review_comments": [], "title": "Fix for S3 plugin to specify correct region parameter for us-east-1. …" }
{ "commits": [ { "message": "Fix for S3 plugin to specify correct region parameter for us-east-1. #16978" } ], "files": [ { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.cloud.aws.blobstore;\n \n import com.amazonaws.AmazonClientException;\n+import com.amazonaws.regions.Regions;\n import com.amazonaws.services.s3.AmazonS3;\n import com.amazonaws.services.s3.model.AmazonS3Exception;\n import com.amazonaws.services.s3.model.CannedAccessControlList;\n@@ -74,7 +75,7 @@ public S3BlobStore(Settings settings, AmazonS3 client, String bucket, @Nullable\n this.numberOfRetries = maxRetries;\n this.storageClass = initStorageClass(storageClass);\n \n- // Note: the method client.doesBucketExist() may return 'true' is the bucket exists\n+ // Note: the method client.doesBucketExist() may return 'true' if the bucket exists\n // but we don't have access to it (ie, 403 Forbidden response code)\n // Also, if invalid security credentials are used to execute this method, the\n // client is not able to distinguish between bucket permission errors and\n@@ -84,10 +85,10 @@ public S3BlobStore(Settings settings, AmazonS3 client, String bucket, @Nullable\n try {\n if (!client.doesBucketExist(bucket)) {\n CreateBucketRequest request = null;\n- if (region != null) {\n- request = new CreateBucketRequest(bucket, region);\n- } else {\n+ if (region == null || Regions.US_EAST_1.getName().equals(region)) {\n request = new CreateBucketRequest(bucket);\n+ } else {\n+ request = new CreateBucketRequest(bucket, region);\n }\n request.setCannedAcl(this.cannedACL);\n client.createBucket(request);", "filename": "plugins/repository-s3/src/main/java/org/elasticsearch/cloud/aws/blobstore/S3BlobStore.java", "status": "modified" } ] }
{ "body": "To reproduce, we start with an index with default settings containing two records in different shards and a query that boosts one of the fields:\n\n```\nDELETE test\n\nPUT test/doc/1\n{\n \"title\": \"bar\",\n \"body\": \"foo\"\n}\n\nPUT test/doc/2\n{\n \"title\": \"foo\",\n \"body\": \"bar\"\n}\n\nGET test/doc/_search\n{\n \"query\": {\n \"multi_match\": {\n \"query\": \"foo\",\n \"fields\": [\"title^100\", \"body\"]\n }\n }\n}\n```\n\nWe get back a reasonable result: \n\n```\n{\n \"took\" : 98,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"tests\",\n \"_type\" : \"test\",\n \"_id\" : \"2\",\n \"_score\" : 1.0,\n \"_source\" : {\n \"title\" : \"foo\",\n \"body\" : \"bar\"\n }\n }, {\n \"_index\" : \"tests\",\n \"_type\" : \"test\",\n \"_id\" : \"1\",\n \"_score\" : 0.0059061614,\n \"_source\" : {\n \"title\" : \"bar\",\n \"body\" : \"foo\"\n }\n } ]\n }\n}\n```\n\nNow, add the `fuzziness` parameter to the query (the value doesn't matter as long as it doesn't interfere with matches):\n\n```\nGET test/doc/_search\n{\n \"query\": {\n \"multi_match\": {\n \"query\": \"foo\",\n \"fields\": [\"title^100\", \"body\"],\n \"fuzziness\": 0\n }\n }\n}\n```\n\nthe result is both records are scored the same:\n\n```\n{\n \"took\" : 17,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"tests\",\n \"_type\" : \"test\",\n \"_id\" : \"2\",\n \"_score\" : 1.0,\n \"_source\" : {\n \"title\" : \"foo\",\n \"body\" : \"bar\"\n }\n }, {\n \"_index\" : \"tests\",\n \"_type\" : \"test\",\n \"_id\" : \"1\",\n \"_score\" : 1.0,\n \"_source\" : {\n \"title\" : \"bar\",\n \"body\" : \"foo\"\n }\n } ]\n }\n}\n```\n\nHowever, the same experiment repeated on a single shard index produces reasonable results. It looks like if the fuzziness parameter is present, queryNorm is correctly applied only on shards where the boosted field matched at least one record.\n\nI was able to reproduce the issue on both 2.3.3 and the latest master.\n\nThe issue was originally reported on the [discussion forum](https://discuss.elastic.co/t/ne-rabotaet-boost-v-zaprosah/51648).\n", "comments": [ { "body": "I thought that this might be a query norm thing which might change with BM25, but I'm seeing the same on master. @cbuescher is this something we're doing when rewriting queries, or is this lucene level?\n", "created_at": "2016-06-03T11:10:33Z" }, { "body": "@clintongormley yes, in my experiments BM25 was affecting only the numeric value of the score but not dropping queryNorms. The strange thing is that the problem only affects shards where the boosted field doesn't match anything, which makes it look like some sort of shortcut in execution. My first thought was, maybe it's somehow related to two-phase execution. However, I was able to reproduce it on v0.90.7, v1.7.5, v2.3.3 and the current master. So it must be something else.\n", "created_at": "2016-06-03T13:33:14Z" }, { "body": "From looking at how we create the lucene queries, I can't see anything obvious where we might ignore the boost when fuziness is set, but I will have a closer look at it. 
Most of the query creation happens in MatchQuery/MultiMatchQuery but I'm not too familiar with that code yet, but I will take a look at this starting from your examples.\n", "created_at": "2016-06-03T13:37:18Z" }, { "body": "@imotov yes, I also suspect something on the execution level of the fuzzy query. If I use one or two shards for the minimal example above, the scores for both docs are different. As soon as a third shard gets added, the scores for the query with the added \"fuzziness\" are the same.\n", "created_at": "2016-06-03T14:04:23Z" }, { "body": "@cbuescher yes, the 2 shard case is the same as 1 shard case, because when you have 2 shards, both records will be on the shard 1, so you are basically searching a single shard. \n", "created_at": "2016-06-03T14:37:16Z" }, { "body": "I wrote a test to see how the lucene query using the `fuzziness = 0` parameter gets rewritten. If the two docs reside on two different shards, the original fuzzy query `(body:foo~0 | (title:foo~0)^100.0)` gets rewritten to `(() | (title:foo)^100.0)` on one shard and to `(body:foo | ()^100.0)` on the other where no doc has the `title`field. In that case both doc scores are the same. \nIn the case the docs are on the same shard, the query gets rewritten to (body:foo | (title:foo)^100.0) and doc2 scores higher, as expected. At this point I would like to know how the expectations for rewrites and scoring for these two cases are in theory on the lucene level. Maybe @jpountz or @jimferenczi have an idea where to look next?\n", "created_at": "2016-06-13T09:11:06Z" }, { "body": "This 2 queries: `(() | (title:foo)^100.0)` and `(body:foo | ()^100.0)` are supposed to be normalized the same way but it doesn't work as expected because the clause `()^100` is ignored. It's the problem with queries that are expanded on the shard directly. The normalization factor for the first query is 10,000 (100*100) and the norm is equal to 0.01. This is why we have a score of 1 for hits that match this query (100 *0.01). \nThe normalization factor and the norm for the second query should be the same and the score should be multiplied by the norm. Though the normalization factor and the norm are set to 1 because the empty boolean query `()^100` is ignored. \nI opened a ticket on Lucene land:\nhttps://issues.apache.org/jira/browse/LUCENE-7337\n", "created_at": "2016-06-13T16:01:19Z" } ], "number": 18710, "title": "Using fuzziness parameter in multi_match query interferes with per field boosting" }
{ "body": "There was an issue with using fuzziness parameter in multi_match query that has been reported in #18710 and was fixed in Lucene 6.2 that is now used on master. In order to verify that fix and close the original issue this PR adds the test from that issue as an integration test.\n\nCloses #18710\n", "number": 20673, "review_comments": [ { "body": "I'm fine with adding a link in the javadocs but could you add a short explanation so I don't have to go read the link? When I'm scanning javadocs there is 0% chance I'll go and read the link.\n", "created_at": "2016-09-27T22:21:06Z" }, { "body": "If it is really important that these docs end up on separate shards can you add an assertion that they are on separate shards? That would explain the `true, false, builder` thing.\n", "created_at": "2016-09-27T22:22:12Z" } ], "title": "Add test for using fuzziness parameter in multi_match query" }
{ "commits": [ { "message": "Add test for using fuzziness parameter in multi_match query\n\nThere was an issue with using fuzziness parameter in multi_match query that has\nbeen reported in #18710 and was fixed in Lucene 6.2 that is now used on master.\nIn order to verify that fix and close the original issue this PR adds the test\nfrom that issue as an integration test." } ], "files": [ { "diff": "@@ -635,6 +635,43 @@ public void testCrossFieldMode() throws ExecutionException, InterruptedException\n assertFirstHit(searchResponse, hasId(\"ultimate1\"));\n }\n \n+ /**\n+ * Test for edge case where field level boosting is applied to field that doesn't exist on documents on\n+ * one shard. There was an issue reported in https://github.com/elastic/elasticsearch/issues/18710 where a\n+ * `multi_match` query using the fuzziness parameter with a boost on one of two fields returns the\n+ * same document score if both documents are placed on different shard. This test recreates that scenario\n+ * and checks that the returned scores are different.\n+ */\n+ public void testFuzzyFieldLevelBoosting() throws InterruptedException, ExecutionException {\n+ String idx = \"test18710\";\n+ CreateIndexRequestBuilder builder = prepareCreate(idx).setSettings(Settings.builder()\n+ .put(indexSettings())\n+ .put(SETTING_NUMBER_OF_SHARDS, 3)\n+ .put(SETTING_NUMBER_OF_REPLICAS, 0)\n+ );\n+ assertAcked(builder.addMapping(\"type\", \"title\", \"type=string\", \"body\", \"type=string\"));\n+ ensureGreen();\n+ List<IndexRequestBuilder> builders = new ArrayList<>();\n+ builders.add(client().prepareIndex(idx, \"type\", \"1\").setSource(\n+ \"title\", \"foo\",\n+ \"body\", \"bar\"));\n+ builders.add(client().prepareIndex(idx, \"type\", \"2\").setSource(\n+ \"title\", \"bar\",\n+ \"body\", \"foo\"));\n+ indexRandom(true, false, builders);\n+\n+ SearchResponse searchResponse = client().prepareSearch(idx)\n+ .setExplain(true)\n+ .setQuery(multiMatchQuery(\"foo\").field(\"title\", 100).field(\"body\")\n+ .fuzziness(0)\n+ ).get();\n+ SearchHit[] hits = searchResponse.getHits().getHits();\n+ assertNotEquals(\"both documents should be on different shards\", hits[0].getShard().getShardId(), hits[1].getShard().getShardId());\n+ assertEquals(\"1\", hits[0].getId());\n+ assertEquals(\"2\", hits[1].getId());\n+ assertThat(hits[0].getScore(), greaterThan(hits[1].score()));\n+ }\n+\n private static void assertEquivalent(String query, SearchResponse left, SearchResponse right) {\n assertNoFailures(left);\n assertNoFailures(right);", "filename": "core/src/test/java/org/elasticsearch/search/query/MultiMatchQueryIT.java", "status": "modified" } ] }
{ "body": "Using any non reserved string starting with an underscore results in an `IllegalArgumentException`, maybe we should add a more useful error message here like `unknown request` instead of `no feature for name`. The 4xx HTTP is fine already.\n\n```\n# curl 'localhost:9200/_siohgjoidfhjfihfg?pretty'\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"illegal_argument_exception\",\n \"reason\" : \"No feature for name [_siohgjoidfhjfihfg]\"\n } ],\n \"type\" : \"illegal_argument_exception\",\n \"reason\" : \"No feature for name [_siohgjoidfhjfihfg]\"\n },\n \"status\" : 400\n}\n```\n", "comments": [ { "body": "This happens because such a request gets interpreted as a get index request with no index nor type, and the expression that starts with '_' is seen as a potential feature to filter out the output (e.g. `_aliases` etc.). This is odd, I agree we should fix it.\n", "created_at": "2015-05-04T07:51:40Z" }, { "body": "at least you can read the error now :)\n", "created_at": "2015-05-04T08:00:16Z" }, { "body": "> at least you can read the error now :)\n\nright, that's some beautiful structured output :)\n", "created_at": "2015-05-04T08:02:08Z" } ], "number": 10946, "title": "REST: Calling `_anything` returns confusing error message" }
{ "body": "It currently returns something like:\n\n```\n\"No feature for name [_siohgjoidfhjfihfg]\"\n```\n\nWhich is not the most understandable message, this changes it to be a\nlittle more readable.\n\nResolves #10946\n", "number": 20671, "review_comments": [], "title": "Clean up confusing error message on unhandled endpoint" }
{ "commits": [ { "message": "Clean up confusing error message on unhandled endpoint\n\nIt currently returns something like:\n\n```\n\"No feature for name [_siohgjoidfhjfihfg]\"\n```\n\nWhich is not the most understandable message, this changes it to be a\nlittle more readable.\n\nResolves #10946" } ], "files": [ { "diff": "@@ -77,7 +77,7 @@ public static Feature fromName(String name) {\n return feature;\n }\n }\n- throw new IllegalArgumentException(\"No feature for name [\" + name + \"]\");\n+ throw new IllegalArgumentException(\"No endpoint or operation is available at [\" + name + \"]\");\n }\n \n public static Feature fromId(byte id) {", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexRequest.java", "status": "modified" } ] }
{ "body": "We were swallowing the original exception when creating a client with bad credentials.\nSo even in `TRACE` log level, nothing useful were coming out of it.\nWith this commit, it now prints:\n\n```\n[2016-09-27 15:54:13,118][ERROR][cloud.azure.storage ] [node_s0] can not create azure storage client: Storage Key is not a valid base64 encoded string.\n```\n\nCloses #20633.\n\nThis PR applies to 2.4 branch. When accepted, I'll also apply it to master / 5.x branch (and 5.0? cc @clintongormley)\n", "comments": [ { "body": "Nice!\n", "created_at": "2016-09-27T14:31:28Z" } ], "number": 20669, "title": "Fix logger when you can not create an azure storage client" }
{ "body": "We were swallowing the original exception when creating a client with bad credentials.\nSo even in `TRACE` log level, nothing useful were coming out of it.\nWith this commit, it now prints:\n\n```\n[2016-09-27 15:54:13,118][ERROR][cloud.azure.storage ] [node_s0] can not create azure storage client: Storage Key is not a valid base64 encoded string.\n```\n\nCloses #20633.\n\nBackport of #20669 for master branch (6.0)\nWill be backported to 5.x branch at least.\n", "number": 20670, "review_comments": [], "title": "Fix logger when you can not create an azure storage client" }
{ "commits": [ { "message": "Fix logger when you can not create an azure storage client\n\nWe were swallowing the original exception when creating a client with bad credentials.\nSo even in `TRACE` log level, nothing useful were coming out of it.\nWith this commit, it now prints:\n\n```\n[2016-09-27 15:54:13,118][ERROR][cloud.azure.storage ] [node_s0] can not create azure storage client: Storage Key is not a valid base64 encoded string.\n```\n\nCloses #20633.\n\nBackport of #20669 for master branch (6.0)" } ], "files": [ { "diff": "@@ -175,7 +175,7 @@ public void createContainer(String account, LocationMode mode, String container)\n blobContainer.createIfNotExists();\n } catch (IllegalArgumentException e) {\n logger.trace((Supplier<?>) () -> new ParameterizedMessage(\"fails creating container [{}]\", container), e);\n- throw new RepositoryException(container, e.getMessage());\n+ throw new RepositoryException(container, e.getMessage(), e);\n }\n }\n ", "filename": "plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageServiceImpl.java", "status": "modified" } ] }
{ "body": "When using index name date math expressions `<logstash-{now/M}>`, the Get API works and is able to resolve the index. However, if you try to get the same value using the MultiGet API it fails as it treats the index name as a concrete index name rather than allowing for the possibility that it could be an expression. I think the behavior of the APIs should be consistent.\n\nI tend to question the usefulness of the date math expression in a get request. I could see one scenario where you know the id of a document (the id is consistent) and want to get the \"current\" one using a date math expression.\n", "comments": [ { "body": "> I tend to question the usefulness of the date math expression in a get request. \n\nAgreed\n\n> I could see one scenario where you know the id of a document (the id is consistent) and want to get the \"current\" one using a date math expression.\n\nThat's very much an edge case. On the other hand, support date math expressions everywhere doesn't cost much... That said, I think I'm leaning more towards \"why would you do that?\"\n", "created_at": "2016-04-26T11:10:18Z" }, { "body": "Looks like there is a bug in multiget, which may impact alias lookup too - let's just make multiget consistent with get (and fix the bug)\n", "created_at": "2016-05-06T10:03:09Z" } ], "number": 17957, "title": "Get and MultiGet inconsistency with date math expressions" }
{ "body": "Fixed date math expression support in multi get requests.\nDate math index/alias expressions in `mget` will now be resolved to a concrete single index instead of failing the mget item with a `IndexNotFoundException`.\nNote `mget` uses the same `IndicesOptions` as `get`; which prevents the expression from resolving to multiple indices (i.e. wilcards).\nAdded integration test to verify multi index aliases do not fail the entire mget request.\n\nCloses #17957\n", "number": 20659, "review_comments": [ { "body": "I think catching index not found only is not enough. Other exceptions can be thrown while resolving indices. If you change this to `Exception`, you would also fix #20845 I think. Would you mind doing that and adding a small test for that to `SimpleMgetIT` ?\n", "created_at": "2016-10-10T20:39:27Z" }, { "body": "as mentioned below, this comment is not 100% correct. Other exceptions could be thrown. I am thinking we could even drop the comment, not sure what value it adds. Thoughts?\n", "created_at": "2016-10-10T20:41:08Z" }, { "body": "Agreed. Good catch with the multi index aliases. Another possible exception is `ElasticsearchParseException` when parsing the date math expression.\n", "created_at": "2016-10-12T04:48:02Z" }, { "body": "Agreed. Other exceptions are possible. Will remove the comment; everything should be documented in `IndexNameExpressionResolver`. Looks like it can throw at least these exceptions: `IndexNotFoundException`, `IndexClosedException`, `ElasticsearchParseException`, `IllegalArgumentException`. I noticed most of the public methods do not mention all of these; should I update the documentation in this PR?\n", "created_at": "2016-10-12T05:00:38Z" }, { "body": "can you move this block back within the try catch please? `resolveIndexRouting` may throw exceptions too, and we don't want those to fail the whole request. Maybe we can then just throw exception from within the if. See https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java#L414 . Maybe we should write a test for this too :) (if you are up for it).\n", "created_at": "2016-10-12T09:20:12Z" }, { "body": "Can you use `.get()` instead of `.execute().actionGet();`?\n", "created_at": "2016-10-12T09:20:14Z" }, { "body": "I was looking at that code as well, but I don't think it will reach that line. Since the `aliasOrIndex` parameter can only be a concrete index the code should return on this line? https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java#L405\n\nI can move that code into the try catch if we want to be defensive. Perhaps its better to move everything in that for loop into the try catch?\n\nAnother question: is it possible for an index to be deleted (or some other cluster change) while the code is inside that loop? For example when `indexNameExpressionResolver.concreteSingleIndex` is called the index exists, but when it gets down to `ShardId shardId =....` the index has been deleted?\nThanks\n", "created_at": "2016-10-13T04:34:14Z" }, { "body": "I wouldn't move everything within the catch, just the part that you moved out as part of this PR. We use the same cluster state throughout the execution of the whole multi get transport action, so deleted indices once the cluster state has been retrieved won't be seen by it. 
What will happen is that the shard action will fail as the shard is not there anymore, hence that specific get item will fail.\n", "created_at": "2016-10-13T13:34:26Z" }, { "body": "Done.\n", "created_at": "2016-10-14T05:00:34Z" }, { "body": "Got it, just pushed the change. I think that exception in the routing code is checking for a multi index alias, so that should be covered in the test `SimpleMgetIT#testThatMgetShouldWorkWithMultiIndexAlias`. However I'm pretty sure it wont actually reach that code since `indexNameExpressionResolver` will throw an exception before.\nThanks for the review!\n", "created_at": "2016-10-14T05:10:05Z" } ], "title": "Fixed date math expression support in multi get requests." }
{ "commits": [ { "message": "Fixed date math expression support in multi get requests.\nDate math index/alias expressions in mget will now be resolved to a concrete single index instead of failing the mget item with a IndexNotFoundException.\nNote mget uses the same IndicesOptions as get; which prevents the expression from resolving to multiple indices (i.e. wilcards).\nAdded integration test to verify multi index aliases do not fail the entire mget request.\n\nCloses #17957" }, { "message": "Moved mget item routing code back into the try catch.\nUpdated unit test to use .get() instead of .execute().actionGet()." } ], "files": [ { "diff": "@@ -64,16 +64,11 @@ protected void doExecute(final MultiGetRequest request, final ActionListener<Mul\n for (int i = 0; i < request.items.size(); i++) {\n MultiGetRequest.Item item = request.items.get(i);\n \n- if (!clusterState.metaData().hasConcreteIndex(item.index())) {\n- responses.set(i, newItemFailure(item.index(), item.type(), item.id(), new IndexNotFoundException(item.index())));\n- continue;\n- }\n-\n String concreteSingleIndex;\n try {\n- item.routing(clusterState.metaData().resolveIndexRouting(item.parent(), item.routing(), item.index()));\n concreteSingleIndex = indexNameExpressionResolver.concreteSingleIndex(clusterState, item).getName();\n \n+ item.routing(clusterState.metaData().resolveIndexRouting(item.parent(), item.routing(), concreteSingleIndex));\n if ((item.routing() == null) && (clusterState.getMetaData().routingRequired(concreteSingleIndex, item.type()))) {\n String message = \"routing is required for [\" + concreteSingleIndex + \"]/[\" + item.type() + \"]/[\" + item.id() + \"]\";\n responses.set(i, newItemFailure(concreteSingleIndex, item.type(), item.id(), new IllegalArgumentException(message)));", "filename": "core/src/main/java/org/elasticsearch/action/get/TransportMultiGetAction.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n import org.elasticsearch.action.delete.DeleteResponse;\n import org.elasticsearch.action.get.GetResponse;\n+import org.elasticsearch.action.get.MultiGetResponse;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -77,6 +78,17 @@ public void testIndexNameDateMathExpressions() {\n assertThat(getResponse.isExists(), is(true));\n assertThat(getResponse.getId(), equalTo(\"3\"));\n \n+ MultiGetResponse mgetResponse = client().prepareMultiGet()\n+ .add(dateMathExp1, \"type\", \"1\")\n+ .add(dateMathExp2, \"type\", \"2\")\n+ .add(dateMathExp3, \"type\", \"3\").get();\n+ assertThat(mgetResponse.getResponses()[0].getResponse().isExists(), is(true));\n+ assertThat(mgetResponse.getResponses()[0].getResponse().getId(), equalTo(\"1\"));\n+ assertThat(mgetResponse.getResponses()[1].getResponse().isExists(), is(true));\n+ assertThat(mgetResponse.getResponses()[1].getResponse().getId(), equalTo(\"2\"));\n+ assertThat(mgetResponse.getResponses()[2].getResponse().isExists(), is(true));\n+ assertThat(mgetResponse.getResponses()[2].getResponse().getId(), equalTo(\"3\"));\n+\n IndicesStatsResponse indicesStatsResponse = client().admin().indices().prepareStats(dateMathExp1, dateMathExp2, dateMathExp3).get();\n assertThat(indicesStatsResponse.getIndex(index1), notNullValue());\n assertThat(indicesStatsResponse.getIndex(index2), notNullValue());", "filename": 
"core/src/test/java/org/elasticsearch/indices/DateMathIndexExpressionsIntegrationIT.java", "status": "modified" }, { "diff": "@@ -36,12 +36,14 @@\n import static org.elasticsearch.action.support.WriteRequest.RefreshPolicy.IMMEDIATE;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.hasKey;\n import static org.hamcrest.Matchers.is;\n import static org.hamcrest.Matchers.nullValue;\n \n public class SimpleMgetIT extends ESIntegTestCase {\n+\n public void testThatMgetShouldWorkWithOneIndexMissing() throws IOException {\n createIndex(\"test\");\n \n@@ -51,7 +53,7 @@ public void testThatMgetShouldWorkWithOneIndexMissing() throws IOException {\n MultiGetResponse mgetResponse = client().prepareMultiGet()\n .add(new MultiGetRequest.Item(\"test\", \"test\", \"1\"))\n .add(new MultiGetRequest.Item(\"nonExistingIndex\", \"test\", \"1\"))\n- .execute().actionGet();\n+ .get();\n assertThat(mgetResponse.getResponses().length, is(2));\n \n assertThat(mgetResponse.getResponses()[0].getIndex(), is(\"test\"));\n@@ -63,18 +65,44 @@ public void testThatMgetShouldWorkWithOneIndexMissing() throws IOException {\n assertThat(((ElasticsearchException) mgetResponse.getResponses()[1].getFailure().getFailure()).getIndex().getName(),\n is(\"nonExistingIndex\"));\n \n-\n mgetResponse = client().prepareMultiGet()\n .add(new MultiGetRequest.Item(\"nonExistingIndex\", \"test\", \"1\"))\n- .execute().actionGet();\n+ .get();\n assertThat(mgetResponse.getResponses().length, is(1));\n assertThat(mgetResponse.getResponses()[0].getIndex(), is(\"nonExistingIndex\"));\n assertThat(mgetResponse.getResponses()[0].isFailed(), is(true));\n assertThat(mgetResponse.getResponses()[0].getFailure().getMessage(), is(\"no such index\"));\n assertThat(((ElasticsearchException) mgetResponse.getResponses()[0].getFailure().getFailure()).getIndex().getName(),\n is(\"nonExistingIndex\"));\n+ }\n+\n+ public void testThatMgetShouldWorkWithMultiIndexAlias() throws IOException {\n+ assertAcked(prepareCreate(\"test\").addAlias(new Alias(\"multiIndexAlias\")));\n+ assertAcked(prepareCreate(\"test2\").addAlias(new Alias(\"multiIndexAlias\")));\n+\n+ client().prepareIndex(\"test\", \"test\", \"1\").setSource(jsonBuilder().startObject().field(\"foo\", \"bar\").endObject())\n+ .setRefreshPolicy(IMMEDIATE).get();\n+\n+ MultiGetResponse mgetResponse = client().prepareMultiGet()\n+ .add(new MultiGetRequest.Item(\"test\", \"test\", \"1\"))\n+ .add(new MultiGetRequest.Item(\"multiIndexAlias\", \"test\", \"1\"))\n+ .get();\n+ assertThat(mgetResponse.getResponses().length, is(2));\n \n+ assertThat(mgetResponse.getResponses()[0].getIndex(), is(\"test\"));\n+ assertThat(mgetResponse.getResponses()[0].isFailed(), is(false));\n+\n+ assertThat(mgetResponse.getResponses()[1].getIndex(), is(\"multiIndexAlias\"));\n+ assertThat(mgetResponse.getResponses()[1].isFailed(), is(true));\n+ assertThat(mgetResponse.getResponses()[1].getFailure().getMessage(), containsString(\"more than one indices\"));\n \n+ mgetResponse = client().prepareMultiGet()\n+ .add(new MultiGetRequest.Item(\"multiIndexAlias\", \"test\", \"1\"))\n+ .get();\n+ assertThat(mgetResponse.getResponses().length, is(1));\n+ assertThat(mgetResponse.getResponses()[0].getIndex(), is(\"multiIndexAlias\"));\n+ assertThat(mgetResponse.getResponses()[0].isFailed(), 
is(true));\n+ assertThat(mgetResponse.getResponses()[0].getFailure().getMessage(), containsString(\"more than one indices\"));\n }\n \n public void testThatParentPerDocumentIsSupported() throws Exception {\n@@ -95,7 +123,7 @@ public void testThatParentPerDocumentIsSupported() throws Exception {\n MultiGetResponse mgetResponse = client().prepareMultiGet()\n .add(new MultiGetRequest.Item(indexOrAlias(), \"test\", \"1\").parent(\"4\"))\n .add(new MultiGetRequest.Item(indexOrAlias(), \"test\", \"1\"))\n- .execute().actionGet();\n+ .get();\n \n assertThat(mgetResponse.getResponses().length, is(2));\n assertThat(mgetResponse.getResponses()[0].isFailed(), is(false));\n@@ -163,7 +191,7 @@ public void testThatRoutingPerDocumentIsSupported() throws Exception {\n MultiGetResponse mgetResponse = client().prepareMultiGet()\n .add(new MultiGetRequest.Item(indexOrAlias(), \"test\", id).routing(routingOtherShard))\n .add(new MultiGetRequest.Item(indexOrAlias(), \"test\", id))\n- .execute().actionGet();\n+ .get();\n \n assertThat(mgetResponse.getResponses().length, is(2));\n assertThat(mgetResponse.getResponses()[0].isFailed(), is(false));", "filename": "core/src/test/java/org/elasticsearch/mget/SimpleMgetIT.java", "status": "modified" } ] }
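To make the mget change concrete, here is a sketch in the style of the integration tests added in the diff above; it assumes the same `ESIntegTestCase`/`client()` test harness those tests use, and the index name and ids are invented:

```java
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.test.ESIntegTestCase;

public class DateMathMgetSketchIT extends ESIntegTestCase {

    public void testMgetResolvesDateMathLikeGet() {
        // <logs-{now/d}> resolves to a concrete daily index, e.g. logs-2016.10.14
        String dateMathExp = "<logs-{now/d}>";
        client().prepareIndex(dateMathExp, "type", "1").setSource("{}").get();

        // The Get API already resolved the expression before this change...
        GetResponse getResponse = client().prepareGet(dateMathExp, "type", "1").get();
        assertTrue(getResponse.isExists());

        // ...and with the fix, multi-get resolves it the same way instead of
        // failing the item with an IndexNotFoundException.
        MultiGetResponse mgetResponse = client().prepareMultiGet()
                .add(dateMathExp, "type", "1")
                .get();
        assertFalse(mgetResponse.getResponses()[0].isFailed());
        assertTrue(mgetResponse.getResponses()[0].getResponse().isExists());
    }
}
```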
{ "body": "This commit changes the default behavior of `_flush` to block if other flushes are ongoing.\nThis also removes the use of `FlushNotAllowedException` and instead simply return immediately\nby skipping the flush. Users should be aware if they set this option that the flush might or might\nnot flush everything to disk ie. no transactional behavior of some sort.\n\nCloses #20569\n", "comments": [ { "body": "LGTM. Thx @s1monw !\n", "created_at": "2016-09-21T09:36:15Z" }, { "body": "FYI - I will open a sep issue for the migration guide since this goes in to master and we only need that entry on 5.x and 5.0\n", "created_at": "2016-09-21T09:52:26Z" } ], "number": 20597, "title": "`_flush` should block by default" }
{ "body": "This is a issue in all 2.x releases that if we run into a FlushNotAllowedEngineException\non a replica (ie. a flush is already running) we fail the replica. We should just ignore this\nexception and not fail the shard.\n\nNote: this is against 2.x only. Master changed in #20597\nRelates to #20569\n", "number": 20632, "review_comments": [ { "body": "can we assert that ignoreReplicaException is false? it will be weird to have something fail a shard but not reported.\n", "created_at": "2016-09-22T14:49:56Z" } ], "title": "Don't fail replica if FlushNotAllowedEngineException is thrown" }
{ "commits": [ { "message": "Don't fail replica if FlushNotAllowedEngineException is thrown\n\nThis is a issue in all 2.x releases that if we run into a FlushNotAllowedEngineException\non a replica (ie. a flush is already running) we fail the replica. We should just ignore this\nexcepiton and not fail the shard.\n\nNote: this is against 2.x only. Master changed in #20597\nRelates to #20569" }, { "message": "Ensure FlushNotAllowedEngineException don't fail shard but still get reported to the user" }, { "message": "also make sure we respect mustFailReplica on the primary" }, { "message": "Add assertions and a test that this really fixed the issue" } ], "files": [ { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.admin.indices.flush;\n \n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionWriteResponse;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.replication.TransportReplicationAction;\n@@ -31,6 +32,7 @@\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.engine.FlushNotAllowedEngineException;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -91,4 +93,15 @@ protected ClusterBlockLevel indexBlockLevel() {\n protected boolean shouldExecuteReplication(Settings settings) {\n return true;\n }\n+\n+ @Override\n+ protected boolean mustFailReplica(Throwable e) {\n+ // if we are running flush ith wait_if_ongoing=false (default) we might get a FlushNotAllowedEngineException from the\n+ // replica that is a signal that there is another flush ongoing and we stepped out. This behavior has changed in 5.x\n+ // where we don't throw an exception anymore. 
In such a case we ignore the exception an do NOT fail the replica.\n+ if (ExceptionsHelper.unwrapCause(e).getClass() == FlushNotAllowedEngineException.class) {\n+ return false;\n+ }\n+ return super.mustFailReplica(e);\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportShardFlushAction.java", "status": "modified" }, { "diff": "@@ -205,6 +205,14 @@ protected boolean ignoreReplicaException(Throwable e) {\n return false;\n }\n \n+ /**\n+ * Returns <code>true</code> iff the replica must be failed if it threw the given exception.\n+ * This defaults to the inverse of {@link #ignoreReplicaException(Throwable)}\n+ */\n+ protected boolean mustFailReplica(Throwable e) {\n+ return ignoreReplicaException(e) == false;\n+ }\n+\n protected boolean isConflictException(Throwable e) {\n Throwable cause = ExceptionsHelper.unwrapCause(e);\n // on version conflict or document missing, it means\n@@ -360,7 +368,8 @@ private void failReplicaIfNeeded(Throwable t) {\n String index = request.shardId().getIndex();\n int shardId = request.shardId().id();\n logger.trace(\"failure on replica [{}][{}], action [{}], request [{}]\", t, index, shardId, actionName, request);\n- if (ignoreReplicaException(t) == false) {\n+ if (mustFailReplica(t)) {\n+ assert ignoreReplicaException(t) == false;\n IndexService indexService = indicesService.indexService(index);\n if (indexService == null) {\n logger.debug(\"ignoring failed replica [{}][{}] because index was already removed.\", index, shardId);\n@@ -927,7 +936,8 @@ public void handleResponse(TransportResponse.Empty vResponse) {\n public void handleException(TransportException exp) {\n onReplicaFailure(nodeId, exp);\n logger.trace(\"[{}] transport failure during replica request [{}], action [{}]\", exp, node, replicaRequest, transportReplicaAction);\n- if (ignoreReplicaException(exp) == false) {\n+ if (mustFailReplica(exp)) {\n+ assert ignoreReplicaException(exp) == false;\n logger.warn(\"{} failed to perform {} on node {}\", exp, shardId, transportReplicaAction, node);\n shardStateAction.shardFailed(shard, indexUUID, \"failed to perform \" + actionName + \" on replica on node \" + node, exp);\n }", "filename": "core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java", "status": "modified" }, { "diff": "@@ -19,17 +19,25 @@\n package org.elasticsearch.indices.flush;\n \n import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n+import org.elasticsearch.action.admin.indices.flush.FlushRequest;\n import org.elasticsearch.action.admin.indices.flush.FlushResponse;\n import org.elasticsearch.action.admin.indices.flush.SyncedFlushResponse;\n import org.elasticsearch.action.admin.indices.stats.IndexStats;\n import org.elasticsearch.action.admin.indices.stats.ShardStats;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.routing.IndexRoutingTable;\n import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.cluster.routing.ShardRoutingState;\n import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.Engine;\n+import org.elasticsearch.index.shard.IndexShard;\n import 
org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.junit.Test;\n@@ -53,6 +61,7 @@ public class FlushIT extends ESIntegTestCase {\n public void testWaitIfOngoing() throws InterruptedException {\n createIndex(\"test\");\n ensureGreen(\"test\");\n+ ClusterStateResponse beforeTestResponse = client().admin().cluster().prepareState().get();\n final int numIters = scaledRandomIntBetween(10, 30);\n for (int i = 0; i < numIters; i++) {\n for (int j = 0; j < 10; j++) {\n@@ -84,6 +93,84 @@ public void onFailure(Throwable e) {\n latch.await();\n assertThat(errors, emptyIterable());\n }\n+ ClusterStateResponse afterTestResponse = client().admin().cluster().prepareState().get();\n+ IndexRoutingTable afterRoutingTable = afterTestResponse.getState().getRoutingTable().index(\"test\");\n+ IndexRoutingTable beforeRoutingTable = beforeTestResponse.getState().getRoutingTable().index(\"test\");\n+ assertEquals(afterRoutingTable, beforeRoutingTable);\n+\n+ }\n+\n+ /**\n+ * We test here that failing with FlushNotAllowedEngineException doesn't fail the shards since it's whitelisted.\n+ * see #20632\n+ * @throws InterruptedException\n+ */\n+ @Test\n+ public void testDontWaitIfOngoing() throws InterruptedException {\n+ internalCluster().ensureAtLeastNumDataNodes(2);\n+ prepareCreate(\"test\").setSettings(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1).get();\n+ ensureGreen(\"test\");\n+ ClusterStateResponse beforeTestResponse = client().admin().cluster().prepareState().get();\n+ List<ShardRouting> shardRoutings = beforeTestResponse.getState().getRoutingTable().index(\"test\")\n+ .shardsWithState(ShardRoutingState.STARTED);\n+ ShardRouting theReplica = null;\n+ for (ShardRouting shardRouting : shardRoutings) {\n+ if (shardRouting.primary() == false) {\n+ theReplica = shardRouting;\n+ break;\n+ }\n+ }\n+ assertNotNull(theReplica);\n+ DiscoveryNode discoveryNode = beforeTestResponse.getState().nodes().get(theReplica.currentNodeId());\n+ final IndicesService instance = internalCluster().getInstance(IndicesService.class, discoveryNode.getName());\n+ final ShardRouting routing = theReplica;\n+ final AtomicBoolean run = new AtomicBoolean(true);\n+ Thread t = new Thread() {\n+ @Override\n+ public void run() {\n+ IndexService indexService = instance.indexService(routing.index());\n+ IndexShard shard = indexService.shard(routing.id());\n+ while(run.get()) {\n+ shard.flush(new FlushRequest().waitIfOngoing(true));\n+ }\n+ }\n+ };\n+ t.start();\n+ final int numIters = scaledRandomIntBetween(10, 30);\n+ for (int i = 0; i < numIters; i++) {\n+ for (int j = 0; j < 10; j++) {\n+ client().prepareIndex(\"test\", \"test\").setSource(\"{}\").get();\n+ }\n+ final CountDownLatch latch = new CountDownLatch(10);\n+ final CopyOnWriteArrayList<Throwable> errors = new CopyOnWriteArrayList<>();\n+ for (int j = 0; j < 10; j++) {\n+ client().admin().indices().prepareFlush(\"test\").setWaitIfOngoing(false).execute(new ActionListener<FlushResponse>() {\n+ @Override\n+ public void onResponse(FlushResponse flushResponse) {\n+ try {\n+ latch.countDown();\n+ } catch (Throwable ex) {\n+ onFailure(ex);\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable e) {\n+ errors.add(e);\n+ latch.countDown();\n+ }\n+ });\n+ }\n+ latch.await();\n+ assertThat(errors, emptyIterable());\n+ }\n+ run.set(false);\n+ t.join();\n+ ClusterStateResponse afterTestResponse = 
client().admin().cluster().prepareState().get();\n+ IndexRoutingTable afterRoutingTable = afterTestResponse.getState().getRoutingTable().index(\"test\");\n+ IndexRoutingTable beforeRoutingTable = beforeTestResponse.getState().getRoutingTable().index(\"test\");\n+ assertEquals(afterRoutingTable, beforeRoutingTable);\n+\n }\n \n public void testSyncedFlush() throws ExecutionException, InterruptedException, IOException {", "filename": "core/src/test/java/org/elasticsearch/indices/flush/FlushIT.java", "status": "modified" } ] }
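The whitelisting only matters when a flush is requested with `wait_if_ongoing=false` while another flush is already running. Below is a sketch of the two client-side variants, modeled on the `FlushIT` test in the diff above and assuming the same `ESIntegTestCase` harness; index name and assertions are illustrative:

```java
import org.elasticsearch.action.admin.indices.flush.FlushResponse;
import org.elasticsearch.test.ESIntegTestCase;

public class FlushWaitSketchIT extends ESIntegTestCase {

    public void testFlushVariants() {
        createIndex("test");
        ensureGreen("test");
        client().prepareIndex("test", "test").setSource("{}").get();

        // Blocking variant: wait for any ongoing flush to finish, then flush.
        FlushResponse waiting = client().admin().indices()
                .prepareFlush("test").setWaitIfOngoing(true).get();
        assertEquals(0, waiting.getFailedShards());

        // Non-blocking variant (the 2.x default): if another flush is already
        // running, the shard-level response may report a
        // FlushNotAllowedEngineException instead of flushing again. The fix in
        // this PR keeps that exception from failing the replica shard.
        FlushResponse nonWaiting = client().admin().indices()
                .prepareFlush("test").setWaitIfOngoing(false).get();
        assertNotNull(nonWaiting);
    }
}
```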
{ "body": "Today we use TransportReplicationAction to flush a shard. Yet, this is problematic\nsince a shard might throw an exception that is just fine to catch and continue and should never\nresult in failing a shard. If a fatal exception happens that should fail the shard this happen on\nthe engine level. In contrast to write actions we should never fail a shards in such a situation.\n\nThis is a blocker for me since `_flush` should never fail a shard unless the engine decides to. \n", "comments": [ { "body": "apparently this is also in 2.x \n", "created_at": "2016-09-19T20:01:15Z" }, { "body": "There is a couple of things at work here.\n\nMost importantly, a flush request on a shard now may throw a `FlushNotAllowedEngineException` if the shard is already flushing. This will cause unneeded shard failing as the primary will fail the replica (as is the standard logic for TRA).\n\nSecond, the reason flush is a TRA is because we have seen it in the past as a \"better refresh\" and moved it to TRA when we moved refresh to inherit from TRA. Refresh needs to be part of the TRA logic in order to honor the sequence `write, refresh, search` and guarantee that the docs in the write are returned.\n\nThe logic in TRA is such that any operation that failed on a replica, doesn't matter why, means that the replica is not a good copy of the current state of the replication group. Normally this means it misses some documents but, with the current hard semantics of refresh, failing to refresh quanitfies to the same thing.\n\nIMO we can choose to do any of the following:\n\n1) Reduce the chance of flush hitting exceptions on replicas by:\n 1a) catching the `FlushNotAllowedEngineException` in `TransportFlushAction` and ignoring it (we should change the docs of the flush action here)\n 1b) remove the `wait_if_ongoing` flag from FlushRequest and make it always wait.\n2) Change the semantics of `flush` to not be a superset of refresh and either move it from TRA or relax TRA's requirements for it.\n3) If needed, relax the requirement of `refresh` & `flush`.\n\nI'm in favor of 1b for 5.0. 2.x is better served with 1a .\n", "created_at": "2016-09-19T20:23:15Z" }, { "body": "I don't understand what guarantees you are talking about. The `_flush` doesn't have any visibility guarantees. It's only flush. We just don't need to fail the replica on flush? We can then just let exception reporting happen. I am also leaning towards always block but I think failing a replica on flush it the engines job and should not happen on such a high level\n", "created_at": "2016-09-19T20:28:39Z" }, { "body": "> Change the semantics of flush to not be a superset of refresh and either move it from TRA or relax TRA's requirements for it.\n\nwhy is flush a superset of refresh? can you explain?\n", "created_at": "2016-09-20T07:36:34Z" }, { "body": "> why is flush a superset of refresh? can you explain?\n\nThat's a good question. If I remember correctly, the argument was that if you see flush as \"take everything you have and make lucene segments out of it\" it also implies making it visible for searchers. If see it as just \"persist data and trim translog\" it doesn't have to imply refresh imo. I'm good with both.\n", "created_at": "2016-09-20T07:43:44Z" }, { "body": "> it also implies making it visible for searchers. \n\nthis is not true, you have to call `_refresh` to make it visible. 
\n\nRefresh:\n- write all segments to disk\n- reopen the lucene index reader\n- swap the new reader in\n\nFlush:\n- write all segments to disk\n- fsync them\n\nbut even if we'd be a super set I'd argue that if a refresh fails it's the engines job to decide if the exception was fatal (btw. any exception in InternalEngine#refresh()) is fatal. So in TRA the only think we should do is either report to the user or retry (we know exactly when we can retry) the shard failure is the engines job. The only exception here is `TransportWriteAction` since it can make assumptions of consistency.\n", "created_at": "2016-09-20T07:48:28Z" } ], "number": 20569, "title": "`_flush` fails the replica shards in the case of an uncaught exception" }
{ "body": "This is a issue in all 2.x releases that if we run into a FlushNotAllowedEngineException\non a replica (ie. a flush is already running) we fail the replica. We should just ignore this\nexception and not fail the shard.\n\nNote: this is against 2.x only. Master changed in #20597\nRelates to #20569\n", "number": 20632, "review_comments": [ { "body": "can we assert that ignoreReplicaException is false? it will be weird to have something fail a shard but not reported.\n", "created_at": "2016-09-22T14:49:56Z" } ], "title": "Don't fail replica if FlushNotAllowedEngineException is thrown" }
{ "commits": [ { "message": "Don't fail replica if FlushNotAllowedEngineException is thrown\n\nThis is a issue in all 2.x releases that if we run into a FlushNotAllowedEngineException\non a replica (ie. a flush is already running) we fail the replica. We should just ignore this\nexcepiton and not fail the shard.\n\nNote: this is against 2.x only. Master changed in #20597\nRelates to #20569" }, { "message": "Ensure FlushNotAllowedEngineException don't fail shard but still get reported to the user" }, { "message": "also make sure we respect mustFailReplica on the primary" }, { "message": "Add assertions and a test that this really fixed the issue" } ], "files": [ { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.admin.indices.flush;\n \n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionWriteResponse;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.replication.TransportReplicationAction;\n@@ -31,6 +32,7 @@\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.engine.FlushNotAllowedEngineException;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -91,4 +93,15 @@ protected ClusterBlockLevel indexBlockLevel() {\n protected boolean shouldExecuteReplication(Settings settings) {\n return true;\n }\n+\n+ @Override\n+ protected boolean mustFailReplica(Throwable e) {\n+ // if we are running flush ith wait_if_ongoing=false (default) we might get a FlushNotAllowedEngineException from the\n+ // replica that is a signal that there is another flush ongoing and we stepped out. This behavior has changed in 5.x\n+ // where we don't throw an exception anymore. 
In such a case we ignore the exception an do NOT fail the replica.\n+ if (ExceptionsHelper.unwrapCause(e).getClass() == FlushNotAllowedEngineException.class) {\n+ return false;\n+ }\n+ return super.mustFailReplica(e);\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportShardFlushAction.java", "status": "modified" }, { "diff": "@@ -205,6 +205,14 @@ protected boolean ignoreReplicaException(Throwable e) {\n return false;\n }\n \n+ /**\n+ * Returns <code>true</code> iff the replica must be failed if it threw the given exception.\n+ * This defaults to the inverse of {@link #ignoreReplicaException(Throwable)}\n+ */\n+ protected boolean mustFailReplica(Throwable e) {\n+ return ignoreReplicaException(e) == false;\n+ }\n+\n protected boolean isConflictException(Throwable e) {\n Throwable cause = ExceptionsHelper.unwrapCause(e);\n // on version conflict or document missing, it means\n@@ -360,7 +368,8 @@ private void failReplicaIfNeeded(Throwable t) {\n String index = request.shardId().getIndex();\n int shardId = request.shardId().id();\n logger.trace(\"failure on replica [{}][{}], action [{}], request [{}]\", t, index, shardId, actionName, request);\n- if (ignoreReplicaException(t) == false) {\n+ if (mustFailReplica(t)) {\n+ assert ignoreReplicaException(t) == false;\n IndexService indexService = indicesService.indexService(index);\n if (indexService == null) {\n logger.debug(\"ignoring failed replica [{}][{}] because index was already removed.\", index, shardId);\n@@ -927,7 +936,8 @@ public void handleResponse(TransportResponse.Empty vResponse) {\n public void handleException(TransportException exp) {\n onReplicaFailure(nodeId, exp);\n logger.trace(\"[{}] transport failure during replica request [{}], action [{}]\", exp, node, replicaRequest, transportReplicaAction);\n- if (ignoreReplicaException(exp) == false) {\n+ if (mustFailReplica(exp)) {\n+ assert ignoreReplicaException(exp) == false;\n logger.warn(\"{} failed to perform {} on node {}\", exp, shardId, transportReplicaAction, node);\n shardStateAction.shardFailed(shard, indexUUID, \"failed to perform \" + actionName + \" on replica on node \" + node, exp);\n }", "filename": "core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java", "status": "modified" }, { "diff": "@@ -19,17 +19,25 @@\n package org.elasticsearch.indices.flush;\n \n import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n+import org.elasticsearch.action.admin.indices.flush.FlushRequest;\n import org.elasticsearch.action.admin.indices.flush.FlushResponse;\n import org.elasticsearch.action.admin.indices.flush.SyncedFlushResponse;\n import org.elasticsearch.action.admin.indices.stats.IndexStats;\n import org.elasticsearch.action.admin.indices.stats.ShardStats;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.routing.IndexRoutingTable;\n import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.cluster.routing.ShardRoutingState;\n import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.Engine;\n+import org.elasticsearch.index.shard.IndexShard;\n import 
org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.junit.Test;\n@@ -53,6 +61,7 @@ public class FlushIT extends ESIntegTestCase {\n public void testWaitIfOngoing() throws InterruptedException {\n createIndex(\"test\");\n ensureGreen(\"test\");\n+ ClusterStateResponse beforeTestResponse = client().admin().cluster().prepareState().get();\n final int numIters = scaledRandomIntBetween(10, 30);\n for (int i = 0; i < numIters; i++) {\n for (int j = 0; j < 10; j++) {\n@@ -84,6 +93,84 @@ public void onFailure(Throwable e) {\n latch.await();\n assertThat(errors, emptyIterable());\n }\n+ ClusterStateResponse afterTestResponse = client().admin().cluster().prepareState().get();\n+ IndexRoutingTable afterRoutingTable = afterTestResponse.getState().getRoutingTable().index(\"test\");\n+ IndexRoutingTable beforeRoutingTable = beforeTestResponse.getState().getRoutingTable().index(\"test\");\n+ assertEquals(afterRoutingTable, beforeRoutingTable);\n+\n+ }\n+\n+ /**\n+ * We test here that failing with FlushNotAllowedEngineException doesn't fail the shards since it's whitelisted.\n+ * see #20632\n+ * @throws InterruptedException\n+ */\n+ @Test\n+ public void testDontWaitIfOngoing() throws InterruptedException {\n+ internalCluster().ensureAtLeastNumDataNodes(2);\n+ prepareCreate(\"test\").setSettings(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1).get();\n+ ensureGreen(\"test\");\n+ ClusterStateResponse beforeTestResponse = client().admin().cluster().prepareState().get();\n+ List<ShardRouting> shardRoutings = beforeTestResponse.getState().getRoutingTable().index(\"test\")\n+ .shardsWithState(ShardRoutingState.STARTED);\n+ ShardRouting theReplica = null;\n+ for (ShardRouting shardRouting : shardRoutings) {\n+ if (shardRouting.primary() == false) {\n+ theReplica = shardRouting;\n+ break;\n+ }\n+ }\n+ assertNotNull(theReplica);\n+ DiscoveryNode discoveryNode = beforeTestResponse.getState().nodes().get(theReplica.currentNodeId());\n+ final IndicesService instance = internalCluster().getInstance(IndicesService.class, discoveryNode.getName());\n+ final ShardRouting routing = theReplica;\n+ final AtomicBoolean run = new AtomicBoolean(true);\n+ Thread t = new Thread() {\n+ @Override\n+ public void run() {\n+ IndexService indexService = instance.indexService(routing.index());\n+ IndexShard shard = indexService.shard(routing.id());\n+ while(run.get()) {\n+ shard.flush(new FlushRequest().waitIfOngoing(true));\n+ }\n+ }\n+ };\n+ t.start();\n+ final int numIters = scaledRandomIntBetween(10, 30);\n+ for (int i = 0; i < numIters; i++) {\n+ for (int j = 0; j < 10; j++) {\n+ client().prepareIndex(\"test\", \"test\").setSource(\"{}\").get();\n+ }\n+ final CountDownLatch latch = new CountDownLatch(10);\n+ final CopyOnWriteArrayList<Throwable> errors = new CopyOnWriteArrayList<>();\n+ for (int j = 0; j < 10; j++) {\n+ client().admin().indices().prepareFlush(\"test\").setWaitIfOngoing(false).execute(new ActionListener<FlushResponse>() {\n+ @Override\n+ public void onResponse(FlushResponse flushResponse) {\n+ try {\n+ latch.countDown();\n+ } catch (Throwable ex) {\n+ onFailure(ex);\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable e) {\n+ errors.add(e);\n+ latch.countDown();\n+ }\n+ });\n+ }\n+ latch.await();\n+ assertThat(errors, emptyIterable());\n+ }\n+ run.set(false);\n+ t.join();\n+ ClusterStateResponse afterTestResponse = 
client().admin().cluster().prepareState().get();\n+ IndexRoutingTable afterRoutingTable = afterTestResponse.getState().getRoutingTable().index(\"test\");\n+ IndexRoutingTable beforeRoutingTable = beforeTestResponse.getState().getRoutingTable().index(\"test\");\n+ assertEquals(afterRoutingTable, beforeRoutingTable);\n+\n }\n \n public void testSyncedFlush() throws ExecutionException, InterruptedException, IOException {", "filename": "core/src/test/java/org/elasticsearch/indices/flush/FlushIT.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 2.4.0\n\n**Plugins installed**: [kopf]\n\n**JVM version**: 1.8.0_60\n\n**OS version**: OSX 10.11.6 locally, reproducible on CentOS 7.2.1511 as well\n\n**Description of the problem including expected versus actual behavior**:\n\nAfter upgrading to 2.4.0 our templates are no longer functioning from 2.3.3.\n\nWe have our template split up into several templates so we can specify default analyzers per language, and we share analyzers/filters/tokenizers between the templates.\n\n**Steps to reproduce**:\n1. Push a shared template with common analyzers/filters\n\n```\nPOST /_template/analyzers\n{\n \"template\": \"*\",\n \"order\": 20,\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"untouched_analyzer\": {\n \"type\": \"custom\",\n \"tokenizer\": \"keyword\",\n \"filter\": [ \"max_length\", \"lowercase\" ]\n },\n \"no_stopwords_analyzer\": {\n \"type\": \"custom\",\n \"tokenizer\": \"standard\",\n \"filter\": [ \"max_length\", \"standard\", \"lowercase\" ]\n }\n },\n \"tokenizer\": {\n \"standard\": {\n \"type\": \"standard\",\n \"version\": \"4.4\"\n }\n },\n \"filter\": {\n \"max_length\": {\n \"type\": \"length\",\n \"max\": \"32766\"\n }\n }\n }\n }\n}\n```\n1. Push a language specific analyzer template, which is a `czech` analyzer referencing a filter from the `analyzers` template.\n\n```\nPOST /_template/analyzer-cs\n{\n \"template\": \"*-cs\",\n \"order\": 30,\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"default\": {\n \"type\": \"czech\",\n \"filter\": [ \"max_length\" ]\n }\n }\n }\n }\n}\n```\n1. Push another language specific analyzer template which uses a custom analyzer, referencing some filters from the shared `analyzers` template.\n\n```\nPOST /_template/analyzer-en\n{\n \"template\": \"*-en\",\n \"order\": 30,\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"default\": {\n \"type\": \"custom\",\n \"tokenizer\": \"standard\",\n \"filter\": [ \"max_length\", \"standard\", \"lowercase\", \"stop_en\" ]\n }\n },\n \"filter\": {\n \"stop_en\": {\n \"type\": \"stop\",\n \"stopwords\": \"_english_\",\n \"ignore_case\": \"true\"\n }\n }\n }\n }\n}\n```\n\nThis action fails, resulting in:\n\n```\n2016-09-13 09:52:00,174][DEBUG][action.admin.indices.template.put] [Zartra] failed to put template [analyzers-en]\n[kOccXLKbR5CwAsXCt0v_3w] IndexCreationException[failed to create index]; nested: IllegalArgumentException[Custom Analyzer [default] failed to find filter under name [max_length]];\n```\n\nHowever, as you see the first `analyzer-cs` was able to reference the `max_length` filter without issue.\n\nThese templates worked fine in 2.3.3 and 2.3.4, haven't tried 2.3.5 but it sounds like there weren't any changes to that release. Reading over release notes for 2.4.0 don't seem to indicate anything that would prevent this functionality from being supported. 
\n\nWe were seeing similar issues with referencing the shared analyzers in our mapping template file.\n\n**Provide logs (if relevant)**:\n\n```\n 2016-09-13 09:52:00,174][DEBUG][action.admin.indices.template.put] [Zartra] failed to put template [analyzers-en]\n [kOccXLKbR5CwAsXCt0v_3w] IndexCreationException[failed to create index]; nested: IllegalArgumentException[Custom Analyzer [default] failed to find filter under name [max_length]];\n at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:360)\n at org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService.validateAndAddTemplate(MetaDataIndexTemplateService.java:196)\n at org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService.access$200(MetaDataIndexTemplateService.java:57)\n at org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService$2.execute(MetaDataIndexTemplateService.java:157)\n at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:45)\n at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n Caused by: java.lang.IllegalArgumentException: Custom Analyzer [default] failed to find filter under name [max_length]\n at org.elasticsearch.index.analysis.CustomAnalyzerProvider.build(CustomAnalyzerProvider.java:76)\n at org.elasticsearch.index.analysis.AnalysisService.<init>(AnalysisService.java:216)\n at org.elasticsearch.index.analysis.AnalysisService.<init>(AnalysisService.java:70)\n at sun.reflect.GeneratedConstructorAccessor6.newInstance(Unknown Source)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:50)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:886)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\n at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)\n at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)\n at 
org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:886)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\n at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)\n at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:886)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\n at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:201)\n at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:879)\n at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)\n at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)\n at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)\n at org.elasticsearch.common.inject.InjectorImpl.createChildInjector(InjectorImpl.java:154)\n at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:55)\n at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:358)\n ... 11 more\n```\n", "comments": [ { "body": "@johtani Could you take a look at this please?\n", "created_at": "2016-09-14T16:03:08Z" }, { "body": "Hi @mbarker \n\nBefore 2.4, we didn't validate any template on index template creation.\nWe added validation on index template creation in [this PR](https://github.com/elastic/elasticsearch/pull/8802).\nIn this PR, we validate each template as a complete mapping and settings,\nwe don't check combination because we cannot check all combination when template create. \nNow, you should add the `max_length` filter setting each template for avoiding error.\n\n@clintongormley Should we support to validate any template with only `*` template?\nI think we can not predict templates combination except `*` template. 
\n", "created_at": "2016-09-16T02:18:22Z" }, { "body": "@johtani It seems to me like validating a template should be scoped to the context it should have available during indexing, unless I'm misunderstanding how ES supports multiple templates.\n\nWe certainly could replicate all of the analyzers/filters to all the templates where they are used, I was trying to avoid duplicating parts of the template.\n\nIf it would help I can give more details about how we are trying to compose the templates and the rationale behind it.\n", "created_at": "2016-09-16T04:54:38Z" }, { "body": "Thanks. I will think what the validation is more useful for users.\n\nValidating template on index template creation is helpful for many users because users know the error earlier than validating creating index, especially logging use-case.\nHowever in your situation, this validation is not useful because template is bigger than before 2.4.\n\ne.g. If there is skip validation flag and simulate template with index name without creating index, is it useful?\n", "created_at": "2016-09-16T05:26:01Z" }, { "body": "There is something else going on here. I (like @mbarker ) didn't understand why this template succeeded:\n\n```\nPOST /_template/analyzer-cs\n{\n \"template\": \"*-cs\",\n \"order\": 30,\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"default\": {\n \"type\": \"czech\",\n \"filter\": [ \"max_length\" ]\n }\n }\n }\n }\n}\n```\n\nwhile this template didn't:\n\n```\nPOST /_template/analyzer-en\n{\n \"template\": \"*-en\",\n \"order\": 30,\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"default\": {\n \"type\": \"custom\",\n \"tokenizer\": \"standard\",\n \"filter\": [\n \"max_length\"\n ]\n }\n }\n }\n }\n}\n```\n\nThe reason is that the first analyzer definition is invalid, but is not validated:\n\n```\n \"analyzer\": {\n \"default\": {\n \"type\": \"czech\",\n \"filter\": [ \"max_length\" ]\n }\n }\n```\n\nYou're using the `czech` analyzer and passing in a parameter `filter`, which the czech analyzer doesn't recognise but silently ignores. We should add validation to analyzers to complain about any left over parameters.\n", "created_at": "2016-09-19T08:52:27Z" }, { "body": "> If there is skip validation flag and simulate template with index name without creating index, is it useful?\n\nI like the idea of a skip evaluation flag. Not sure the simulate bit is needed - easy to test by just creating an index.\n", "created_at": "2016-09-19T08:54:04Z" }, { "body": "Or elasticsearch can create template always and if a template has some errors elasticsearch only return warning message. what do you think? @clintongormley \n", "created_at": "2016-09-21T03:41:36Z" }, { "body": "I'm wondering how hard it would be to figure the parent templates of a given template (because the wildcard is more general) and apply them as well at validation time.\n", "created_at": "2016-09-21T07:30:50Z" }, { "body": "You could always make up name for the index to validate based on the template matching, ie replace `*` with `test` and resolve dependencies that way.\n\nThe only caveat would be users have to upload templates in dependency order.\n", "created_at": "2016-09-21T13:21:44Z" }, { "body": "@jpountz @mbarker Thanks for your comment. Ah, you are right. I try to fix it.\n", "created_at": "2016-09-21T14:02:07Z" }, { "body": "If templates are validated using their dependencies when creating or updating, then what will happen when a template's dependency is deleted or updated? For example template A depends on B:\n1. 
template B created\n2. template A created\n3. template B deleted\n4. ?\n\nHeres another thought: what if a template is invalid by itself, but the index is created with explicit settings that the template depends on?\n\nAnother possible issue: What if a template overrides something in a another template that results in a final mapping that is invalid?\n", "created_at": "2016-09-21T18:15:29Z" }, { "body": "@qwerty4030 Thanks for your comment!\n\nIn your 1st case, I think it is OK that elasticsearch does not validate at template deletion time.\nWe get the error only when user creates index using template B.\n\nIn your 3rd case, we can validate overriding only the parent templates.\n\nI' not sure your 2nd case. Could you explain any examples?\n", "created_at": "2016-09-22T08:33:00Z" }, { "body": "Just tried this in 2.3.2: \n1. create template that is invalid by itself (references `custom_analyzer` that is not defined).\n2. attempt to create index that matches this template (fails with `mapper_parsing_exception` as expected).\n3. attempt to create index that matches this template **and** specifies the `custom_analyzer` in the `settings` (this works as expected).\n\nTo reproduce:\n\n```\nPUT /_template/test-template\n{\n \"template\": \"data-test-*\",\n \"order\": 0,\n \"mappings\": {\n \"test-type\": {\n \"properties\": {\n \"field\": {\n \"type\": \"string\",\n \"analyzer\": \"custom_analyzer\"\n }\n }\n }\n }\n}\n\nPUT data-test-1\n\nPUT data-test-1\n{\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"custom_analyzer\": {\n \"type\": \"custom\",\n \"tokenizer\": \"standard\",\n \"filter\": [\n \"lowercase\"\n ]\n }\n }\n }\n }\n}\n```\n", "created_at": "2016-09-22T20:38:25Z" }, { "body": "Thanks. I think Elasticsearch should return the error in your cases, because that is tricky settings.\n", "created_at": "2016-10-02T13:12:06Z" }, { "body": "As I explained in #20919, this is a design error: templates should be validated only as much as any _incomplete_ object can be validated. Elasticsearch shoud reject a template only if it contains plain inconsistencies or grammar violations.\n\nOn top of that, **this is a breaking change** which you failed to document!\n\nI have a patch that fixes the too hasty conclusion that a referred analyzer should have been declared in the same template. I consider it elegant enough for production, but I didn't bother with updating the tests.\nIf anyone is interested, I'm willing to publish it.\n", "created_at": "2016-10-14T15:21:48Z" }, { "body": "The code is in [this branch](https://github.com/acarstoiu/elasticsearch/tree/wrong-template-validation). GitHub doesn't let me to submit a pull request against a tag, neither from a particular commit. :-1: \n", "created_at": "2016-10-14T17:08:06Z" }, { "body": "> I'm wondering how hard it would be to figure the parent templates of a given template (because the wildcard is more general) and apply them as well at validation time.\n\nIt has occurred to me that we can never do this absolutely correctly. eg a template for `logs-*` may coincide with a template for `*-2016-*` or it may not. We can't tell without an index name.\n\nI'm wondering if, instead of trying to be clever, we should just allow a flag to disable template validation and template PUT time.\n\n> The code is in this branch. GitHub doesn't let me to submit a pull request against a tag, neither from a particular commit. 👎\n\n@acarstoiu there's a good reason for that... we're not planning on making a release from a patch applied to a tag. 
\n", "created_at": "2016-10-17T08:42:48Z" }, { "body": "@clintongormley let's try to focus on the matter, as I'm not interested in your SCM rules and I'm sure you can cherry-pick the fixing commit anytime you want. I merely justified the lack of a proper pull request.\n\nI do not expect the commit to be accepted by itself, simply because it lacks the tests counterpart. I also did my best to preserve the coding style and the existing behaviour, but might not fit your taste. Other than that, it is a _good starting point_, we use it in production.\n\nOn the other hand, what you should do _urgently_ is to document this **truely breaking change**. Thank you.\n", "created_at": "2016-10-17T15:42:07Z" }, { "body": "@acarstoiu you don't seem to get how the open source world works. Frankly, your attitude makes me disinclined to converse with you further.\n", "created_at": "2016-10-17T17:34:00Z" }, { "body": "@clintongormley you do that, I agree.\nAs for the open source world, I already did two things: served this project a patch and urged repeatedly its team to publish this **breaking change** [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking-changes.html) for the benefit of _other users_.\n\nAnd by the way, you can find in there lots of deprecations under the \"breaking change\" label, but when a true one is discovered, you keep it hidden.\n", "created_at": "2016-10-20T14:18:11Z" }, { "body": "@clintongormley I for one agree with @acarstoiu and frankly, your dismissal of this _extremely inconvenient **non-documented** breaking change that has completely affected our cluster and must now be downgraded across the board_ is very concerning in the least. You are literally costing us time and money. \n\n> you don't seem to get how the open source world works. \n\nYou don't seem to understand how businesses work and seem to have very little regard as to the users of your product when you inexplicably hide information and then dismiss users that hit the issue because you don't like their attitude when they're justifiably irritated?\n\nIsn't one of the tenets of open source software transparency? If so, you've missed the mark on this one.\n", "created_at": "2016-11-01T23:36:44Z" }, { "body": "@clintongormley as mentioned in https://github.com/elastic/elasticsearch/issues/21105#issuecomment-263380122 I think this should be noted in the 2.4 migration docs as a breaking change. Took us a while to track down this issue while attempting to migrate our 2.3.x cluster.", "created_at": "2016-11-28T20:12:24Z" }, { "body": "@marshall007 Yes I agree - I've just been rather put off working on this issue by the comments of others. If you'd like to submit a PR adding the breaking changes docs, I'd be happy to merge it.", "created_at": "2016-11-29T12:20:02Z" }, { "body": "@clintongormley sure thing! I have it up in PR #21869.", "created_at": "2016-11-29T19:32:24Z" }, { "body": "> @clintongormley sure thing! I have it up in PR #21869.\r\n\r\nthanks @marshall007 - merged", "created_at": "2016-11-30T09:40:20Z" }, { "body": "@clintongormley @marshall007 is that PR supposed to update [this page](https://www.elastic.co/guide/en/elasticsearch/reference/2.4/breaking-changes-2.4.html)? \r\nI'm getting a 404 for that URL. However 2.3 breaking changes URL works fine: https://www.elastic.co/guide/en/elasticsearch/reference/2.4/breaking-changes-2.3.html\r\n\r\nThanks!", "created_at": "2016-12-05T22:49:04Z" }, { "body": "Thanks @qwerty4030 - that page wasn't being included in the docs. 
Should be fixed now: https://www.elastic.co/guide/en/elasticsearch/reference/2.4/breaking-changes-2.4.html", "created_at": "2016-12-07T15:48:51Z" }, { "body": "@clintongormley that page still appears to be out of date, FYI. Looks like it hasn't been updated since https://github.com/elastic/elasticsearch/commit/537f4e1932b4d516cec8dcab1c942d3f31177dac.", "created_at": "2016-12-07T18:12:26Z" }, { "body": "@marshall007 it's here: https://www.elastic.co/guide/en/elasticsearch/reference/2.4/_index_templates.html", "created_at": "2016-12-12T15:03:07Z" }, { "body": "Closing in favour of #21105", "created_at": "2017-01-12T10:32:51Z" } ], "number": 20479, "title": "Custom analyzer filter scope " }
{ "body": "Apply parent templates at creation time\nAdd some testcases\n\nCloses #20479\n", "number": 20630, "review_comments": [], "title": "Validate multiple templates at template creation time" }
{ "commits": [ { "message": "Validate multiple templates at template creation time\n\nApply parent templates at creation time\nAdd some testcases\n\nCloses #20479" }, { "message": "Validate multiple templates at template creation time\n\nAdd explanation in documentation\n\nCloses #20479" } ], "files": [ { "diff": "@@ -225,7 +225,7 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n \n // we only find a template when its an API call (a new index)\n // find templates, highest order are better matching\n- List<IndexTemplateMetaData> templates = findTemplates(request, currentState);\n+ List<IndexTemplateMetaData> templates = findTemplates(request.index(), currentState);\n \n Map<String, Custom> customs = new HashMap<>();\n \n@@ -452,11 +452,11 @@ public void onFailure(String source, Exception e) {\n });\n }\n \n- private List<IndexTemplateMetaData> findTemplates(CreateIndexClusterStateUpdateRequest request, ClusterState state) throws IOException {\n- List<IndexTemplateMetaData> templates = new ArrayList<>();\n+ ArrayList<IndexTemplateMetaData> findTemplates(String indexName, ClusterState state) throws IOException {\n+ ArrayList<IndexTemplateMetaData> templates = new ArrayList<>();\n for (ObjectCursor<IndexTemplateMetaData> cursor : state.metaData().templates().values()) {\n IndexTemplateMetaData template = cursor.value;\n- if (Regex.simpleMatch(template.template(), request.index())) {\n+ if (Regex.simpleMatch(template.template(), indexName)) {\n templates.add(template);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java", "status": "modified" }, { "diff": "@@ -19,7 +19,8 @@\n package org.elasticsearch.cluster.metadata;\n \n import com.carrotsearch.hppc.cursors.ObjectCursor;\n-\n+import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n+import org.apache.lucene.util.CollectionUtil;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.indices.alias.Alias;\n import org.elasticsearch.action.support.master.MasterNodeRequest;\n@@ -31,11 +32,13 @@\n import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.ValidationException;\n import org.elasticsearch.common.component.AbstractComponent;\n+import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.IndexScopedSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.mapper.MapperParsingException;\n@@ -47,6 +50,7 @@\n \n import java.util.ArrayList;\n import java.util.Collections;\n+import java.util.Comparator;\n import java.util.HashMap;\n import java.util.HashSet;\n import java.util.List;\n@@ -164,8 +168,6 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n throw new IndexTemplateAlreadyExistsException(request.name);\n }\n \n- validateAndAddTemplate(request, templateBuilder, indicesService);\n-\n for (Alias alias : request.aliases) {\n AliasMetaData aliasMetaData = AliasMetaData.builder(alias.name()).filter(alias.filter())\n .indexRouting(alias.indexRouting()).searchRouting(alias.searchRouting()).build();\n@@ -174,7 +176,21 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n for (Map.Entry<String, 
IndexMetaData.Custom> entry : request.customs.entrySet()) {\n templateBuilder.putCustom(entry.getKey(), entry.getValue());\n }\n+\n+ templateBuilder.order(request.order);\n+ templateBuilder.version(request.version);\n+ templateBuilder.template(request.template);\n+ templateBuilder.settings(request.settings);\n+ for (Map.Entry<String, String> entry : request.mappings.entrySet()) {\n+ try {\n+ templateBuilder.putMapping(entry.getKey(), entry.getValue());\n+ } catch (Exception e) {\n+ throw new MapperParsingException(\"Failed to parse mapping [{}]: {}\", e, entry.getKey(), e.getMessage());\n+ }\n+ }\n+\n IndexTemplateMetaData template = templateBuilder.build();\n+ validateAndAddTemplate(request, template, indicesService, currentState);\n \n MetaData.Builder builder = MetaData.builder(currentState.metaData()).put(template);\n \n@@ -188,14 +204,42 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n });\n }\n \n- private static void validateAndAddTemplate(final PutRequest request, IndexTemplateMetaData.Builder templateBuilder,\n- IndicesService indicesService) throws Exception {\n+ private void validateAndAddTemplate(final PutRequest request, IndexTemplateMetaData currentTemplate, IndicesService indicesService,\n+ ClusterState currentState) throws Exception {\n Index createdIndex = null;\n final String temporaryIndexName = UUIDs.randomBase64UUID();\n try {\n+ // validate with only parent templates of given template\n+ ArrayList<IndexTemplateMetaData> existingTemplates = metaDataCreateIndexService.findTemplates(request.template, currentState);\n+\n+ boolean update = false;\n+ for (int i = 0; i < existingTemplates.size(); i++) {\n+ if (request.name.equals(existingTemplates.get(i).getName())) {\n+ update = true;\n+ existingTemplates.set(i, currentTemplate);\n+ }\n+ }\n+ if (update == false) {\n+ existingTemplates.add(currentTemplate);\n+ }\n+ CollectionUtil.timSort(existingTemplates, new Comparator<IndexTemplateMetaData>() {\n+ @Override\n+ public int compare(IndexTemplateMetaData o1, IndexTemplateMetaData o2) {\n+ return o2.order() - o1.order();\n+ }\n+ });\n+\n+ Settings.Builder indexSettingsBuilder = Settings.builder();\n+ for (int i = existingTemplates.size() - 1; i >= 0; i--) {\n+ if (request.name.equals(existingTemplates.get(i).getName())) {\n+ indexSettingsBuilder.put(request.settings);\n+ } else {\n+ indexSettingsBuilder.put(existingTemplates.get(i).settings());\n+ }\n+ }\n \n //create index service for parsing and validating \"mappings\"\n- Settings dummySettings = Settings.builder()\n+ Settings dummySettings = indexSettingsBuilder\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n .put(request.settings)\n .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n@@ -207,19 +251,16 @@ private static void validateAndAddTemplate(final PutRequest request, IndexTempla\n IndexService dummyIndexService = indicesService.createIndex(tmpIndexMetadata, Collections.emptyList());\n createdIndex = dummyIndexService.index();\n \n- templateBuilder.order(request.order);\n- templateBuilder.version(request.version);\n- templateBuilder.template(request.template);\n- templateBuilder.settings(request.settings);\n-\n Map<String, Map<String, Object>> mappingsForValidation = new HashMap<>();\n- for (Map.Entry<String, String> entry : request.mappings.entrySet()) {\n- try {\n- templateBuilder.putMapping(entry.getKey(), entry.getValue());\n- } catch (Exception e) {\n- throw new MapperParsingException(\"Failed to parse mapping [{}]: {}\", e, entry.getKey(), 
e.getMessage());\n+\n+ for (IndexTemplateMetaData template : existingTemplates) {\n+ for (ObjectObjectCursor<String, CompressedXContent> cursor : template.mappings()) {\n+ if (mappingsForValidation.containsKey(cursor.key)) {\n+ XContentHelper.mergeDefaults(mappingsForValidation.get(cursor.key), MapperService.parseMapping(cursor.value.string()));\n+ } else {\n+ mappingsForValidation.put(cursor.key, MapperService.parseMapping(cursor.value.string()));\n+ }\n }\n- mappingsForValidation.put(entry.getKey(), MapperService.parseMapping(entry.getValue()));\n }\n \n dummyIndexService.mapperService().merge(mappingsForValidation, false);", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.admin.indices.alias.Alias;\n import org.elasticsearch.action.admin.indices.alias.get.GetAliasesResponse;\n+import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse;\n import org.elasticsearch.action.admin.indices.template.get.GetIndexTemplatesResponse;\n import org.elasticsearch.action.admin.indices.template.put.PutIndexTemplateRequestBuilder;\n@@ -29,8 +30,10 @@\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.AliasMetaData;\n+import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.query.QueryBuilders;\n@@ -305,7 +308,6 @@ private void testExpectActionRequestValidationException(String... 
names) {\n \"get template with \" + Arrays.toString(names));\n }\n \n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/pull/8802\")\n public void testBrokenMapping() throws Exception {\n // clean all templates setup by the framework.\n client().admin().indices().prepareDeleteTemplate(\"*\").get();\n@@ -669,7 +671,6 @@ public void testCombineTemplates() throws Exception{\n GetIndexTemplatesResponse response = client().admin().indices().prepareGetTemplates().get();\n assertThat(response.getIndexTemplates(), empty());\n \n- //Now, a complete mapping with two separated templates is error\n // base template\n client().admin().indices().preparePutTemplate(\"template_1\")\n .setTemplate(\"*\")\n@@ -688,20 +689,192 @@ public void testCombineTemplates() throws Exception{\n .get();\n \n // put template using custom_1 analyzer\n+ XContentBuilder mappingContentBuilder =\n+ XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n+ .startObject(\"field2\").field(\"type\", \"text\").field(\"analyzer\", \"custom_1\").endObject()\n+ .endObject().endObject().endObject();\n+ client().admin().indices().preparePutTemplate(\"template_2\")\n+ .setTemplate(\"test*\")\n+ .setCreate(true)\n+ .setOrder(1)\n+ .addMapping(\"type1\", mappingContentBuilder)\n+ .get();\n+ String mappings = mappingContentBuilder.string();\n+\n+ response = client().admin().indices().prepareGetTemplates().get();\n+ assertThat(response.getIndexTemplates(), hasSize(2));\n+\n+ // index something into test_index, will match on both templates\n+ client().admin().indices().prepareCreate(\"test_index\").get();\n+\n+ ensureGreen();\n+ GetMappingsResponse mappingsResponse =\n+ client().admin().indices().prepareGetMappings(\"test_index\").get();\n+ MappingMetaData typeMapping = mappingsResponse.mappings().get(\"test_index\").get(\"type1\");\n+ assertThat(typeMapping.source().toString(), equalTo(mappings));\n+ }\n+\n+ public void testInvalidCombinationTemplates() throws Exception{\n+ // clean all templates setup by the framework.\n+ client().admin().indices().prepareDeleteTemplate(\"*\").get();\n+\n+ // check get all templates on an empty index.\n+ GetIndexTemplatesResponse response = client().admin().indices().prepareGetTemplates().get();\n+ assertThat(response.getIndexTemplates(), empty());\n+\n+ // base template\n+ client().admin().indices().preparePutTemplate(\"template_1\")\n+ .setTemplate(\"*\")\n+ .setSettings(\n+ \" {\\n\" +\n+ \" \\\"index\\\" : {\\n\" +\n+ \" \\\"analysis\\\" : {\\n\" +\n+ \" \\\"analyzer\\\" : {\\n\" +\n+ \" \\\"custom_1\\\" : {\\n\" +\n+ \" \\\"tokenizer\\\" : \\\"whitespace\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\")\n+ .get();\n+\n+ // put template using custom_2 analyzer that is not exist\n MapperParsingException e = expectThrows(MapperParsingException.class,\n () -> client().admin().indices().preparePutTemplate(\"template_2\")\n- .setTemplate(\"test*\")\n- .setCreate(true)\n- .setOrder(1)\n- .addMapping(\"type1\", XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n- .startObject(\"field2\").field(\"type\", \"string\").field(\"analyzer\", \"custom_1\").endObject()\n- .endObject().endObject().endObject())\n+ .setTemplate(\"test*\")\n+ .setCreate(true)\n+ .setOrder(1)\n+ .addMapping(\"type1\", XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n+ .startObject(\"field2\").field(\"type\", \"string\").field(\"analyzer\", 
\"custom_2\").endObject()\n+ .endObject().endObject().endObject())\n .get());\n- assertThat(e.getMessage(), containsString(\"analyzer [custom_1] not found for field [field2]\"));\n+ assertThat(e.getMessage(), containsString(\"analyzer [custom_2] not found for field [field2]\"));\n \n response = client().admin().indices().prepareGetTemplates().get();\n assertThat(response.getIndexTemplates(), hasSize(1));\n+ }\n+\n+ public void testInvalidCombinationTemplatesAfterDelete() throws Exception{\n+ // clean all templates setup by the framework.\n+ client().admin().indices().prepareDeleteTemplate(\"*\").get();\n \n+ // check get all templates on an empty index.\n+ GetIndexTemplatesResponse response = client().admin().indices().prepareGetTemplates().get();\n+ assertThat(response.getIndexTemplates(), empty());\n+\n+ // base template\n+ client().admin().indices().preparePutTemplate(\"template_1\")\n+ .setTemplate(\"*\")\n+ .setSettings(\n+ \" {\\n\" +\n+ \" \\\"index\\\" : {\\n\" +\n+ \" \\\"analysis\\\" : {\\n\" +\n+ \" \\\"analyzer\\\" : {\\n\" +\n+ \" \\\"custom_1\\\" : {\\n\" +\n+ \" \\\"tokenizer\\\" : \\\"whitespace\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\")\n+ .get();\n+\n+ // put template using custom_1 analyzer\n+ client().admin().indices().preparePutTemplate(\"template_2\")\n+ .setTemplate(\"test*\")\n+ .setCreate(true)\n+ .setOrder(1)\n+ .addMapping(\"type1\", XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n+ .startObject(\"field2\").field(\"type\", \"string\").field(\"analyzer\", \"custom_1\").endObject()\n+ .endObject().endObject().endObject())\n+ .get();\n+\n+ // delete base template\n+ client().admin().indices().prepareDeleteTemplate(\"template_1\").get();\n+\n+ response = client().admin().indices().prepareGetTemplates().get();\n+ assertThat(response.getIndexTemplates(), hasSize(1));\n+\n+ // create index with template_2\n+ MapperParsingException e = expectThrows(MapperParsingException.class,\n+ () -> client().admin().indices().prepareCreate(\"test_index\").get());\n+ assertThat(e.getMessage(), containsString(\"analyzer [custom_1] not found for field [field2]\"));\n+\n+ }\n+\n+ public void testOverrideTemplates() throws Exception{\n+ // clean all templates setup by the framework.\n+ client().admin().indices().prepareDeleteTemplate(\"*\").get();\n+\n+ // check get all templates on an empty index.\n+ GetIndexTemplatesResponse response = client().admin().indices().prepareGetTemplates().get();\n+ assertThat(response.getIndexTemplates(), empty());\n+\n+ // base template\n+ client().admin().indices().preparePutTemplate(\"template_1\")\n+ .setTemplate(\"*\")\n+ .addMapping(\"type1\", XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n+ .startObject(\"field2\").field(\"type\", \"integer\").endObject()\n+ .endObject().endObject().endObject())\n+ .get();\n+\n+ // put template override field2\n+ XContentBuilder mappingContentBuilder =\n+ XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n+ .startObject(\"field2\").field(\"type\", \"text\").endObject()\n+ .endObject().endObject().endObject();\n+ client().admin().indices().preparePutTemplate(\"template_2\")\n+ .setTemplate(\"test*\")\n+ .setCreate(true)\n+ .setOrder(1)\n+ .addMapping(\"type1\", mappingContentBuilder)\n+ .get();\n+ String mappings = mappingContentBuilder.string();\n+\n+ response = client().admin().indices().prepareGetTemplates().get();\n+ 
assertThat(response.getIndexTemplates(), hasSize(2));\n+\n+ // index something into test_index, will match on both templates\n+ client().admin().indices().prepareCreate(\"test_index\").get();\n+\n+ ensureGreen();\n+ GetMappingsResponse mappingsResponse =\n+ client().admin().indices().prepareGetMappings(\"test_index\").get();\n+ MappingMetaData typeMapping = mappingsResponse.mappings().get(\"test_index\").get(\"type1\");\n+ assertThat(typeMapping.source().toString(), equalTo(mappings));\n+ }\n+\n+ public void testCombineTemplatesDifferentTypeAndSameField() throws Exception{\n+ // clean all templates setup by the framework.\n+ client().admin().indices().prepareDeleteTemplate(\"*\").get();\n+\n+ // check get all templates on an empty index.\n+ GetIndexTemplatesResponse response = client().admin().indices().prepareGetTemplates().get();\n+ assertThat(response.getIndexTemplates(), empty());\n+\n+ // base template\n+ client().admin().indices().preparePutTemplate(\"template_1\")\n+ .setTemplate(\"*\")\n+ .addMapping(\"type2\", XContentFactory.jsonBuilder().startObject().startObject(\"type2\").startObject(\"properties\")\n+ .startObject(\"field2\").field(\"type\", \"integer\").endObject()\n+ .endObject().endObject().endObject())\n+ .get();\n+\n+ // put template same field in different type\n+ MapperParsingException e = expectThrows(MapperParsingException.class,\n+ () -> client().admin().indices().preparePutTemplate(\"template_2\")\n+ .setTemplate(\"test*\")\n+ .setCreate(true)\n+ .setOrder(1)\n+ .addMapping(\"type1\", XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n+ .startObject(\"field2\").field(\"type\", \"text\").endObject()\n+ .endObject().endObject().endObject())\n+ .get());\n+ assertThat(e.getMessage(), containsString(\"parse mapping [type1]: mapper [field2] cannot be changed from type [integer] to [text]\"));\n+\n+ response = client().admin().indices().prepareGetTemplates().get();\n+ assertThat(response.getIndexTemplates(), hasSize(1));\n }\n \n public void testOrderAndVersion() {", "filename": "core/src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateIT.java", "status": "modified" }, { "diff": "@@ -181,3 +181,13 @@ for indices of that start with `te*`, source will still be enabled.\n Note, for mappings, the merging is \"deep\", meaning that specific\n object/property based mappings can easily be added/overridden on higher\n order templates, with lower order templates providing the basis.\n+\n+=== Template validation\n+\n+After 5.0, Elasticsearch validate an index template during creation.\n+If an index template has some error, Elasticsearch doesn't create and user gets the error message.\n+If multiple index templates match the template pattern that you put,\n+Elasticsearch validates the final configuration that merged matched templates.\n+This means, you have multiple templates, you should create templates in dependency order.\n+And a template validation is only applied at creation and updates time.\n+If a dependent template is deleted, there is no error message and user only knows the error when user creates an index.", "filename": "docs/reference/indices/templates.asciidoc", "status": "modified" } ] }
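To make the validation change above easier to follow, here is a minimal, hedged sketch in plain Java of the merge-then-validate idea (the `Template` record and the flat settings map are illustrative stand-ins, not the real `IndexTemplateMetaData`/`Settings` types): the templates matching the new pattern are collected, the incoming definition replaces any previous version of itself, templates are sorted by `order`, and the merged result is what would be validated against a temporary index.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TemplateMergeSketch {

    // Simplified stand-in for IndexTemplateMetaData: name, pattern, order and flat settings.
    record Template(String name, String pattern, int order, Map<String, String> settings) {}

    // Merge the settings of every matching template plus the incoming one,
    // lower "order" first so that higher-order templates override lower-order ones.
    static Map<String, String> mergedSettings(List<Template> matching, Template incoming) {
        List<Template> all = new ArrayList<>(matching);
        all.removeIf(t -> t.name().equals(incoming.name())); // an update replaces the old definition
        all.add(incoming);
        all.sort(Comparator.comparingInt(Template::order));
        Map<String, String> merged = new LinkedHashMap<>();
        for (Template t : all) {
            merged.putAll(t.settings());
        }
        return merged; // this merged view is what would be handed to a dummy index for validation
    }

    public static void main(String[] args) {
        Template base = new Template("template_1", "*", 0,
            Map.of("index.analysis.analyzer.custom_1.tokenizer", "whitespace"));
        Template child = new Template("template_2", "test*", 1, Map.of());
        System.out.println(mergedSettings(List.of(base), child));
    }
}
```

With this merging, a template like `template_2` in the tests above validates successfully because it sees the `custom_1` analyzer contributed by the lower-order `template_1`.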
{ "body": "This commit changes the default behavior of `_flush` to block if other flushes are ongoing.\nThis also removes the use of `FlushNotAllowedException` and instead simply return immediately\nby skipping the flush. Users should be aware if they set this option that the flush might or might\nnot flush everything to disk ie. no transactional behavior of some sort.\n\nCloses #20569\n", "comments": [ { "body": "LGTM. Thx @s1monw !\n", "created_at": "2016-09-21T09:36:15Z" }, { "body": "FYI - I will open a sep issue for the migration guide since this goes in to master and we only need that entry on 5.x and 5.0\n", "created_at": "2016-09-21T09:52:26Z" } ], "number": 20597, "title": "`_flush` should block by default" }
{ "body": "This change adds a note to the migration guide for the change of the\ndefault value from `false` to `true`.\n\nRelates to #20597\n", "number": 20603, "review_comments": [], "title": "Add migration guide note for `_flush?wait_if_ongoing`" }
{ "commits": [ { "message": "Add migration guide note for `_flush?wait_if_ongoing`\n\nThis change adds a note to the migration guide for the change of the\ndefault value from `false` to `true`.\n\nRelates to #20597" }, { "message": "Move flush change to index-apis.asciidoc" } ], "files": [ { "diff": "@@ -69,3 +69,10 @@ prefer there to be only one (obvious) way to do things like this.\n \n As of 5.0 indexing a document with `op_type=create` without specifying an ID is not\n supported anymore.\n+\n+==== Flush API\n+\n+The `wait_if_ongoing` flag default has changed to `true` causing `_flush` calls to wait and block\n+if another flush operation is currently running on the same shard. In turn, if `wait_if_ongoing` is set to\n+`false` and another flush operation is already running the flush is skipped and the shards flush call will return\n+immediately without any error. In previous versions `flush_not_allowed` exceptions where reported for each skipped shard.\n\\ No newline at end of file", "filename": "docs/reference/migration/migrate_5_0/index-apis.asciidoc", "status": "modified" } ] }
{ "body": "Today we use TransportReplicationAction to flush a shard. Yet, this is problematic\nsince a shard might throw an exception that is just fine to catch and continue and should never\nresult in failing a shard. If a fatal exception happens that should fail the shard this happen on\nthe engine level. In contrast to write actions we should never fail a shards in such a situation.\n\nThis is a blocker for me since `_flush` should never fail a shard unless the engine decides to. \n", "comments": [ { "body": "apparently this is also in 2.x \n", "created_at": "2016-09-19T20:01:15Z" }, { "body": "There is a couple of things at work here.\n\nMost importantly, a flush request on a shard now may throw a `FlushNotAllowedEngineException` if the shard is already flushing. This will cause unneeded shard failing as the primary will fail the replica (as is the standard logic for TRA).\n\nSecond, the reason flush is a TRA is because we have seen it in the past as a \"better refresh\" and moved it to TRA when we moved refresh to inherit from TRA. Refresh needs to be part of the TRA logic in order to honor the sequence `write, refresh, search` and guarantee that the docs in the write are returned.\n\nThe logic in TRA is such that any operation that failed on a replica, doesn't matter why, means that the replica is not a good copy of the current state of the replication group. Normally this means it misses some documents but, with the current hard semantics of refresh, failing to refresh quanitfies to the same thing.\n\nIMO we can choose to do any of the following:\n\n1) Reduce the chance of flush hitting exceptions on replicas by:\n 1a) catching the `FlushNotAllowedEngineException` in `TransportFlushAction` and ignoring it (we should change the docs of the flush action here)\n 1b) remove the `wait_if_ongoing` flag from FlushRequest and make it always wait.\n2) Change the semantics of `flush` to not be a superset of refresh and either move it from TRA or relax TRA's requirements for it.\n3) If needed, relax the requirement of `refresh` & `flush`.\n\nI'm in favor of 1b for 5.0. 2.x is better served with 1a .\n", "created_at": "2016-09-19T20:23:15Z" }, { "body": "I don't understand what guarantees you are talking about. The `_flush` doesn't have any visibility guarantees. It's only flush. We just don't need to fail the replica on flush? We can then just let exception reporting happen. I am also leaning towards always block but I think failing a replica on flush it the engines job and should not happen on such a high level\n", "created_at": "2016-09-19T20:28:39Z" }, { "body": "> Change the semantics of flush to not be a superset of refresh and either move it from TRA or relax TRA's requirements for it.\n\nwhy is flush a superset of refresh? can you explain?\n", "created_at": "2016-09-20T07:36:34Z" }, { "body": "> why is flush a superset of refresh? can you explain?\n\nThat's a good question. If I remember correctly, the argument was that if you see flush as \"take everything you have and make lucene segments out of it\" it also implies making it visible for searchers. If see it as just \"persist data and trim translog\" it doesn't have to imply refresh imo. I'm good with both.\n", "created_at": "2016-09-20T07:43:44Z" }, { "body": "> it also implies making it visible for searchers. \n\nthis is not true, you have to call `_refresh` to make it visible. 
\n\nRefresh:\n- write all segments to disk\n- reopen the lucene index reader\n- swap the new reader in\n\nFlush:\n- write all segments to disk\n- fsync them\n\nbut even if we'd be a super set I'd argue that if a refresh fails it's the engines job to decide if the exception was fatal (btw. any exception in InternalEngine#refresh()) is fatal. So in TRA the only think we should do is either report to the user or retry (we know exactly when we can retry) the shard failure is the engines job. The only exception here is `TransportWriteAction` since it can make assumptions of consistency.\n", "created_at": "2016-09-20T07:48:28Z" } ], "number": 20569, "title": "`_flush` fails the replica shards in the case of an uncaught exception" }
{ "body": "This commit changes the default behavior of `_flush` to block if other flushes are ongoing.\nThis also removes the use of `FlushNotAllowedException` and instead simply return immediately\nby skipping the flush. Users should be aware if they set this option that the flush might or might\nnot flush everything to disk ie. no transactional behavior of some sort.\n\nCloses #20569\n", "number": 20597, "review_comments": [], "title": "`_flush` should block by default" }
{ "commits": [ { "message": "`_flush` should block by default\n\nThis commit changes the default behavior of `_flush` to block if other flushes are ongoing.\nThis also removes the use of `FlushNotAllowedException` and instead simply return immediately\nby skipping the flush. Users should be aware if they set this option that the flush might or might\nnot flush everything to disk ie. no transactional behavior of some sort.\n\nCloses #20569" } ], "files": [ { "diff": "@@ -633,8 +633,7 @@ enum ElasticsearchExceptionHandle {\n org.elasticsearch.repositories.RepositoryMissingException::new, 107),\n DOCUMENT_SOURCE_MISSING_EXCEPTION(org.elasticsearch.index.engine.DocumentSourceMissingException.class,\n org.elasticsearch.index.engine.DocumentSourceMissingException::new, 109),\n- FLUSH_NOT_ALLOWED_ENGINE_EXCEPTION(org.elasticsearch.index.engine.FlushNotAllowedEngineException.class,\n- org.elasticsearch.index.engine.FlushNotAllowedEngineException::new, 110),\n+ // 110 used to be FlushNotAllowedEngineException\n NO_CLASS_SETTINGS_EXCEPTION(org.elasticsearch.common.settings.NoClassSettingsException.class,\n org.elasticsearch.common.settings.NoClassSettingsException::new, 111),\n BIND_TRANSPORT_EXCEPTION(org.elasticsearch.transport.BindTransportException.class,", "filename": "core/src/main/java/org/elasticsearch/ElasticsearchException.java", "status": "modified" }, { "diff": "@@ -40,7 +40,7 @@\n public class FlushRequest extends BroadcastRequest<FlushRequest> {\n \n private boolean force = false;\n- private boolean waitIfOngoing = false;\n+ private boolean waitIfOngoing = true;\n \n /**\n * Constructs a new flush request against one or more indices. If nothing is provided, all indices will\n@@ -61,6 +61,7 @@ public boolean waitIfOngoing() {\n /**\n * if set to <tt>true</tt> the flush will block\n * if a another flush operation is already running until the flush can be performed.\n+ * The default is <code>true</code>\n */\n public FlushRequest waitIfOngoing(boolean waitIfOngoing) {\n this.waitIfOngoing = waitIfOngoing;", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/flush/FlushRequest.java", "status": "modified" }, { "diff": "@@ -1105,8 +1105,6 @@ public void flushAndClose() throws IOException {\n logger.debug(\"flushing shard on close - this might take some time to sync files to disk\");\n try {\n flush(); // TODO we might force a flush in the future since we have the write lock already even though recoveries are running.\n- } catch (FlushNotAllowedEngineException ex) {\n- logger.debug(\"flush not allowed during flushAndClose - skipping\");\n } catch (EngineClosedException ex) {\n logger.debug(\"engine already closed - skipping flushAndClose\");\n }\n@@ -1233,4 +1231,11 @@ public interface Warmer {\n * This operation will close the engine if the recovery fails.\n */\n public abstract Engine recoverFromTranslog() throws IOException;\n+\n+ /**\n+ * Returns <code>true</code> iff this engine is currently recovering from translog.\n+ */\n+ public boolean isRecovering() {\n+ return false;\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/engine/Engine.java", "status": "modified" }, { "diff": "@@ -116,7 +116,7 @@ public class InternalEngine extends Engine {\n // incoming indexing ops to a single thread:\n private final AtomicInteger throttleRequestCount = new AtomicInteger();\n private final EngineConfig.OpenMode openMode;\n- private final AtomicBoolean allowCommits = new AtomicBoolean(true);\n+ private final AtomicBoolean pendingTranslogRecovery = new 
AtomicBoolean(false);\n private final AtomicLong maxUnsafeAutoIdTimestamp = new AtomicLong(-1);\n private final CounterMetric numVersionLookups = new CounterMetric();\n private final CounterMetric numIndexVersionsLookups = new CounterMetric();\n@@ -163,8 +163,9 @@ public InternalEngine(EngineConfig engineConfig) throws EngineException {\n manager = createSearcherManager();\n this.searcherManager = manager;\n this.versionMap.setManager(searcherManager);\n+ assert pendingTranslogRecovery.get() == false : \"translog recovery can't be pending before we set it\";\n // don't allow commits until we are done with recovering\n- allowCommits.compareAndSet(true, openMode != EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG);\n+ pendingTranslogRecovery.set(openMode == EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG);\n if (engineConfig.getRefreshListeners() != null) {\n searcherManager.addListener(engineConfig.getRefreshListeners());\n }\n@@ -190,14 +191,14 @@ public InternalEngine recoverFromTranslog() throws IOException {\n if (openMode != EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG) {\n throw new IllegalStateException(\"Can't recover from translog with open mode: \" + openMode);\n }\n- if (allowCommits.get()) {\n+ if (pendingTranslogRecovery.get() == false) {\n throw new IllegalStateException(\"Engine has already been recovered\");\n }\n try {\n recoverFromTranslog(engineConfig.getTranslogRecoveryPerformer());\n } catch (Exception e) {\n try {\n- allowCommits.set(false); // just play safe and never allow commits on this\n+ pendingTranslogRecovery.set(true); // just play safe and never allow commits on this see #ensureCanFlush\n failEngine(\"failed to recover from translog\", e);\n } catch (Exception inner) {\n e.addSuppressed(inner);\n@@ -221,8 +222,8 @@ private void recoverFromTranslog(TranslogRecoveryPerformer handler) throws IOExc\n }\n // flush if we recovered something or if we have references to older translogs\n // note: if opsRecovered == 0 and we have older translogs it means they are corrupted or 0 length.\n- assert allowCommits.get() == false : \"commits are allowed but shouldn't\";\n- allowCommits.set(true); // we are good - now we can commit\n+ assert pendingTranslogRecovery.get(): \"translogRecovery is not pending but should be\";\n+ pendingTranslogRecovery.set(false); // we are good - now we can commit\n if (opsRecovered > 0) {\n logger.trace(\"flushing post recovery from translog. ops recovered [{}]. committed translog id [{}]. current id [{}]\",\n opsRecovered, translogGeneration == null ? 
null : translogGeneration.translogFileGeneration, translog.currentFileGeneration());\n@@ -765,7 +766,7 @@ public CommitId flush(boolean force, boolean waitIfOngoing) throws EngineExcepti\n flushLock.lock();\n logger.trace(\"acquired flush lock after blocking\");\n } else {\n- throw new FlushNotAllowedEngineException(shardId, \"already flushing...\");\n+ return new CommitId(lastCommittedSegmentInfos.getId());\n }\n } else {\n logger.trace(\"acquired flush lock immediately\");\n@@ -1287,8 +1288,8 @@ private void ensureCanFlush() {\n // if we are in this stage we have to prevent flushes from this\n // engine otherwise we might loose documents if the flush succeeds\n // and the translog recover fails we we \"commit\" the translog on flush.\n- if (allowCommits.get() == false) {\n- throw new FlushNotAllowedEngineException(shardId, \"flushes are disabled - pending translog recovery\");\n+ if (pendingTranslogRecovery.get()) {\n+ throw new IllegalStateException(shardId.toString() + \" flushes are disabled - pending translog recovery\");\n }\n }\n \n@@ -1349,4 +1350,9 @@ private boolean incrementIndexVersionLookup() {\n boolean indexWriterHasDeletions() {\n return indexWriter.hasDeletions();\n }\n+\n+ @Override\n+ public boolean isRecovering() {\n+ return pendingTranslogRecovery.get();\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java", "status": "modified" }, { "diff": "@@ -730,7 +730,12 @@ public CompletionStats completionStats(String... fields) {\n public Engine.SyncedFlushResult syncFlush(String syncId, Engine.CommitId expectedCommitId) {\n verifyStartedOrRecovering();\n logger.trace(\"trying to sync flush. sync id [{}]. expected commit id [{}]]\", syncId, expectedCommitId);\n- return getEngine().syncFlush(syncId, expectedCommitId);\n+ Engine engine = getEngine();\n+ if (engine.isRecovering()) {\n+ throw new IllegalIndexShardStateException(shardId(), state, \"syncFlush is only allowed if the engine is not recovery\" +\n+ \" from translog\");\n+ }\n+ return engine.syncFlush(syncId, expectedCommitId);\n }\n \n public Engine.CommitId flush(FlushRequest request) throws ElasticsearchException {\n@@ -741,11 +746,16 @@ public Engine.CommitId flush(FlushRequest request) throws ElasticsearchException\n }\n // we allows flush while recovering, since we allow for operations to happen\n // while recovering, and we want to keep the translog at bay (up to deletes, which\n- // we don't gc).\n+ // we don't gc). 
Yet, we don't use flush internally to clear deletes and flush the indexwriter since\n+ // we use #writeIndexingBuffer for this now.\n verifyStartedOrRecovering();\n-\n+ Engine engine = getEngine();\n+ if (engine.isRecovering()) {\n+ throw new IllegalIndexShardStateException(shardId(), state, \"flush is only allowed if the engine is not recovery\" +\n+ \" from translog\");\n+ }\n long time = System.nanoTime();\n- Engine.CommitId commitId = getEngine().flush(force, waitIfOngoing);\n+ Engine.CommitId commitId = engine.flush(force, waitIfOngoing);\n flushMetric.inc(System.nanoTime() - time);\n return commitId;\n \n@@ -1165,7 +1175,11 @@ public void checkIdle(long inactiveTimeNS) {\n boolean wasActive = active.getAndSet(false);\n if (wasActive) {\n logger.debug(\"shard is now inactive\");\n- indexEventListener.onShardInactive(this);\n+ try {\n+ indexEventListener.onShardInactive(this);\n+ } catch (Exception e) {\n+ logger.warn(\"failed to notify index event listener\", e);\n+ }\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java", "status": "modified" }, { "diff": "@@ -31,7 +31,6 @@\n import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.index.engine.EngineClosedException;\n-import org.elasticsearch.index.engine.FlushNotAllowedEngineException;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.IndexingOperationListener;\n@@ -52,7 +51,7 @@\n public class IndexingMemoryController extends AbstractComponent implements IndexingOperationListener, Closeable {\n \n /** How much heap (% or bytes) we will share across all actively indexing shards on this node (default: 10%). */\n- public static final Setting<ByteSizeValue> INDEX_BUFFER_SIZE_SETTING = \n+ public static final Setting<ByteSizeValue> INDEX_BUFFER_SIZE_SETTING =\n Setting.memorySizeSetting(\"indices.memory.index_buffer_size\", \"10%\", Property.NodeScope);\n \n /** Only applies when <code>indices.memory.index_buffer_size</code> is a %, to set a floor on the actual size in bytes (default: 48 MB). 
*/\n@@ -386,7 +385,7 @@ private void runUnlocked() {\n protected void checkIdle(IndexShard shard, long inactiveTimeNS) {\n try {\n shard.checkIdle(inactiveTimeNS);\n- } catch (EngineClosedException | FlushNotAllowedEngineException e) {\n+ } catch (EngineClosedException e) {\n logger.trace(\"ignore exception while checking if shard {} is inactive\", e, shard.shardId());\n }\n }", "filename": "core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java", "status": "modified" }, { "diff": "@@ -757,7 +757,7 @@ public void testIds() {\n ids.put(107, org.elasticsearch.repositories.RepositoryMissingException.class);\n ids.put(108, null);\n ids.put(109, org.elasticsearch.index.engine.DocumentSourceMissingException.class);\n- ids.put(110, org.elasticsearch.index.engine.FlushNotAllowedEngineException.class);\n+ ids.put(110, null); // FlushNotAllowedEngineException was removed in 5.0\n ids.put(111, org.elasticsearch.common.settings.NoClassSettingsException.class);\n ids.put(112, org.elasticsearch.transport.BindTransportException.class);\n ids.put(113, org.elasticsearch.rest.action.admin.indices.AliasesNotFoundException.class);", "filename": "core/src/test/java/org/elasticsearch/ExceptionSerializationTests.java", "status": "modified" }, { "diff": "@@ -49,7 +49,7 @@ public void testFlushWithBlocks() {\n for (String blockSetting : Arrays.asList(SETTING_BLOCKS_READ, SETTING_BLOCKS_WRITE)) {\n try {\n enableIndexBlock(\"test\", blockSetting);\n- FlushResponse response = client().admin().indices().prepareFlush(\"test\").setWaitIfOngoing(true).execute().actionGet();\n+ FlushResponse response = client().admin().indices().prepareFlush(\"test\").execute().actionGet();\n assertNoFailures(response);\n assertThat(response.getSuccessfulShards(), equalTo(numShards.totalNumShards));\n } finally {\n@@ -80,4 +80,4 @@ public void testFlushWithBlocks() {\n setClusterReadOnly(false);\n }\n }\n-}\n\\ No newline at end of file\n+}", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/flush/FlushBlocksIT.java", "status": "modified" }, { "diff": "@@ -54,7 +54,7 @@ public void setupIndex() {\n String id = Integer.toString(j);\n client().prepareIndex(\"test\", \"type1\", id).setSource(\"text\", \"sometext\").get();\n }\n- client().admin().indices().prepareFlush(\"test\").setWaitIfOngoing(true).get();\n+ client().admin().indices().prepareFlush(\"test\").get();\n }\n \n public void testBasic() {", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentsRequestTests.java", "status": "modified" }, { "diff": "@@ -213,7 +213,7 @@ private void indexRandomData(String index) throws ExecutionException, Interrupte\n builders[i] = client().prepareIndex(index, \"type\").setSource(\"field\", \"value\");\n }\n indexRandom(true, builders);\n- client().admin().indices().prepareFlush().setForce(true).setWaitIfOngoing(true).execute().actionGet();\n+ client().admin().indices().prepareFlush().setForce(true).execute().actionGet();\n }\n \n private static final class IndexNodePredicate implements Predicate<Settings> {", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoreRequestIT.java", "status": "modified" }, { "diff": "@@ -417,7 +417,7 @@ public void testReusePeerRecovery() throws Exception {\n logger.info(\"Running Cluster Health\");\n ensureGreen();\n client().admin().indices().prepareForceMerge(\"test\").setMaxNumSegments(100).get(); // just wait for merges\n- 
client().admin().indices().prepareFlush().setWaitIfOngoing(true).setForce(true).get();\n+ client().admin().indices().prepareFlush().setForce(true).get();\n \n boolean useSyncIds = randomBoolean();\n if (useSyncIds == false) {", "filename": "core/src/test/java/org/elasticsearch/gateway/RecoveryFromGatewayIT.java", "status": "modified" }, { "diff": "@@ -80,7 +80,7 @@ public static void testCase(Settings indexSettings, Runnable restartCluster, Log\n client().admin().cluster().prepareHealth().setWaitForGreenStatus().setTimeout(\"30s\").get();\n // just wait for merges\n client().admin().indices().prepareForceMerge(\"test\").setMaxNumSegments(100).get();\n- client().admin().indices().prepareFlush().setWaitIfOngoing(true).setForce(true).get();\n+ client().admin().indices().prepareFlush().setForce(true).get();\n \n if (useSyncIds == false) {\n logger.info(\"--> disabling allocation while the cluster is shut down\");", "filename": "core/src/test/java/org/elasticsearch/gateway/ReusePeerRecoverySharedTest.java", "status": "modified" }, { "diff": "@@ -142,7 +142,7 @@ public void testRestoreToShadow() throws ExecutionException, InterruptedExceptio\n for (int i = 0; i < numDocs; i++) {\n client().prepareIndex(\"foo\", \"doc\", \"\"+i).setSource(\"foo\", \"bar\").get();\n }\n- assertNoFailures(client().admin().indices().prepareFlush().setForce(true).setWaitIfOngoing(true).execute().actionGet());\n+ assertNoFailures(client().admin().indices().prepareFlush().setForce(true).execute().actionGet());\n \n assertAcked(client().admin().cluster().preparePutRepository(\"test-repo\")\n .setType(\"fs\").setSettings(Settings.builder()", "filename": "core/src/test/java/org/elasticsearch/index/IndexWithShadowReplicasIT.java", "status": "modified" }, { "diff": "@@ -586,6 +586,7 @@ public IndexSearcher wrap(IndexSearcher searcher) throws EngineException {\n engine.close();\n \n engine = new InternalEngine(copy(engine.config(), EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG));\n+ assertTrue(engine.isRecovering());\n engine.recoverFromTranslog();\n Engine.Searcher searcher = wrapper.wrap(engine.acquireSearcher(\"test\"));\n assertThat(counter.get(), equalTo(2));\n@@ -594,13 +595,16 @@ public IndexSearcher wrap(IndexSearcher searcher) throws EngineException {\n }\n \n public void testFlushIsDisabledDuringTranslogRecovery() throws IOException {\n+ assertFalse(engine.isRecovering());\n ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n engine.index(new Engine.Index(newUid(\"1\"), doc));\n engine.close();\n \n engine = new InternalEngine(copy(engine.config(), EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG));\n- expectThrows(FlushNotAllowedEngineException.class, () -> engine.flush(true, true));\n+ expectThrows(IllegalStateException.class, () -> engine.flush(true, true));\n+ assertTrue(engine.isRecovering());\n engine.recoverFromTranslog();\n+ assertFalse(engine.isRecovering());\n doc = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n engine.index(new Engine.Index(newUid(\"2\"), doc));\n engine.flush();\n@@ -2114,6 +2118,7 @@ public void testCurrentTranslogIDisCommitted() throws IOException {\n Engine.Index firstIndexRequest = new Engine.Index(newUid(Integer.toString(0)), doc, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false);\n \n try (InternalEngine engine = new InternalEngine(copy(config, EngineConfig.OpenMode.CREATE_INDEX_AND_TRANSLOG))){\n+ 
assertFalse(engine.isRecovering());\n engine.index(firstIndexRequest);\n \n expectThrows(IllegalStateException.class, () -> engine.recoverFromTranslog());\n@@ -2126,6 +2131,7 @@ public void testCurrentTranslogIDisCommitted() throws IOException {\n {\n for (int i = 0; i < 2; i++) {\n try (InternalEngine engine = new InternalEngine(copy(config, EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG))) {\n+ assertTrue(engine.isRecovering());\n Map<String, String> userData = engine.getLastCommittedSegmentInfos().getUserData();\n if (i == 0) {\n assertEquals(\"1\", userData.get(Translog.TRANSLOG_GENERATION_KEY));", "filename": "core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java", "status": "modified" }, { "diff": "@@ -159,7 +159,7 @@ public void testCorruptFileAndRecover() throws ExecutionException, InterruptedEx\n }\n indexRandom(true, builders);\n ensureGreen();\n- assertAllSuccessful(client().admin().indices().prepareFlush().setForce(true).setWaitIfOngoing(true).execute().actionGet());\n+ assertAllSuccessful(client().admin().indices().prepareFlush().setForce(true).execute().actionGet());\n // we have to flush at least once here since we don't corrupt the translog\n SearchResponse countResponse = client().prepareSearch().setSize(0).get();\n assertHitCount(countResponse, numDocs);\n@@ -262,7 +262,7 @@ public void testCorruptPrimaryNoReplica() throws ExecutionException, Interrupted\n }\n indexRandom(true, builders);\n ensureGreen();\n- assertAllSuccessful(client().admin().indices().prepareFlush().setForce(true).setWaitIfOngoing(true).execute().actionGet());\n+ assertAllSuccessful(client().admin().indices().prepareFlush().setForce(true).execute().actionGet());\n // we have to flush at least once here since we don't corrupt the translog\n SearchResponse countResponse = client().prepareSearch().setSize(0).get();\n assertHitCount(countResponse, numDocs);\n@@ -408,7 +408,7 @@ public void testCorruptionOnNetworkLayer() throws ExecutionException, Interrupte\n }\n indexRandom(true, builders);\n ensureGreen();\n- assertAllSuccessful(client().admin().indices().prepareFlush().setForce(true).setWaitIfOngoing(true).execute().actionGet());\n+ assertAllSuccessful(client().admin().indices().prepareFlush().setForce(true).execute().actionGet());\n // we have to flush at least once here since we don't corrupt the translog\n SearchResponse countResponse = client().prepareSearch().setSize(0).get();\n assertHitCount(countResponse, numDocs);\n@@ -491,7 +491,7 @@ public void testCorruptFileThenSnapshotAndRestore() throws ExecutionException, I\n }\n indexRandom(true, builders);\n ensureGreen();\n- assertAllSuccessful(client().admin().indices().prepareFlush().setForce(true).setWaitIfOngoing(true).execute().actionGet());\n+ assertAllSuccessful(client().admin().indices().prepareFlush().setForce(true).execute().actionGet());\n // we have to flush at least once here since we don't corrupt the translog\n SearchResponse countResponse = client().prepareSearch().setSize(0).get();\n assertHitCount(countResponse, numDocs);\n@@ -546,7 +546,7 @@ public void testReplicaCorruption() throws Exception {\n }\n indexRandom(true, builders);\n ensureGreen();\n- assertAllSuccessful(client().admin().indices().prepareFlush().setForce(true).setWaitIfOngoing(true).execute().actionGet());\n+ assertAllSuccessful(client().admin().indices().prepareFlush().setForce(true).execute().actionGet());\n // we have to flush at least once here since we don't corrupt the translog\n SearchResponse countResponse = 
client().prepareSearch().setSize(0).get();\n assertHitCount(countResponse, numDocs);", "filename": "core/src/test/java/org/elasticsearch/index/store/CorruptedFileIT.java", "status": "modified" }, { "diff": "@@ -62,7 +62,7 @@ public void testWaitIfOngoing() throws InterruptedException {\n final CountDownLatch latch = new CountDownLatch(10);\n final CopyOnWriteArrayList<Throwable> errors = new CopyOnWriteArrayList<>();\n for (int j = 0; j < 10; j++) {\n- client().admin().indices().prepareFlush(\"test\").setWaitIfOngoing(true).execute(new ActionListener<FlushResponse>() {\n+ client().admin().indices().prepareFlush(\"test\").execute(new ActionListener<FlushResponse>() {\n @Override\n public void onResponse(FlushResponse flushResponse) {\n try {", "filename": "core/src/test/java/org/elasticsearch/indices/flush/FlushIT.java", "status": "modified" }, { "diff": "@@ -348,7 +348,7 @@ public void testOpenCloseWithDocs() throws IOException, ExecutionException, Inte\n }\n indexRandom(true, builder);\n if (randomBoolean()) {\n- client().admin().indices().prepareFlush(\"test\").setWaitIfOngoing(true).setForce(true).execute().get();\n+ client().admin().indices().prepareFlush(\"test\").setForce(true).execute().get();\n }\n client().admin().indices().prepareClose(\"test\").execute().get();\n \n@@ -413,4 +413,4 @@ public void testOpenCloseIndexWithBlocks() {\n }\n }\n }\n-}\n\\ No newline at end of file\n+}", "filename": "core/src/test/java/org/elasticsearch/indices/state/OpenCloseIndexIT.java", "status": "modified" }, { "diff": "@@ -111,7 +111,7 @@ public void testCancelRecoveryAndResume() throws Exception {\n }\n ensureGreen();\n // ensure we have flushed segments and make them a big one via optimize\n- client().admin().indices().prepareFlush().setForce(true).setWaitIfOngoing(true).get();\n+ client().admin().indices().prepareFlush().setForce(true).get();\n client().admin().indices().prepareForceMerge().setMaxNumSegments(1).setFlush(true).get();\n \n final CountDownLatch latch = new CountDownLatch(1);", "filename": "core/src/test/java/org/elasticsearch/recovery/TruncatedRecoveryIT.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@ public void testRetrieveSnapshots() throws Exception {\n String id = Integer.toString(i);\n client().prepareIndex(indexName, \"type1\", id).setSource(\"text\", \"sometext\").get();\n }\n- client().admin().indices().prepareFlush(indexName).setWaitIfOngoing(true).get();\n+ client().admin().indices().prepareFlush(indexName).get();\n \n logger.info(\"--> create first snapshot\");\n CreateSnapshotResponse createSnapshotResponse = client.admin()", "filename": "core/src/test/java/org/elasticsearch/repositories/blobstore/BlobStoreRepositoryTests.java", "status": "modified" }, { "diff": "@@ -99,7 +99,7 @@ public void testRandomDirectoryIOExceptions() throws IOException, InterruptedExc\n client().prepareIndex(\"test\", \"type\", \"init\" + i).setSource(\"test\", \"init\").get();\n }\n client().admin().indices().prepareRefresh(\"test\").execute().get();\n- client().admin().indices().prepareFlush(\"test\").setWaitIfOngoing(true).execute().get();\n+ client().admin().indices().prepareFlush(\"test\").execute().get();\n client().admin().indices().prepareClose(\"test\").execute().get();\n client().admin().indices().prepareUpdateSettings(\"test\").setSettings(Settings.builder()\n .put(MockFSDirectoryService.RANDOM_IO_EXCEPTION_RATE_SETTING.getKey(), exceptionRate)", "filename": "core/src/test/java/org/elasticsearch/search/basic/SearchWithRandomIOExceptionsIT.java", "status": "modified" }, 
{ "diff": "@@ -29,7 +29,6 @@\n import org.elasticsearch.common.lucene.uid.Versions;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.VersionType;\n-import org.elasticsearch.index.engine.FlushNotAllowedEngineException;\n import org.elasticsearch.index.engine.VersionConflictEngineException;\n import org.elasticsearch.test.ESIntegTestCase;\n \n@@ -617,11 +616,7 @@ public void run() {\n }\n if (threadRandom.nextInt(100) == 7) {\n logger.trace(\"--> {}: TEST: now flush at {}\", threadID, System.nanoTime() - startTime);\n- try {\n- flush();\n- } catch (FlushNotAllowedEngineException fnaee) {\n- // OK\n- }\n+ flush();\n logger.trace(\"--> {}: TEST: flush done at {}\", threadID, System.nanoTime() - startTime);\n }\n }", "filename": "core/src/test/java/org/elasticsearch/versioning/SimpleVersioningIT.java", "status": "modified" }, { "diff": "@@ -18,7 +18,7 @@\n },\n \"wait_if_ongoing\": {\n \"type\" : \"boolean\",\n- \"description\" : \"If set to true the flush operation will block until the flush can be executed if another flush operation is already executing. The default is false and will cause an exception to be thrown on the shard level if another flush operation is already running.\"\n+ \"description\" : \"If set to true the flush operation will block until the flush can be executed if another flush operation is already executing. The default is true. If set to false the flush will be skipped iff if another flush operation is already running.\"\n },\n \"ignore_unavailable\": {\n \"type\" : \"boolean\",", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/indices.flush.json", "status": "modified" }, { "diff": "@@ -1204,7 +1204,7 @@ protected final void flushAndRefresh(String... indices) {\n */\n protected final FlushResponse flush(String... indices) {\n waitForRelocation();\n- FlushResponse actionGet = client().admin().indices().prepareFlush(indices).setWaitIfOngoing(true).execute().actionGet();\n+ FlushResponse actionGet = client().admin().indices().prepareFlush(indices).execute().actionGet();\n for (ShardOperationFailedException failure : actionGet.getShardFailures()) {\n assertThat(\"unexpected flush failure \" + failure.reason(), failure.status(), equalTo(RestStatus.SERVICE_UNAVAILABLE));\n }", "filename": "test/framework/src/main/java/org/elasticsearch/test/ESIntegTestCase.java", "status": "modified" } ] }
{ "body": "Today when a caller requests a prefix logger, we create a new one for every invocation. This should not be a problem if the caller holds on to the logger as is usually the case. Not every caller does this (e.g., `LoggerInfoStream`) and this can be problematic in the case of heavy indexing with multiple threads as these creations end up contending over a synchronized block.\n", "comments": [ { "body": "Can we make all the callers hold onto their prefix logger?\n", "created_at": "2016-09-19T21:16:36Z" } ], "number": 20570, "title": "Avoid unnecessary creation of prefix loggers in LoggerInfoStream" }
{ "body": "Today when acquiring a prefix logger for a logger info stream, we obtain\na new prefix logger per invocation. This can lead to contention on the\nmarkers lock in the constructor of PrefixLogger. Usually this is not a\nproblem (because the vast majority of callers hold on to the logger they\nobtain). Unfortunately, under heavy indexing with multiple threads, the\ncontention on the lock can be devastating. This commit modifies\nLoggerInfoStream to hold on to the loggers it obtains to avoid\ncontending over the lock there.\n\nCloses #20570\n", "number": 20571, "review_comments": [], "title": "Avoid unnecessary creation of prefix loggers" }
{ "commits": [ { "message": "Avoid unnecessary creation of prefix loggers\n\nToday when acquiring a prefix logger for a logger info stream, we obtain\na new prefix logger per invocation. This can lead to contention on the\nmarkers lock in the constructor of PrefixLogger. Usually this is not a\nproblem (because the vast majority of callers hold on to the logger they\nobtain). Unfortunately, under heavy indexing with multiple threads, the\ncontention on the lock can be devastating. This commit modifies\nLoggerInfoStream to hold on to the loggers it obtains to avoid\ncontending over the lock there." } ], "files": [ { "diff": "@@ -23,12 +23,17 @@\n import org.apache.lucene.util.InfoStream;\n import org.elasticsearch.common.logging.Loggers;\n \n+import java.util.Map;\n+import java.util.concurrent.ConcurrentHashMap;\n+\n /** An InfoStream (for Lucene's IndexWriter) that redirects\n * messages to \"lucene.iw.ifd\" and \"lucene.iw\" Logger.trace. */\n public final class LoggerInfoStream extends InfoStream {\n \n private final Logger parentLogger;\n \n+ private final Map<String, Logger> loggers = new ConcurrentHashMap<>();\n+\n public LoggerInfoStream(final Logger parentLogger) {\n this.parentLogger = parentLogger;\n }\n@@ -46,11 +51,12 @@ public boolean isEnabled(String component) {\n }\n \n private Logger getLogger(String component) {\n- return Loggers.getLogger(parentLogger, \".\" + component);\n+ return loggers.computeIfAbsent(component, c -> Loggers.getLogger(parentLogger, \".\" + c));\n }\n \n @Override\n public void close() {\n \n }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/common/lucene/LoggerInfoStream.java", "status": "modified" } ] }
{ "body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue. Note that whether you're filing a bug report or a\nfeature request, ensure that your submission is for an\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\nBug reports on an OS that we do not support or feature requests\nspecific to an OS that we do not support will be closed.\n-->\n\n<!--\nIf you are filing a bug report, please remove the below feature\nrequest block and provide responses for all of the below items.\n-->\n\n**Elasticsearch version**: 2.3.3\n\n**Plugins installed**: []\n\n**JVM version**: 1.8.0_91\n\n**OS version**: Windows Service 2012 R2\n\n**Description of the problem including expected versus actual behavior**:\nWhen requesting inner hits with `from` field greater than or equal to total number of matched documents, inner hits `total` field is equal to 0. Expected to have actual number of all matched documents\n\n**Steps to reproduce**:\n\n```\nPUT http://localhost:9200/sample\n\nGET http://localhost:9200/_cluster/health/sample?wait_for_status=yellow\n\nPUT http://localhost:9200/sample/abc/_mapping\n{\"properties\":{\"id\":{\"type\":\"long\"},\"name\":{\"type\":\"string\"},\"items\":{\"type\":\"nested\",\"properties\":{\"id\":{\"type\":\"long\"},\"name2\":{\"type\":\"string\"}}}}}\n\nPUT http://localhost:9200/sample/abc/\n{\"id\":1,\"name\":\"test\",\"items\":[{\"id\":1,\"name2\":\"test1\"},{\"id\":2,\"name2\":\"test2\"},{\"id\":3,\"name2\":\"test3\"}]}\n\nPOST http://localhost:9200/sample/_refresh\n\nPOST http://localhost:9200/sample/abc/_search\n{\"query\":{\"nested\":{\"query\":{\"match_all\":{}},\"path\":\"items\",\"inner_hits\":{\"from\":2,\"size\":3}}}}\n// response:\n// hits.hits[0].innerHits.items.hits.total is equal to 3\n\nPOST http://localhost:9200/sample/abc/_search\n{\"query\":{\"nested\":{\"query\":{\"match_all\":{}},\"path\":\"items\",\"inner_hits\":{\"from\":3,\"size\":3}}}}\n// response:\n// hits.hits[0].innerHits.items.hits.total is equal to 0. expected to be 3\n\nDELETE http://localhost:9200/sample\n```\n", "comments": [ { "body": "@martijnvg could you take a look please? Also, I'm not sure that the `offset` is correct in the first query.\n", "created_at": "2016-09-15T14:03:45Z" }, { "body": "This needs to be fixed in Lucene", "created_at": "2016-12-07T13:26:25Z" }, { "body": "This still appears to be relevant.\r\ncc @elastic/es-search-aggs ", "created_at": "2018-03-13T18:07:43Z" }, { "body": "Pinging @elastic/es-search (Team:Search)", "created_at": "2023-05-03T21:40:24Z" } ], "number": 20501, "title": "Inner hits `total` value is equal to 0 if `from` field is greater than or equal to total number of matched documents" }
{ "body": "so that we at least include total hits, otherwise also total hits would be equal to zero.\n\nPR for #20501\n", "number": 20556, "review_comments": [ { "body": "This does not make sense to me: fi there are 7 inner hits and from=2 and size=1, then we should not just count?\n", "created_at": "2016-09-20T08:39:26Z" }, { "body": "doh... Yes, you're absolutely right.\n", "created_at": "2016-09-20T10:12:01Z" }, { "body": "@jpountz The reason I took this approach, was that `TopDocsCollector#topDocs(...)` returns a top docs with total hits set to 0 if `from` is larger than number of collected docs, which is unexpected. Instead it should just return a TopDocs with no docs but a real totalHits/maxScore?\n\nIn the search api we don't rely on `TopDocsCollector#topDocs(...)`, but rather on `TopDocs#merge`, which always set maxScore and totalHits.\n", "created_at": "2016-09-29T12:58:11Z" }, { "body": "> Instead it should just return a TopDocs with no docs but a real totalHits/maxScore?\n\nI tend to agree.\n", "created_at": "2016-09-29T14:28:03Z" } ], "title": "If size / offset are out of bounds just do a plain count" }
{ "commits": [ { "message": "inner_hits: If size / offset are out of bounds just do a plain count,\nso that we at least include total hits.\n\nCloses #20501" } ], "files": [ { "diff": "@@ -59,8 +59,6 @@\n import java.util.Map;\n import java.util.Objects;\n \n-/**\n- */\n public final class InnerHitsContext {\n \n private final Map<String, BaseInnerHits> innerHits;\n@@ -110,6 +108,10 @@ public InnerHitsContext innerHits() {\n public void setChildInnerHits(Map<String, InnerHitsContext.BaseInnerHits> childInnerHits) {\n this.childInnerHits = new InnerHitsContext(childInnerHits);\n }\n+\n+ boolean doCountOnly() {\n+ return from() < 0 || from() >= size() || size() <= 0;\n+ }\n }\n \n public static final class NestedInnerHits extends BaseInnerHits {\n@@ -135,7 +137,7 @@ public TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContex\n Query childFilter = childObjectMapper.nestedTypeFilter();\n Query q = Queries.filtered(query(), new NestedChildrenQuery(parentFilter, childFilter, hitContext));\n \n- if (size() == 0) {\n+ if (doCountOnly()) {\n return new TopDocs(context.searcher().count(q), Lucene.EMPTY_SCORE_DOCS, 0);\n } else {\n int topN = Math.min(from() + size(), context.searcher().getIndexReader().maxDoc());\n@@ -310,7 +312,7 @@ public TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContex\n // Only include docs that have this inner hits type\n .add(documentMapper.typeFilter(), Occur.FILTER)\n .build();\n- if (size() == 0) {\n+ if (doCountOnly()) {\n final int count = context.searcher().count(q);\n return new TopDocs(count, Lucene.EMPTY_SCORE_DOCS, 0);\n } else {", "filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsContext.java", "status": "modified" }, { "diff": "@@ -179,6 +179,35 @@ public void testSimpleNested() throws Exception {\n assertThat(innerHits.getAt(0).explanation().toString(), containsString(\"weight(comments.message:fox in\"));\n assertThat(innerHits.getAt(0).getFields().get(\"comments.message\").getValue().toString(), equalTo(\"eat\"));\n assertThat(innerHits.getAt(0).getFields().get(\"script\").getValue().toString(), equalTo(\"5\"));\n+\n+ response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\"), ScoreMode.Avg)\n+ .innerHit(new InnerHitBuilder().setName(\"comment\").setFrom(2).setSize(3))\n+ ).get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertSearchHit(response, 1, hasId(\"2\"));\n+ assertThat(response.getHits().getAt(0).getShard(), notNullValue());\n+ assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n+ innerHits = response.getHits().getAt(0).getInnerHits().get(\"comment\");\n+ assertThat(innerHits.totalHits(), equalTo(3L));\n+ assertThat(innerHits.getHits().length, equalTo(1));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"2\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getOffset(), equalTo(2));\n+\n+ response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\"), ScoreMode.Avg)\n+ .innerHit(new InnerHitBuilder().setName(\"comment\").setFrom(3).setSize(3))\n+ ).get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertSearchHit(response, 1, hasId(\"2\"));\n+ assertThat(response.getHits().getAt(0).getShard(), notNullValue());\n+ 
assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n+ innerHits = response.getHits().getAt(0).getInnerHits().get(\"comment\");\n+ assertThat(innerHits.totalHits(), equalTo(3L));\n+ assertThat(innerHits.getHits().length, equalTo(0));\n }\n \n public void testRandomNested() throws Exception {", "filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/InnerHitsIT.java", "status": "modified" }, { "diff": "@@ -46,8 +46,6 @@\n \n import static org.hamcrest.Matchers.equalTo;\n \n-/**\n- */\n public class NestedChildrenFilterTests extends ESTestCase {\n public void testNestedChildrenFilter() throws Exception {\n int numParentDocs = scaledRandomIntBetween(0, 32);", "filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/NestedChildrenFilterTests.java", "status": "modified" } ] }
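The inner-hits change above falls back to a plain count when the requested page cannot return documents, so `total` is still reported instead of 0. A dependency-free sketch of that idea follows; the `Page` class and the exact bounds check are illustrative (the PR's `doCountOnly()` compares `from` against `size`, while this sketch checks against the total for simplicity):

```java
import java.util.Collections;
import java.util.List;

public class InnerHitsPaging {

    // Illustrative stand-in for a TopDocs-like result: total matches plus the returned slice.
    static final class Page {
        final long totalHits;
        final List<String> hits;
        Page(long totalHits, List<String> hits) { this.totalHits = totalHits; this.hits = hits; }
    }

    // When the requested page cannot return documents, still report the real total instead of 0.
    static Page page(List<String> allMatches, int from, int size) {
        long total = allMatches.size();
        if (from < 0 || size <= 0 || from >= total) {
            return new Page(total, Collections.emptyList()); // "count only": keep totalHits, drop the hits
        }
        int to = (int) Math.min(total, (long) from + size);
        return new Page(total, allMatches.subList(from, to));
    }
}
```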
{ "body": "The following produces a stack overflow in Painless:\n\n```\nPOST t/t/1/_update\n{\n \"upsert\": {},\n \"scripted_upsert\": true,\n \"script\": {\n \"lang\": \"painless\",\n \"inline\": \"for (def key : params.keySet()) { ctx._source[key] = params[key]}\"\n }, \n \"params\": {\n \"bar\": \"two\",\n \"baz\": \"two\"\n }\n}\n```\n", "comments": [ { "body": "And related (perhaps?), `params` can't be used with this syntax: `params[var]`:\n\n```\nPOST t/t/1/_update\n{\n \"upsert\": {},\n \"scripted_upsert\": true,\n \"script\": {\n \"lang\": \"painless\",\n \"inline\": \"for (def key : ['bar','baz']) { ctx._source[key] = params[key]}\"\n }, \n \"params\": {\n \"bar\": \"two\",\n \"baz\": \"two\"\n }\n}\n```\n\nproduces:\n\n```\n \"_source\": {\n \"bar\": null,\n \"baz\": null\n }\n```\n", "created_at": "2016-07-18T13:22:19Z" }, { "body": "Just got back from vacation, so I'll take a look at this issue today.\n", "created_at": "2016-07-25T16:30:07Z" }, { "body": "@clintongormley \n\nIn the first example, the reason you are seeing stack overflow is because the result ends up being infinitely recursive. Params contains 'ctx,' 'bar,' and 'baz' (more than just the user input); not just 'bar' and 'baz.' This means that _source has 'ctx' -> '_source' -> 'ctx' -> 'source' ... forever :). This is interesting behavior (not exactly a bug), and maybe something we should exclusively prevent from happening. \n\nIn the second example, I'm not sure why those are null values, when I run the parameters as a local unit test, it seems to work well. There must be some disconnect between in coming user values somewhere along the line, and this will require some more investigation.\n", "created_at": "2016-07-25T23:39:44Z" }, { "body": "> Params contains 'ctx,' 'bar,' and 'baz' (more than just the user input); not just 'bar' and 'baz.' This means that _source has 'ctx' -> '_source' -> 'ctx' -> 'source' ... forever :).\n\nAh ok.\n\n> This is interesting behavior (not exactly a bug), and maybe something we should exclusively prevent from happening.\n\nJust so you know, it crashes the node :)\n", "created_at": "2016-07-27T16:45:39Z" }, { "body": "@clintongormley In your second example from https://github.com/elastic/elasticsearch/issues/19475#issuecomment-233326494, the `params` should be included in the `script` object, not at the same level.\n", "created_at": "2016-09-22T12:25:03Z" }, { "body": "Doh - thanks\n", "created_at": "2016-09-23T15:17:53Z" } ], "number": 19475, "title": "Stack overflow in Painless" }
{ "body": "Some objects like maps, iterables or arrays of objects can self-reference themselves. This is mostly due to a bug in code but the XContentBuilder should be able to detect such situations and throws an IllegalArgumentException instead of building objects over and over until a stackoverflow occurs.\n\ncloses #20540\ncloses #19475\n", "number": 20550, "review_comments": [ { "body": "This should probably be an IdentityHashSet.\n", "created_at": "2016-09-19T10:09:09Z" }, { "body": "could be just `if (ancestors.add(value) == false)` when moving to a set\n", "created_at": "2016-09-19T10:11:15Z" }, { "body": "why does the split help?\n", "created_at": "2016-09-19T10:12:04Z" }, { "body": "I didn't know about this one, thanks\n", "created_at": "2016-09-19T10:52:57Z" }, { "body": "Oh, it's a leftover.\n", "created_at": "2016-09-19T10:53:27Z" }, { "body": "let's use Arrays.asList?\n", "created_at": "2016-09-23T11:14:34Z" }, { "body": "let's throw an exception in the else clause since it should never happen if I read the rest of the code correctly?\n", "created_at": "2016-09-23T11:15:53Z" }, { "body": "and remove the below null check\n", "created_at": "2016-09-23T11:16:14Z" }, { "body": "actually it would be easier to create an iterable than in iterator I think?\n", "created_at": "2016-09-23T11:17:13Z" }, { "body": "otherwise I would be happy to just return in the else block and remove the instanceof check in the while loop below\n", "created_at": "2016-09-23T11:18:09Z" }, { "body": "Yes\n", "created_at": "2016-10-10T08:52:10Z" }, { "body": "Makes sense, thanks\n", "created_at": "2016-10-10T08:52:17Z" }, { "body": "can you use wildcard generics to avoid compiler warnings? (`Iterable<?>`)\n", "created_at": "2016-10-10T13:20:05Z" }, { "body": "Sure\n", "created_at": "2016-10-11T07:47:27Z" } ], "title": "XContentBuilder: Avoid building self-referencing objects" }
{ "commits": [ { "message": "XContentBuilder: Avoid building self-referencing objects\n\nSome objects like maps, iterables or arrays of objects can self-reference themselves. This is mostly due to a bug in code but the XContentBuilder should be able to detect such situations and throws an IllegalArgumentException instead of building objects over and over until a stackoverflow occurs.\n\ncloses #20540\ncloses #19475" } ], "files": [ { "diff": "@@ -38,10 +38,12 @@\n import java.io.InputStream;\n import java.io.OutputStream;\n import java.nio.file.Path;\n+import java.util.Arrays;\n import java.util.Calendar;\n import java.util.Collections;\n import java.util.Date;\n import java.util.HashMap;\n+import java.util.IdentityHashMap;\n import java.util.Locale;\n import java.util.Map;\n import java.util.Objects;\n@@ -778,6 +780,11 @@ XContentBuilder values(Object[] values) throws IOException {\n if (values == null) {\n return nullValue();\n }\n+\n+ // checks that the array of object does not contain references to itself because\n+ // iterating over entries will cause a stackoverflow error\n+ ensureNoSelfReferences(values);\n+\n startArray();\n for (Object o : values) {\n value(o);\n@@ -859,6 +866,11 @@ public XContentBuilder map(Map<String, ?> values) throws IOException {\n if (values == null) {\n return nullValue();\n }\n+\n+ // checks that the map does not contain references to itself because\n+ // iterating over map entries will cause a stackoverflow error\n+ ensureNoSelfReferences(values);\n+\n startObject();\n for (Map.Entry<String, ?> value : values.entrySet()) {\n field(value.getKey());\n@@ -881,6 +893,10 @@ private XContentBuilder value(Iterable<?> values) throws IOException {\n //treat as single value\n value((Path) values);\n } else {\n+ // checks that the iterable does not contain references to itself because\n+ // iterating over entries will cause a stackoverflow error\n+ ensureNoSelfReferences(values);\n+\n startArray();\n for (Object value : values) {\n unknownValue(value);\n@@ -1012,4 +1028,32 @@ static void ensureNotNull(Object value, String message) {\n throw new IllegalArgumentException(message);\n }\n }\n+\n+ static void ensureNoSelfReferences(Object value) {\n+ ensureNoSelfReferences(value, Collections.newSetFromMap(new IdentityHashMap<>()));\n+ }\n+\n+ private static void ensureNoSelfReferences(final Object value, final Set<Object> ancestors) {\n+ if (value != null) {\n+\n+ Iterable<?> it;\n+ if (value instanceof Map) {\n+ it = ((Map) value).values();\n+ } else if ((value instanceof Iterable) && (value instanceof Path == false)) {\n+ it = (Iterable) value;\n+ } else if (value instanceof Object[]) {\n+ it = Arrays.asList((Object[]) value);\n+ } else {\n+ return;\n+ }\n+\n+ if (ancestors.add(value) == false) {\n+ throw new IllegalArgumentException(\"Object has already been built and is self-referencing itself\");\n+ }\n+ for (Object o : it) {\n+ ensureNoSelfReferences(o, ancestors);\n+ }\n+ ancestors.remove(value);\n+ }\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/common/xcontent/XContentBuilder.java", "status": "modified" }, { "diff": "@@ -47,15 +47,18 @@\n import java.io.IOException;\n import java.math.BigInteger;\n import java.nio.file.Path;\n+import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Calendar;\n import java.util.Collections;\n import java.util.Date;\n import java.util.HashMap;\n+import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n import java.util.concurrent.TimeUnit;\n \n+import static 
java.util.Collections.emptyMap;\n import static java.util.Collections.singletonMap;\n import static org.hamcrest.Matchers.allOf;\n import static org.hamcrest.Matchers.containsString;\n@@ -846,6 +849,140 @@ public void testEnsureNotNull() {\n XContentBuilder.ensureNotNull(\"foo\", \"No exception must be thrown\");\n }\n \n+ public void testEnsureNoSelfReferences() throws IOException {\n+ XContentBuilder.ensureNoSelfReferences(emptyMap());\n+ XContentBuilder.ensureNoSelfReferences(null);\n+\n+ Map<String, Object> map = new HashMap<>();\n+ map.put(\"field\", map);\n+\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> builder().map(map));\n+ assertThat(e.getMessage(), containsString(\"Object has already been built and is self-referencing itself\"));\n+ }\n+\n+ /**\n+ * Test that the same map written multiple times do not trigger the self-reference check in\n+ * {@link XContentBuilder#ensureNoSelfReferences(Object)}\n+ */\n+ public void testRepeatedMapsAndNoSelfReferences() throws Exception {\n+ Map<String, Object> mapB = singletonMap(\"b\", \"B\");\n+ Map<String, Object> mapC = singletonMap(\"c\", \"C\");\n+ Map<String, Object> mapD = singletonMap(\"d\", \"D\");\n+ Map<String, Object> mapA = new HashMap<>();\n+ mapA.put(\"a\", 0);\n+ mapA.put(\"b1\", mapB);\n+ mapA.put(\"b2\", mapB);\n+ mapA.put(\"c\", Arrays.asList(mapC, mapC));\n+ mapA.put(\"d1\", mapD);\n+ mapA.put(\"d2\", singletonMap(\"d3\", mapD));\n+\n+ final String expected =\n+ \"{'map':{'b2':{'b':'B'},'a':0,'c':[{'c':'C'},{'c':'C'}],'d1':{'d':'D'},'d2':{'d3':{'d':'D'}},'b1':{'b':'B'}}}\";\n+\n+ assertResult(expected, () -> builder().startObject().field(\"map\", mapA).endObject());\n+ assertResult(expected, () -> builder().startObject().field(\"map\").value(mapA).endObject());\n+ assertResult(expected, () -> builder().startObject().field(\"map\").map(mapA).endObject());\n+ }\n+\n+ public void testSelfReferencingMapsOneLevel() throws IOException {\n+ Map<String, Object> map0 = new HashMap<>();\n+ Map<String, Object> map1 = new HashMap<>();\n+\n+ map0.put(\"foo\", 0);\n+ map0.put(\"map1\", map1); // map 0 -> map 1\n+\n+ map1.put(\"bar\", 1);\n+ map1.put(\"map0\", map0); // map 1 -> map 0 loop\n+\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> builder().map(map0));\n+ assertThat(e.getMessage(), containsString(\"Object has already been built and is self-referencing itself\"));\n+ }\n+\n+ public void testSelfReferencingMapsTwoLevels() throws IOException {\n+ Map<String, Object> map0 = new HashMap<>();\n+ Map<String, Object> map1 = new HashMap<>();\n+ Map<String, Object> map2 = new HashMap<>();\n+\n+ map0.put(\"foo\", 0);\n+ map0.put(\"map1\", map1); // map 0 -> map 1\n+\n+ map1.put(\"bar\", 1);\n+ map1.put(\"map2\", map2); // map 1 -> map 2\n+\n+ map2.put(\"baz\", 2);\n+ map2.put(\"map0\", map0); // map 2 -> map 0 loop\n+\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> builder().map(map0));\n+ assertThat(e.getMessage(), containsString(\"Object has already been built and is self-referencing itself\"));\n+ }\n+\n+ public void testSelfReferencingObjectsArray() throws IOException {\n+ Object[] values = new Object[3];\n+ values[0] = 0;\n+ values[1] = 1;\n+ values[2] = values;\n+\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> builder()\n+ .startObject()\n+ .field(\"field\", values)\n+ .endObject());\n+ assertThat(e.getMessage(), containsString(\"Object has already been built and is self-referencing 
itself\"));\n+\n+ e = expectThrows(IllegalArgumentException.class, () -> builder()\n+ .startObject()\n+ .array(\"field\", values)\n+ .endObject());\n+ assertThat(e.getMessage(), containsString(\"Object has already been built and is self-referencing itself\"));\n+ }\n+\n+ public void testSelfReferencingIterable() throws IOException {\n+ List<Object> values = new ArrayList<>();\n+ values.add(\"foo\");\n+ values.add(\"bar\");\n+ values.add(values);\n+\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> builder()\n+ .startObject()\n+ .field(\"field\", (Iterable) values)\n+ .endObject());\n+ assertThat(e.getMessage(), containsString(\"Object has already been built and is self-referencing itself\"));\n+ }\n+\n+ public void testSelfReferencingIterableOneLevel() throws IOException {\n+ Map<String, Object> map = new HashMap<>();\n+ map.put(\"foo\", 0);\n+ map.put(\"bar\", 1);\n+\n+ Iterable<Object> values = Arrays.asList(\"one\", \"two\", map);\n+ map.put(\"baz\", values);\n+\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> builder()\n+ .startObject()\n+ .field(\"field\", (Iterable) values)\n+ .endObject());\n+ assertThat(e.getMessage(), containsString(\"Object has already been built and is self-referencing itself\"));\n+ }\n+\n+ public void testSelfReferencingIterableTwoLevels() throws IOException {\n+ Map<String, Object> map0 = new HashMap<>();\n+ Map<String, Object> map1 = new HashMap<>();\n+ Map<String, Object> map2 = new HashMap<>();\n+\n+ List<Object> it1 = new ArrayList<>();\n+\n+ map0.put(\"foo\", 0);\n+ map0.put(\"it1\", (Iterable<?>) it1); // map 0 -> it1\n+\n+ it1.add(map1);\n+ it1.add(map2); // it 1 -> map 1, map 2\n+\n+ map2.put(\"baz\", 2);\n+ map2.put(\"map0\", map0); // map 2 -> map 0 loop\n+\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> builder().map(map0));\n+ assertThat(e.getMessage(), containsString(\"Object has already been built and is self-referencing itself\"));\n+ }\n+\n private static void expectUnclosedException(ThrowingRunnable runnable) {\n IllegalStateException e = expectThrows(IllegalStateException.class, runnable);\n assertThat(e.getMessage(), containsString(\"Failed to close the XContentBuilder\"));", "filename": "core/src/test/java/org/elasticsearch/common/xcontent/BaseXContentTestCase.java", "status": "modified" }, { "diff": "@@ -58,4 +58,30 @@\n \n - match: { _source.foo: yyy }\n - match: { _source.count: 1 }\n- \n+\n+---\n+\"Update Script with script error\":\n+ - do:\n+ index:\n+ index: test_1\n+ type: test\n+ id: 2\n+ body:\n+ foo: bar\n+ count: 1\n+\n+ - do:\n+ catch: request\n+ update:\n+ index: test_1\n+ type: test\n+ id: 2\n+ body:\n+ script:\n+ lang: painless\n+ inline: \"for (def key : params.keySet()) { ctx._source[key] = params[key]}\"\n+ params: { bar: 'xxx' }\n+\n+ - match: { error.root_cause.0.type: \"remote_transport_exception\" }\n+ - match: { error.type: \"illegal_argument_exception\" }\n+ - match: { error.reason: \"Object has already been built and is self-referencing itself\" }", "filename": "modules/lang-painless/src/test/resources/rest-api-spec/test/plan_a/15_update.yaml", "status": "modified" }, { "diff": "@@ -20,7 +20,6 @@ setup:\n indices.refresh: {}\n \n ---\n-\n \"Scripted Field\":\n - do:\n search:\n@@ -34,3 +33,22 @@ setup:\n x: \"bbb\"\n \n - match: { hits.hits.0.fields.bar.0: \"aaabbb\"}\n+\n+---\n+\"Scripted Field with script error\":\n+ - do:\n+ catch: request\n+ search:\n+ body:\n+ script_fields:\n+ bar:\n+ script:\n+ lang: 
painless\n+ inline: \"while (true) {}\"\n+\n+ - match: { error.root_cause.0.type: \"script_exception\" }\n+ - match: { error.root_cause.0.reason: \"compile error\" }\n+ - match: { error.caused_by.type: \"script_exception\" }\n+ - match: { error.caused_by.reason: \"compile error\" }\n+ - match: { error.caused_by.caused_by.type: \"illegal_argument_exception\" }\n+ - match: { error.caused_by.caused_by.reason: \"While loop has no escape.\" }", "filename": "modules/lang-painless/src/test/resources/rest-api-spec/test/plan_a/20_scriptfield.yaml", "status": "modified" } ] }
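The review above converges on an identity-based ancestor set so that repeated but non-cyclic sub-objects stay legal while true cycles are rejected. A self-contained sketch of that detection idea outside of `XContentBuilder` (method and class names here are illustrative):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.Set;

public class CycleCheck {

    // Throws if the value graph reaches an object instance that is one of its own ancestors.
    // The ancestor set is identity-based on purpose: two equal but distinct maps are not a cycle.
    static void ensureNoCycles(Object value, Set<Object> ancestors) {
        final Iterable<?> children;
        if (value instanceof Map) {
            children = ((Map<?, ?>) value).values();
        } else if (value instanceof Iterable) {
            children = (Iterable<?>) value;
        } else if (value instanceof Object[]) {
            children = Arrays.asList((Object[]) value);
        } else {
            return; // leaf value, nothing to walk into
        }
        if (ancestors.add(value) == false) {
            throw new IllegalArgumentException("Object has already been built and is self-referencing itself");
        }
        for (Object child : children) {
            ensureNoCycles(child, ancestors);
        }
        ancestors.remove(value); // siblings may legitimately reuse the same object
    }

    public static void main(String[] args) {
        Map<String, Object> map = new HashMap<>();
        map.put("field", map); // direct self-reference, as in the tests above
        try {
            ensureNoCycles(map, Collections.newSetFromMap(new IdentityHashMap<>()));
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```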
{ "body": "Catching and suppressing `AlreadyClosedException` from Lucene is dangerous because it can mean there is a bug in ES since ES should normally guard against invoking Lucene classes after they were closed.\n\nI reviewed the cases where we catch `AlreadyClosedException` from Lucene and removed the ones that I believe are not needed, or improved comments explaining why ACE is OK in that case.\n\nI think (@s1monw can you confirm?) that holding the engine's `readLock` means IW will not be closed, except if disaster strikes (`failEngine`) at which point I think it's fine to see the original ACE in the logs?\n\nCloses #19861\n", "comments": [ { "body": "@jasontedor thank you for the feedback; I pushed new changes.\n", "created_at": "2016-08-12T15:17:20Z" }, { "body": "thanks mike for cleaning this up. We need to be very careful since EngineClosedException has a special meaning in replication that we use to retry documents so we should ensure that we suppress ACE when the engine is closed or also add ACE to the list of exceptions that trigger a retry?\n\nI also wonder about the SearcherManager contract a bit. It seems like we are closing things in the right order SM first then rollback the writer. so from my perspective hitting ACE on SM is always safe? if not that's a problem in SM or RM? I am not sure if IW protects the caller from this or if there are any concurrency issues but SM should be safe to be called at any time?\n", "created_at": "2016-08-16T19:36:13Z" }, { "body": "I unassigned myself here and on #19861: I don't think I'm qualified to improve the exception handling here.\n\nI had a chat with @s1monw about all the scary complexities; I think someone who better understands the locking / concurrency in the engine, and what the different exceptions mean to the distributed layer, etc., needs to tackle this.\n\n> I also wonder about the SearcherManager contract a bit. \n\nI'll look into this.\n", "created_at": "2016-08-16T19:54:04Z" }, { "body": "> I unassigned myself here and on #19861: I don't think I'm qualified to improve the exception handling here.\n\nI don't think so. It's something we have to solve, I can only encourage you to go and fix it. As long as we do the right thing in the indexing path I think we are fine?\n", "created_at": "2016-08-16T19:58:53Z" }, { "body": "@mikemccand I pushed new changes\n", "created_at": "2016-08-22T20:39:16Z" }, { "body": "LGTM, thanks @s1monw \n", "created_at": "2016-08-22T20:45:01Z" }, { "body": "@mikemccand I had to add another special case for forceMerge can you take another look\n", "created_at": "2016-08-23T07:27:54Z" }, { "body": "LGTM, thanks @s1monw \n", "created_at": "2016-08-23T09:55:39Z" } ], "number": 19975, "title": "Don't suppress AlreadyClosedException" }
{ "body": "Since #19975 we are aggressively failing with AssertionError when we catch an ACE\ninside the InternalEngine. We treat everything that is neither a tragic even on\nthe IndexWriter or the Translog as a bug and throw an AssertionError. Yet, if the\nengine hits an IOException on refresh of some sort and the IW doesn't realize it since\nit's not fully under it's control we fail he engine but neither IW nor Translog are marked\nas failed by tragic event while they are already closed.\nThis change takes the `failedEngine` exception into account and if it's set we know\nthat the engine failed by some other even than a tragic one and can continue.\n\nThis change also uses the `ReferenceManager#RefreshListener` interface in the engine rather\nthan it's concrete implementation.\n\nRelates to #19975\n", "number": 20546, "review_comments": [ { "body": "Should we make this a `logger.warn` instead? E.g. it could be that this new exception is the true root cause (e.g. disk full or something) whereas because of thread scheduling another (cascaded) exception (like IW's ACE) called `failEngine` first?\n", "created_at": "2016-09-19T09:10:04Z" }, { "body": "Is there a possible thread race condition here, where this thread is handling the `ACE`, but the other thread (that actually closed IW) hasn't been scheduled by the JVM yet and so it didn't get a chance to fail the engine?\n\nOh I see, this is why you fixed `failEngine` to first set `failedEngine` and THEN close the IW, good!\n", "created_at": "2016-09-19T09:26:46Z" }, { "body": "I mean we can but it will be likely a lot of logging everywhere? maybe worth it\n", "created_at": "2016-09-19T09:29:09Z" }, { "body": "yeah ;)\n", "created_at": "2016-09-19T09:29:41Z" }, { "body": "I made it a warn logging and added some comments where needed to clarify\n", "created_at": "2016-09-19T09:47:11Z" } ], "title": "Take refresh IOExceptions into account when catching ACE in InternalEngine" }
{ "commits": [ { "message": "Take refresh IOExceptions into account when catching ACE in InternalEngine\n\nSince #19975 we are aggressively failing with AssertionError when we catch an ACE\ninside the InternalEngine. We treat everything that is neither a tragic even on\nthe IndexWriter or the Translog as a bug and throw an AssertionError. Yet, if the\nengine hits an IOException on refresh of some sort and the IW doesn't realize it since\nit's not fully under it's control we fail he engine but neither IW nor Translog are marked\nas failed by tragic event while they are already closed.\nThis change takes the `failedEngine` exception into account and if it's set we know\nthat the engine failed by some other even than a tragic one and can continue.\n\nThis change also uses the `ReferenceManager#RefreshListener` interface in the engine rather\nthan it's concrete implementation.\n\nRelates to #19975" }, { "message": "add missing cut over to SetOnce" }, { "message": "add missing cut over to SetOnce" }, { "message": "don't set translog to refresh listener in shadow index shard" }, { "message": "add commments and log ignored engine failures as warn" }, { "message": "also expect EngineClosedException" } ], "files": [ { "diff": "@@ -42,6 +42,7 @@\n import org.apache.lucene.store.IOContext;\n import org.apache.lucene.util.Accountable;\n import org.apache.lucene.util.Accountables;\n+import org.apache.lucene.util.SetOnce;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.bytes.BytesReference;\n@@ -101,7 +102,7 @@ public abstract class Engine implements Closeable {\n protected final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();\n protected final ReleasableLock readLock = new ReleasableLock(rwl.readLock());\n protected final ReleasableLock writeLock = new ReleasableLock(rwl.writeLock());\n- protected volatile Exception failedEngine = null;\n+ protected final SetOnce<Exception> failedEngine = new SetOnce<>();\n /*\n * on <tt>lastWriteNanos</tt> we use System.nanoTime() to initialize this since:\n * - we use the value for figuring out if the shard / engine is active so if we startup and no write has happened yet we still consider it active\n@@ -377,7 +378,7 @@ public final Searcher acquireSearcher(String source) throws EngineException {\n \n protected void ensureOpen() {\n if (isClosed.get()) {\n- throw new EngineClosedException(shardId, failedEngine);\n+ throw new EngineClosedException(shardId, failedEngine.get());\n }\n }\n \n@@ -670,17 +671,19 @@ public void failEngine(String reason, @Nullable Exception failure) {\n if (failEngineLock.tryLock()) {\n store.incRef();\n try {\n+ if (failedEngine.get() != null) {\n+ logger.warn((Supplier<?>) () -> new ParameterizedMessage(\"tried to fail engine but engine is already failed. ignoring. [{}]\", reason), failure);\n+ return;\n+ }\n+ // this must happen before we close IW or Translog such that we can check this state to opt out of failing the engine\n+ // again on any caught AlreadyClosedException\n+ failedEngine.set((failure != null) ? failure : new IllegalStateException(reason));\n try {\n // we just go and close this engine - no way to recover\n closeNoLock(\"engine failed on: [\" + reason + \"]\");\n } finally {\n- if (failedEngine != null) {\n- logger.debug((Supplier<?>) () -> new ParameterizedMessage(\"tried to fail engine but engine is already failed. ignoring. 
[{}]\", reason), failure);\n- return;\n- }\n logger.warn((Supplier<?>) () -> new ParameterizedMessage(\"failed engine [{}]\", reason), failure);\n // we must set a failure exception, generate one if not supplied\n- failedEngine = (failure != null) ? failure : new IllegalStateException(reason);\n // we first mark the store as corrupted before we notify any listeners\n // this must happen first otherwise we might try to reallocate so quickly\n // on the same node that we don't see the corrupted marker file when", "filename": "core/src/main/java/org/elasticsearch/index/engine/Engine.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.apache.lucene.index.SnapshotDeletionPolicy;\n import org.apache.lucene.search.QueryCache;\n import org.apache.lucene.search.QueryCachingPolicy;\n+import org.apache.lucene.search.ReferenceManager;\n import org.apache.lucene.search.similarities.Similarity;\n import org.elasticsearch.action.index.IndexRequest;\n import org.elasticsearch.common.Nullable;\n@@ -34,7 +35,6 @@\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.codec.CodecService;\n-import org.elasticsearch.index.shard.RefreshListeners;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.TranslogRecoveryPerformer;\n import org.elasticsearch.index.store.Store;\n@@ -68,7 +68,7 @@ public final class EngineConfig {\n private final QueryCachingPolicy queryCachingPolicy;\n private final long maxUnsafeAutoIdTimestamp;\n @Nullable\n- private final RefreshListeners refreshListeners;\n+ private final ReferenceManager.RefreshListener refreshListeners;\n \n /**\n * Index setting to change the low level lucene codec used for writing new segments.\n@@ -112,7 +112,7 @@ public EngineConfig(OpenMode openMode, ShardId shardId, ThreadPool threadPool,\n MergePolicy mergePolicy, Analyzer analyzer,\n Similarity similarity, CodecService codecService, Engine.EventListener eventListener,\n TranslogRecoveryPerformer translogRecoveryPerformer, QueryCache queryCache, QueryCachingPolicy queryCachingPolicy,\n- TranslogConfig translogConfig, TimeValue flushMergesAfter, RefreshListeners refreshListeners,\n+ TranslogConfig translogConfig, TimeValue flushMergesAfter, ReferenceManager.RefreshListener refreshListeners,\n long maxUnsafeAutoIdTimestamp) {\n if (openMode == null) {\n throw new IllegalArgumentException(\"openMode must not be null\");\n@@ -322,9 +322,9 @@ public enum OpenMode {\n }\n \n /**\n- * {@linkplain RefreshListeners} instance to configure.\n+ * {@linkplain ReferenceManager.RefreshListener} instance to configure.\n */\n- public RefreshListeners getRefreshListeners() {\n+ public ReferenceManager.RefreshListener getRefreshListeners() {\n return refreshListeners;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/engine/EngineConfig.java", "status": "modified" }, { "diff": "@@ -82,9 +82,6 @@\n import java.util.concurrent.locks.ReentrantLock;\n import java.util.function.Function;\n \n-/**\n- *\n- */\n public class InternalEngine extends Engine {\n /**\n * When we last pruned expired tombstones from versionMap.deletes:\n@@ -170,7 +167,6 @@ public InternalEngine(EngineConfig engineConfig) throws EngineException {\n allowCommits.compareAndSet(true, openMode != EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG);\n if (engineConfig.getRefreshListeners() != null) {\n searcherManager.addListener(engineConfig.getRefreshListeners());\n- 
engineConfig.getRefreshListeners().setTranslog(translog);\n }\n success = true;\n } finally {\n@@ -951,7 +947,7 @@ private void failOnTragicEvent(AlreadyClosedException ex) {\n failEngine(\"already closed by tragic event on the index writer\", tragedy);\n } else if (translog.isOpen() == false && translog.getTragicException() != null) {\n failEngine(\"already closed by tragic event on the translog\", translog.getTragicException());\n- } else {\n+ } else if (failedEngine.get() == null) { // we are closed but the engine is not failed yet?\n // this smells like a bug - we only expect ACE if we are in a fatal case ie. either translog or IW is closed by\n // a tragic event or has closed itself. if that is not the case we are in a buggy state and raise an assertion error\n throw new AssertionError(\"Unexpected AlreadyClosedException\", ex);", "filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java", "status": "modified" }, { "diff": "@@ -992,14 +992,18 @@ private void internalPerformTranslogRecovery(boolean skipTranslogRecovery, boole\n // but we need to make sure we don't loose deletes until we are done recovering\n config.setEnableGcDeletes(false);\n Engine newEngine = createNewEngine(config);\n+ onNewEngine(newEngine);\n verifyNotClosed();\n if (openMode == EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG) {\n // We set active because we are now writing operations to the engine; this way, if we go idle after some time and become inactive,\n // we still give sync'd flush a chance to run:\n active.set(true);\n newEngine.recoverFromTranslog();\n }\n+ }\n \n+ protected void onNewEngine(Engine newEngine) {\n+ refreshListeners.setTranslog(newEngine.getTranslog());\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java", "status": "modified" }, { "diff": "@@ -114,4 +114,9 @@ public void addRefreshListener(Translog.Location location, Consumer<Boolean> lis\n public Store.MetadataSnapshot snapshotStoreMetadata() throws IOException {\n throw new UnsupportedOperationException(\"can't snapshot the directory as the primary may change it underneath us\");\n }\n+\n+ @Override\n+ protected void onNewEngine(Engine newEngine) {\n+ // nothing to do here - the superclass sets the translog on some listeners but we don't have such a thing\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/shard/ShadowIndexShard.java", "status": "modified" }, { "diff": "@@ -59,7 +59,6 @@\n import java.util.List;\n import java.util.Optional;\n import java.util.Set;\n-import java.util.concurrent.ScheduledFuture;\n import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.locks.ReadWriteLock;\n import java.util.concurrent.locks.ReentrantReadWriteLock;\n@@ -112,7 +111,6 @@ public class Translog extends AbstractIndexShardComponent implements IndexShardC\n \n // the list of translog readers is guaranteed to be in order of translog generation\n private final List<TranslogReader> readers = new ArrayList<>();\n- private volatile ScheduledFuture<?> syncScheduler;\n // this is a concurrent set and is not protected by any of the locks. 
The main reason\n // is that is being accessed by two separate classes (additions & reading are done by Translog, remove by View when closed)\n private final Set<View> outstandingViews = ConcurrentCollections.newConcurrentSet();\n@@ -312,7 +310,6 @@ public void close() throws IOException {\n closeFilesIfNoPendingViews();\n }\n } finally {\n- FutureUtils.cancel(syncScheduler);\n logger.debug(\"translog closed\");\n }\n }", "filename": "core/src/main/java/org/elasticsearch/index/translog/Translog.java", "status": "modified" }, { "diff": "@@ -43,6 +43,7 @@\n import org.apache.lucene.index.TieredMergePolicy;\n import org.apache.lucene.search.IndexSearcher;\n import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.ReferenceManager;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.search.TopDocs;\n import org.apache.lucene.search.TotalHitCountCollector;\n@@ -276,15 +277,16 @@ protected InternalEngine createEngine(Store store, Path translogPath) throws IOE\n }\n \n protected InternalEngine createEngine(IndexSettings indexSettings, Store store, Path translogPath, MergePolicy mergePolicy) throws IOException {\n- EngineConfig config = config(indexSettings, store, translogPath, mergePolicy, IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP);\n+ EngineConfig config = config(indexSettings, store, translogPath, mergePolicy, IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, null);\n InternalEngine internalEngine = new InternalEngine(config);\n if (config.getOpenMode() == EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG) {\n internalEngine.recoverFromTranslog();\n }\n return internalEngine;\n }\n \n- public EngineConfig config(IndexSettings indexSettings, Store store, Path translogPath, MergePolicy mergePolicy, long maxUnsafeAutoIdTimestamp) {\n+ public EngineConfig config(IndexSettings indexSettings, Store store, Path translogPath, MergePolicy mergePolicy,\n+ long maxUnsafeAutoIdTimestamp, ReferenceManager.RefreshListener refreshListener) {\n IndexWriterConfig iwc = newIndexWriterConfig();\n TranslogConfig translogConfig = new TranslogConfig(shardId, translogPath, indexSettings, BigArrays.NON_RECYCLING_INSTANCE);\n final EngineConfig.OpenMode openMode;\n@@ -306,7 +308,8 @@ public void onFailedEngine(String reason, @Nullable Exception e) {\n EngineConfig config = new EngineConfig(openMode, shardId, threadPool, indexSettings, null, store, createSnapshotDeletionPolicy(),\n mergePolicy, iwc.getAnalyzer(), iwc.getSimilarity(), new CodecService(null, logger), listener,\n new TranslogHandler(shardId.getIndexName(), logger), IndexSearcher.getDefaultQueryCache(),\n- IndexSearcher.getDefaultQueryCachingPolicy(), translogConfig, TimeValue.timeValueMinutes(5), null, maxUnsafeAutoIdTimestamp);\n+ IndexSearcher.getDefaultQueryCachingPolicy(), translogConfig, TimeValue.timeValueMinutes(5), refreshListener,\n+ maxUnsafeAutoIdTimestamp);\n \n return config;\n }\n@@ -903,7 +906,7 @@ public void testSearchResultRelease() throws Exception {\n public void testSyncedFlush() throws IOException {\n try (Store store = createStore();\n Engine engine = new InternalEngine(config(defaultSettings, store, createTempDir(),\n- new LogByteSizeMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP))) {\n+ new LogByteSizeMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, null))) {\n final String syncId = randomUnicodeOfCodepointLengthBetween(10, 20);\n ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n engine.index(new 
Engine.Index(newUid(\"1\"), doc));\n@@ -930,7 +933,7 @@ public void testRenewSyncFlush() throws Exception {\n for (int i = 0; i < iters; i++) {\n try (Store store = createStore();\n InternalEngine engine = new InternalEngine(config(defaultSettings, store, createTempDir(),\n- new LogDocMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP))) {\n+ new LogDocMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, null))) {\n final String syncId = randomUnicodeOfCodepointLengthBetween(10, 20);\n ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n Engine.Index doc1 = new Engine.Index(newUid(\"1\"), doc);\n@@ -1158,7 +1161,7 @@ public void testExternalVersioningIndexConflictWithFlush() {\n public void testForceMerge() throws IOException {\n try (Store store = createStore();\n Engine engine = new InternalEngine(config(defaultSettings, store, createTempDir(),\n- new LogByteSizeMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP))) { // use log MP here we test some behavior in ESMP\n+ new LogByteSizeMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, null))) { // use log MP here we test some behavior in ESMP\n int numDocs = randomIntBetween(10, 100);\n for (int i = 0; i < numDocs; i++) {\n ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), \"test\", null, -1, -1, testDocument(), B_1, null);\n@@ -1592,7 +1595,7 @@ public void testIndexWriterIFDInfoStream() throws IllegalAccessException {\n \n public void testEnableGcDeletes() throws Exception {\n try (Store store = createStore();\n- Engine engine = new InternalEngine(config(defaultSettings, store, createTempDir(), newMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP))) {\n+ Engine engine = new InternalEngine(config(defaultSettings, store, createTempDir(), newMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, null))) {\n engine.config().setEnableGcDeletes(false);\n \n // Add document\n@@ -1728,7 +1731,7 @@ public void testMissingTranslog() throws IOException {\n // expected\n }\n // now it should be OK.\n- EngineConfig config = copy(config(defaultSettings, store, primaryTranslogDir, newMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP), EngineConfig.OpenMode.OPEN_INDEX_CREATE_TRANSLOG);\n+ EngineConfig config = copy(config(defaultSettings, store, primaryTranslogDir, newMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, null), EngineConfig.OpenMode.OPEN_INDEX_CREATE_TRANSLOG);\n engine = new InternalEngine(config);\n }\n \n@@ -2103,7 +2106,7 @@ public void run() {\n \n public void testCurrentTranslogIDisCommitted() throws IOException {\n try (Store store = createStore()) {\n- EngineConfig config = config(defaultSettings, store, createTempDir(), newMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP);\n+ EngineConfig config = config(defaultSettings, store, createTempDir(), newMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, null);\n \n // create\n {\n@@ -2368,15 +2371,15 @@ public void run() {\n public void testEngineMaxTimestampIsInitialized() throws IOException {\n try (Store store = createStore();\n Engine engine = new InternalEngine(config(defaultSettings, store, createTempDir(), NoMergePolicy.INSTANCE,\n- IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP))) {\n+ IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, null))) {\n assertEquals(IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, engine.segmentsStats(false).getMaxUnsafeAutoIdTimestamp());\n \n }\n \n long maxTimestamp = 
Math.abs(randomLong());\n try (Store store = createStore();\n Engine engine = new InternalEngine(config(defaultSettings, store, createTempDir(), NoMergePolicy.INSTANCE,\n- maxTimestamp))) {\n+ maxTimestamp, null))) {\n assertEquals(maxTimestamp, engine.segmentsStats(false).getMaxUnsafeAutoIdTimestamp());\n }\n }\n@@ -2435,4 +2438,70 @@ public static long getNumVersionLookups(InternalEngine engine) { // for other te\n public static long getNumIndexVersionsLookups(InternalEngine engine) { // for other tests to access this\n return engine.getNumIndexVersionsLookups();\n }\n+\n+ public void testFailEngineOnRandomIO() throws IOException, InterruptedException {\n+ MockDirectoryWrapper wrapper = newMockDirectory();\n+ final Path translogPath = createTempDir(\"testFailEngineOnRandomIO\");\n+ try (Store store = createStore(wrapper)) {\n+ CyclicBarrier join = new CyclicBarrier(2);\n+ CountDownLatch start = new CountDownLatch(1);\n+ AtomicInteger controller = new AtomicInteger(0);\n+ EngineConfig config = config(defaultSettings, store, translogPath, newMergePolicy(),\n+ IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, new ReferenceManager.RefreshListener() {\n+ @Override\n+ public void beforeRefresh() throws IOException {\n+ }\n+\n+ @Override\n+ public void afterRefresh(boolean didRefresh) throws IOException {\n+ int i = controller.incrementAndGet();\n+ if (i == 1) {\n+ throw new MockDirectoryWrapper.FakeIOException();\n+ } else if (i == 2) {\n+ try {\n+ start.await();\n+ } catch (InterruptedException e) {\n+ throw new AssertionError(e);\n+ }\n+ throw new AlreadyClosedException(\"boom\");\n+ }\n+ }\n+ });\n+ InternalEngine internalEngine = new InternalEngine(config);\n+ int docId = 0;\n+ final ParsedDocument doc = testParsedDocument(Integer.toString(docId), Integer.toString(docId), \"test\", null, docId, -1,\n+ testDocumentWithTextField(), new BytesArray(\"{}\".getBytes(Charset.defaultCharset())), null);\n+\n+ Engine.Index index = randomAppendOnly(docId, doc, false);\n+ internalEngine.index(index);\n+ Runnable r = () -> {\n+ try {\n+ join.await();\n+ } catch (Exception e) {\n+ throw new AssertionError(e);\n+ }\n+ try {\n+ internalEngine.refresh(\"test\");\n+ fail();\n+ } catch (EngineClosedException ex) {\n+ // we can't guarantee that we are entering the refresh call before it's fully\n+ // closed so we also expecting ECE here\n+ assertTrue(ex.toString(), ex.getCause() instanceof MockDirectoryWrapper.FakeIOException);\n+ } catch (RefreshFailedEngineException | AlreadyClosedException ex) {\n+ // fine\n+ } finally {\n+ start.countDown();\n+ }\n+\n+ };\n+ Thread t = new Thread(r);\n+ Thread t1 = new Thread(r);\n+ t.start();\n+ t1.start();\n+ t.join();\n+ t1.join();\n+ assertTrue(internalEngine.isClosed.get());\n+ assertTrue(internalEngine.failedEngine.get() instanceof MockDirectoryWrapper.FakeIOException);\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java", "status": "modified" }, { "diff": "@@ -126,6 +126,7 @@ store, new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy()), newMe\n IndexSearcher.getDefaultQueryCache(), IndexSearcher.getDefaultQueryCachingPolicy(), translogConfig,\n TimeValue.timeValueMinutes(5), listeners, IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP);\n engine = new InternalEngine(config);\n+ listeners.setTranslog(engine.getTranslog());\n }\n \n @After", "filename": "core/src/test/java/org/elasticsearch/index/shard/RefreshListenersTests.java", "status": "modified" } ] }
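The engine change above publishes the failure reason in a `SetOnce` before closing anything, so a concurrent `AlreadyClosedException` can be distinguished from a genuine bug. A simplified sketch of that ordering and of the decision taken when an ACE is caught; the class is illustrative, but `SetOnce` and `AlreadyClosedException` are the Lucene types used in the diff:

```java
import org.apache.lucene.store.AlreadyClosedException;
import org.apache.lucene.util.SetOnce;

public class FailureTracker {

    private final SetOnce<Exception> failure = new SetOnce<>();

    // Record the failure reason first, then release resources, so concurrent threads that
    // trip over AlreadyClosedException can still see why the component went down.
    synchronized void fail(String reason, Exception cause) {
        if (failure.get() != null) {
            return; // already failed once; keep the original cause
        }
        failure.set(cause != null ? cause : new IllegalStateException(reason));
        // ... close the writer / translog here, after the failure has been published ...
    }

    // Called from a catch (AlreadyClosedException ace) block.
    void onAlreadyClosed(AlreadyClosedException ace) {
        if (failure.get() == null) {
            // closed, but nobody recorded a reason: unexpected, surface it loudly
            throw new AssertionError("Unexpected AlreadyClosedException", ace);
        }
        // otherwise the ACE is just a symptom of an earlier, already reported failure
    }
}
```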
{ "body": "Currently, we silently accept malformed query where more\nthan one key is defined at the top-level for query object.\nIf all the keys have a valid query body, only the last query\nis executed, besides throwing off parsing for additional suggest,\naggregation or highlighting defined in the search request.\n\nThis commit throws a parsing exception when we encounter a query\nwith multiple keys.\n\ncloses #20500\n", "comments": [ { "body": "LGTM - I love that this is only backed by unittest we came a long way :)\n", "created_at": "2016-09-15T20:42:23Z" }, { "body": "backported to branch [5.0](https://github.com/elastic/elasticsearch/commit/27b36e76bf12871541fa80042545f4219e9a1b40) and [5.x](https://github.com/elastic/elasticsearch/commit/4f5d6966309122887e4117c8d7a2aacc8b4a5543)\n", "created_at": "2016-09-15T20:58:35Z" } ], "number": 20515, "title": "Fix silently accepting malformed queries" }
{ "body": "Followup to #20515 where we added validation that after we parse a query within a query element, we should not get a field name. Truth is that the only token allowed at that point is END_OBJECT, as our DSL allows only one single query within the query object:\n\n```\n{\n \"query\" : {\n \"term\" : { \"field\" : \"value\" }\n }\n}\n```\n\nWe can then check that after parsing of the query we have an end_object that closes the query itself (which we already do). Following that we can check that the query object is immediately closed, as there are no other tokens that can be present in that position.\n\nSee https://github.com/elastic/elasticsearch/pull/19791#discussion-diff-73647510 too\n\nRelates to #20515\n", "number": 20528, "review_comments": [ { "body": "Could you indent this object so it is easier to read?\n", "created_at": "2016-09-16T17:10:10Z" }, { "body": "Having the `(` on this line is pretty strange. Can you move it back up?\n", "created_at": "2016-09-16T17:11:30Z" }, { "body": "yea I blame intellij ;)\n", "created_at": "2016-09-16T17:29:09Z" }, { "body": "ok ok\n", "created_at": "2016-09-16T17:32:34Z" } ], "title": "Throw error if query element doesn't end with END_OBJECT" }
{ "commits": [ { "message": "Throw error if query element doesn't end with END_OBJECT\n\nFollowup to #20515 where we added validation that after we parse a query within a query element, we should not get a field name. Truth is that the only token allowed at that point is END_OBJECT, as our DSL allows only one single query within the query object:\n\n```\n{\n \"query\" : {\n \"term\" : { \"field\" : \"value\" }\n }\n}\n```\n\nWe can then check that after parsing of the query we have an end_object that closes the query itself (which we already do). Following that we can check that the query object is immediately closed, as there are no other tokens that can be present in that position.\n\nRelates to #20515" }, { "message": "review" } ], "files": [ { "diff": "@@ -338,10 +338,6 @@ public static Optional<BoolQueryBuilder> fromXContent(QueryParseContext parseCon\n default:\n throw new ParsingException(parser.getTokenLocation(), \"[bool] query does not support [\" + currentFieldName + \"]\");\n }\n- if (parser.currentToken() != XContentParser.Token.END_OBJECT) {\n- throw new ParsingException(parser.getTokenLocation(),\n- \"expected [END_OBJECT] but got [{}], possibly too many query clauses\", parser.currentToken());\n- }\n } else if (token == XContentParser.Token.START_ARRAY) {\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n switch (currentFieldName) {", "filename": "core/src/main/java/org/elasticsearch/index/query/BoolQueryBuilder.java", "status": "modified" }, { "diff": "@@ -25,11 +25,9 @@\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.logging.DeprecationLogger;\n import org.elasticsearch.common.logging.Loggers;\n-import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.indices.query.IndicesQueriesRegistry;\n import org.elasticsearch.script.Script;\n-import org.elasticsearch.script.ScriptSettings;\n \n import java.io.IOException;\n import java.util.Objects;\n@@ -95,16 +93,12 @@ public QueryBuilder parseTopLevelQueryBuilder() {\n * Parses a query excluding the query element that wraps it\n */\n public Optional<QueryBuilder> parseInnerQueryBuilder() throws IOException {\n- // move to START object\n- XContentParser.Token token;\n if (parser.currentToken() != XContentParser.Token.START_OBJECT) {\n- token = parser.nextToken();\n- if (token != XContentParser.Token.START_OBJECT) {\n+ if (parser.nextToken() != XContentParser.Token.START_OBJECT) {\n throw new ParsingException(parser.getTokenLocation(), \"[_na] query malformed, must start with start_object\");\n }\n }\n- token = parser.nextToken();\n- if (token == XContentParser.Token.END_OBJECT) {\n+ if (parser.nextToken() == XContentParser.Token.END_OBJECT) {\n // we encountered '{}' for a query clause\n String msg = \"query malformed, empty clause found at [\" + parser.getTokenLocation() +\"]\";\n DEPRECATION_LOGGER.deprecated(msg);\n@@ -113,26 +107,26 @@ public Optional<QueryBuilder> parseInnerQueryBuilder() throws IOException {\n }\n return Optional.empty();\n }\n- if (token != XContentParser.Token.FIELD_NAME) {\n+ if (parser.currentToken() != XContentParser.Token.FIELD_NAME) {\n throw new ParsingException(parser.getTokenLocation(), \"[_na] query malformed, no field after start_object\");\n }\n String queryName = parser.currentName();\n // move to the next START_OBJECT\n- token = parser.nextToken();\n- if (token != XContentParser.Token.START_OBJECT) {\n+ if (parser.nextToken() != 
XContentParser.Token.START_OBJECT) {\n throw new ParsingException(parser.getTokenLocation(), \"[\" + queryName + \"] query malformed, no start_object after query name\");\n }\n @SuppressWarnings(\"unchecked\")\n Optional<QueryBuilder> result = (Optional<QueryBuilder>) indicesQueriesRegistry.lookup(queryName, parseFieldMatcher,\n parser.getTokenLocation()).fromXContent(this);\n+ //end_object of the specific query (e.g. match, multi_match etc.) element\n if (parser.currentToken() != XContentParser.Token.END_OBJECT) {\n throw new ParsingException(parser.getTokenLocation(),\n \"[\" + queryName + \"] malformed query, expected [END_OBJECT] but found [\" + parser.currentToken() + \"]\");\n }\n- parser.nextToken();\n- if (parser.currentToken() == XContentParser.Token.FIELD_NAME) {\n+ //end_object of the query object\n+ if (parser.nextToken() != XContentParser.Token.END_OBJECT) {\n throw new ParsingException(parser.getTokenLocation(),\n- \"[\" + queryName + \"] malformed query, unexpected [FIELD_NAME] found [\" + parser.currentName() + \"]\");\n+ \"[\" + queryName + \"] malformed query, expected [END_OBJECT] but found [\" + parser.currentToken() + \"]\");\n }\n return result;\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/QueryParseContext.java", "status": "modified" }, { "diff": "@@ -365,12 +365,22 @@ public void testUnknownQueryName() throws IOException {\n * test that two queries in object throws error\n */\n public void testTooManyQueriesInObject() throws IOException {\n- String clauseType = randomFrom(new String[] {\"must\", \"should\", \"must_not\", \"filter\"});\n+ String clauseType = randomFrom(\"must\", \"should\", \"must_not\", \"filter\");\n // should also throw error if invalid query is preceded by a valid one\n- String query = \"{\\\"bool\\\" : {\\\"\" + clauseType\n- + \"\\\" : { \\\"match\\\" : { \\\"foo\\\" : \\\"bar\\\" } , \\\"match\\\" : { \\\"baz\\\" : \\\"buzz\\\" } } } }\";\n+ String query = \"{\\n\" +\n+ \" \\\"bool\\\": {\\n\" +\n+ \" \\\"\" + clauseType + \"\\\": {\\n\" +\n+ \" \\\"match\\\": {\\n\" +\n+ \" \\\"foo\\\": \\\"bar\\\"\\n\" +\n+ \" },\\n\" +\n+ \" \\\"match\\\": {\\n\" +\n+ \" \\\"baz\\\": \\\"buzz\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n ParsingException ex = expectThrows(ParsingException.class, () -> parseQuery(query, ParseFieldMatcher.EMPTY));\n- assertEquals(\"[match] malformed query, unexpected [FIELD_NAME] found [match]\", ex.getMessage());\n+ assertEquals(\"[match] malformed query, expected [END_OBJECT] but found [FIELD_NAME]\", ex.getMessage());\n }\n \n public void testRewrite() throws IOException {", "filename": "core/src/test/java/org/elasticsearch/index/query/BoolQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -700,7 +700,7 @@ public void testMalformedQueryFunctionFieldNotSupported() throws IOException {\n expectParsingException(json, \"field [not_supported] is not supported\");\n }\n \n- public void testMalformedQuery() throws IOException {\n+ public void testMalformedQueryMultipleQueryObjects() throws IOException {\n //verify that an error is thrown rather than setting the query twice (https://github.com/elastic/elasticsearch/issues/16583)\n String json = \"{\\n\" +\n \" \\\"function_score\\\":{\\n\" +\n@@ -715,15 +715,34 @@ public void testMalformedQuery() throws IOException {\n \" }\\n\" +\n \" }\\n\" +\n \"}\";\n- expectParsingException(json, equalTo(\"[bool] malformed query, unexpected [FIELD_NAME] found [ignored_field_name]\"));\n+ expectParsingException(json, equalTo(\"[bool] 
malformed query, expected [END_OBJECT] but found [FIELD_NAME]\"));\n }\n \n- private void expectParsingException(String json, Matcher<String> messageMatcher) {\n+ public void testMalformedQueryMultipleQueryElements() throws IOException {\n+ String json = \"{\\n\" +\n+ \" \\\"function_score\\\":{\\n\" +\n+ \" \\\"query\\\":{\\n\" +\n+ \" \\\"bool\\\":{\\n\" +\n+ \" \\\"must\\\":{\\\"match\\\":{\\\"field\\\":\\\"value\\\"}}\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \" \\\"query\\\":{\\n\" +\n+ \" \\\"bool\\\":{\\n\" +\n+ \" \\\"must\\\":{\\\"match\\\":{\\\"field\\\":\\\"value\\\"}}\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+ expectParsingException(json, \"[query] is already defined.\");\n+ }\n+\n+ private static void expectParsingException(String json, Matcher<String> messageMatcher) {\n ParsingException e = expectThrows(ParsingException.class, () -> parseQuery(json));\n assertThat(e.getMessage(), messageMatcher);\n }\n \n- private void expectParsingException(String json, String message) {\n+ private static void expectParsingException(String json, String message) {\n expectParsingException(json, equalTo(\"failed to parse [function_score] query. \" + message));\n }\n ", "filename": "core/src/test/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -423,8 +423,9 @@ public void testParseIncludeExclude() throws IOException {\n }\n }\n \n- public void testInvalid() throws Exception {\n- String restContent = \" { \\\"query\\\": {\\n\" +\n+ public void testMultipleQueryObjectsAreRejected() throws Exception {\n+ String restContent =\n+ \" { \\\"query\\\": {\\n\" +\n \" \\\"multi_match\\\": {\\n\" +\n \" \\\"query\\\": \\\"workd\\\",\\n\" +\n \" \\\"fields\\\": [\\\"title^5\\\", \\\"plain_body\\\"]\\n\" +\n@@ -436,11 +437,9 @@ public void testInvalid() throws Exception {\n \" }\\n\" +\n \" } }\";\n try (XContentParser parser = XContentFactory.xContent(restContent).createParser(restContent)) {\n- SearchSourceBuilder.fromXContent(createParseContext(parser),\n- searchRequestParsers.aggParsers, searchRequestParsers.suggesters, searchRequestParsers.searchExtParsers);\n- fail(\"invalid query syntax multiple keys under query\");\n- } catch (ParsingException e) {\n- assertThat(e.getMessage(), containsString(\"filters\"));\n+ ParsingException e = expectThrows(ParsingException.class, () -> SearchSourceBuilder.fromXContent(createParseContext(parser),\n+ searchRequestParsers.aggParsers, searchRequestParsers.suggesters, searchRequestParsers.searchExtParsers));\n+ assertEquals(\"[multi_match] malformed query, expected [END_OBJECT] but found [FIELD_NAME]\", e.getMessage());\n }\n }\n ", "filename": "core/src/test/java/org/elasticsearch/search/builder/SearchSourceBuilderTests.java", "status": "modified" } ] }
{ "body": "The query below only outputs hits and not suggestions. If I drop the query part, I get suggestions, but of course no hits.\n\n``` bash\ncurl -XPOST http://localhost:9200/_search -d '\n{\n \"query\": {\n \"multi_match\": {\n \"query\": \"workd\",\n \"fields\": [\"title^5\", \"plain_body\"]\n },\n \"filters\": {\n \"terms\": {\n \"status\": [ 3 ]\n }\n }\n },\n \"suggest\": {\n \"text\": \"workd\",\n \"title\": {\n \"term\": {\n \"field\": \"title\"\n }\n },\n \"body\": {\n \"term\": {\n \"field\": \"plain_body\"\n }\n }\n }\n}'\n```\n", "comments": [ { "body": "that doesn't sound good @areek can you please take a look?\n", "created_at": "2016-09-15T12:46:18Z" }, { "body": "I am not 100% certain (as I might have made a mistake) but I think that `highlight` is ignored as well when added to the requests. It seems like...\n", "created_at": "2016-09-15T14:01:50Z" }, { "body": "I just moved the query part to the end of the request and the suggestions started working. It seems like someone `brake`-ed out earlier after handling the `query` case.\n", "created_at": "2016-09-15T14:04:37Z" }, { "body": "@itay-grudev can you post the response you are getting from the `_search` endpoint when you define query and suggest?\n", "created_at": "2016-09-15T14:25:54Z" }, { "body": "Which case (suggest/query order)?\n", "created_at": "2016-09-15T14:27:49Z" }, { "body": "This is a bug indeed but I don't think it's related to the suggest part. Your query is not valid but it is not caught by the query parser. `filters` is not recognized as a query and it seems that it breaks the parsing of the whole query. I'll investigate but you should check your queries first in order to remove the part that are not valid. \n", "created_at": "2016-09-15T14:28:32Z" }, { "body": "@jimferenczi I don't think it is a bug with suggesters either. I think it is the query parser as well, as the problem doesn't occur when I change the order of the `query` and `suggest` statements.\n", "created_at": "2016-09-15T14:31:16Z" }, { "body": "@jimferenczi `query` should only accept a single key, not multiple\n", "created_at": "2016-09-15T14:33:29Z" }, { "body": "@clintongormley \n\nI would like to note that if I place the `filters` block in the root object I get the:\n\n``` txt\n{\"error\":{\"root_cause\":[{\"type\":\"parsing_exception\",\"reason\":\"Unknown key for a START_OBJECT in [filters].\",\"line\":1,\"col\":219}],\"type\":\"parsing_exception\",\"reason\":\"Unknown key for a START_OBJECT in [filters].\",\"line\":1,\"col\":219},\"status\":400}\n```\n\nThis also happens if I use `filter` (singular).\n\n@areek P.S. Dumps coming in a minute.\n", "created_at": "2016-09-15T14:38:26Z" }, { "body": "@clintongormley yep but for some reason this is not caught by the parser and we end up with a weird query that is a mix of the two keys (at least in my reproduction scenario). The weird thing is that the validation API rejects the query but the _search endpoint doesn't. I'll investigate ... \n", "created_at": "2016-09-15T14:39:24Z" }, { "body": "@jimferenczi What is the correct syntax for the query? 
Where should the `suggest`, `filters` and `highlight` parts be?\n", "created_at": "2016-09-15T14:40:37Z" }, { "body": "@areek Dump with query and then suggest: http://pastebin.com/qLgnZ1VQ\nDump with suggest and a query after that: http://pastebin.com/uweRX0pA (working as normal).\n", "created_at": "2016-09-15T14:47:36Z" }, { "body": "@itay-grudev can you try:\n\n```\ncurl -XPOST http://localhost:9200/_search -d '{\n \"query\": {\n \"bool\": {\n \"must\": [{\n \"multi_match\": {\n \"query\": \"workd\",\n \"fields\": [\"title^5\", \"plain_body\"]\n }\n }],\n \"filter\": [{\n \"terms\": {\n \"status\": [3]\n }\n }]\n }\n },\n \"suggest\": {\n \"text\": \"workd\",\n \"title\": {\n \"term\": {\n \"field\": \"title\"\n }\n },\n \"body\": {\n \"term\": {\n \"field\": \"plain_body\"\n }\n }\n }\n}'\n```\n", "created_at": "2016-09-15T14:59:29Z" }, { "body": "This works, yes. Thanks.\n", "created_at": "2016-09-15T15:04:26Z" }, { "body": "My problem is resolved. When you handle your query parser problem feel free to close the issue.\n", "created_at": "2016-09-15T15:15:43Z" } ], "number": 20500, "title": "Query with suggester doesn't output suggestions[v5.0.0a5]" }
{ "body": "Currently, we silently accept malformed query where more\nthan one key is defined at the top-level for query object.\nIf all the keys have a valid query body, only the last query\nis executed, besides throwing off parsing for additional suggest,\naggregation or highlighting defined in the search request.\n\nThis commit throws a parsing exception when we encounter a query\nwith multiple keys.\n\ncloses #20500\n", "number": 20515, "review_comments": [], "title": "Fix silently accepting malformed queries" }
{ "commits": [ { "message": "Fix silently accepting malformed queries\n\nCurrently, we silently accept malformed query where more\nthan one key is defined at the top-level for query object.\nIf all the keys have a valid query body, only the last query\nis executed, besides throwing off parsing for additional suggest,\naggregation or highlighting defined in the search request.\n\nThis commit throws a parsing exception when we encounter a query\nwith multiple keys.\n\ncloses #20500" } ], "files": [ { "diff": "@@ -130,6 +130,10 @@ public Optional<QueryBuilder> parseInnerQueryBuilder() throws IOException {\n \"[\" + queryName + \"] malformed query, expected [END_OBJECT] but found [\" + parser.currentToken() + \"]\");\n }\n parser.nextToken();\n+ if (parser.currentToken() == XContentParser.Token.FIELD_NAME) {\n+ throw new ParsingException(parser.getTokenLocation(),\n+ \"[\" + queryName + \"] malformed query, unexpected [FIELD_NAME] found [\" + parser.currentName() + \"]\");\n+ }\n return result;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/query/QueryParseContext.java", "status": "modified" }, { "diff": "@@ -370,7 +370,7 @@ public void testTooManyQueriesInObject() throws IOException {\n String query = \"{\\\"bool\\\" : {\\\"\" + clauseType\n + \"\\\" : { \\\"match\\\" : { \\\"foo\\\" : \\\"bar\\\" } , \\\"match\\\" : { \\\"baz\\\" : \\\"buzz\\\" } } } }\";\n ParsingException ex = expectThrows(ParsingException.class, () -> parseQuery(query, ParseFieldMatcher.EMPTY));\n- assertEquals(\"expected [END_OBJECT] but got [FIELD_NAME], possibly too many query clauses\", ex.getMessage());\n+ assertEquals(\"[match] malformed query, unexpected [FIELD_NAME] found [match]\", ex.getMessage());\n }\n \n public void testRewrite() throws IOException {", "filename": "core/src/test/java/org/elasticsearch/index/query/BoolQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -715,7 +715,7 @@ public void testMalformedQuery() throws IOException {\n \" }\\n\" +\n \" }\\n\" +\n \"}\";\n- expectParsingException(json, \"[query] is already defined.\");\n+ expectParsingException(json, equalTo(\"[bool] malformed query, unexpected [FIELD_NAME] found [ignored_field_name]\"));\n }\n \n private void expectParsingException(String json, Matcher<String> messageMatcher) {", "filename": "core/src/test/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.ParseFieldMatcher;\n+import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n@@ -422,6 +423,27 @@ public void testParseIncludeExclude() throws IOException {\n }\n }\n \n+ public void testInvalid() throws Exception {\n+ String restContent = \" { \\\"query\\\": {\\n\" +\n+ \" \\\"multi_match\\\": {\\n\" +\n+ \" \\\"query\\\": \\\"workd\\\",\\n\" +\n+ \" \\\"fields\\\": [\\\"title^5\\\", \\\"plain_body\\\"]\\n\" +\n+ \" },\\n\" +\n+ \" \\\"filters\\\": {\\n\" +\n+ \" \\\"terms\\\": {\\n\" +\n+ \" \\\"status\\\": [ 3 ]\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" } }\";\n+ try (XContentParser parser = XContentFactory.xContent(restContent).createParser(restContent)) {\n+ SearchSourceBuilder.fromXContent(createParseContext(parser),\n+ searchRequestParsers.aggParsers, searchRequestParsers.suggesters, 
searchRequestParsers.searchExtParsers);\n+ fail(\"invalid query syntax multiple keys under query\");\n+ } catch (ParsingException e) {\n+ assertThat(e.getMessage(), containsString(\"filters\"));\n+ }\n+ }\n+\n public void testParseSort() throws IOException {\n {\n String restContent = \" { \\\"sort\\\": \\\"foo\\\"}\";", "filename": "core/src/test/java/org/elasticsearch/search/builder/SearchSourceBuilderTests.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 2.4.0\n\n**Plugins installed**: []\n\n**JVM version**: 1.8.0_91\n\n**OS version**: OS X 10.11.6\n\n**Description of the problem including expected versus actual behavior**:\nWhen I try the following query:\n\n```\n{\n multi_match: {\n fields: [\"_all\"],\n query: \"hel\",\n type: \"phrase_prefix\"\n}\n```\n\nIt doesn't return documents that contains `hel` as prefix but only documents that contains `hel` as a token.\n\n**Steps to reproduce**:\n1. \n\n```\nPOST cleanindex/type\n{\n field: \"hello world\"\n}\n```\n\n 2.\n\n```\nGET cleanindex/type/_search\n{\n query: {\n multi_match: {\n fields: [\"_all\"],\n query: \"hel\",\n type: \"phrase_prefix\"\n }\n }\n}\n```\n\n-> It doesn't return any document, while (I think) it should.\n 3.\n\n```\nGET cleanindex/type/_search\n{\n query: {\n multi_match: {\n fields: [\"_all\"],\n query: \"hello\",\n type: \"phrase_prefix\"\n }\n }\n}\n```\n\n-> It returns the document.\n", "comments": [ { "body": "Thank you for the detailed report @francescoinfante \n\nI opened https://github.com/elastic/elasticsearch/pull/20471\n", "created_at": "2016-09-14T10:07:14Z" } ], "number": 20470, "title": "multi_match with type phrase_prefix query doesn't return all the results on the field _all" }
{ "body": "This change fixes the match_phrase_prefix query when a single term is queried on the _all field.\nIt builds a prefix query instead of an AllTermQuery which would not match any prefix.\n\nFixes #20470\n", "number": 20471, "review_comments": [], "title": "Fix match_phrase_prefix query with single term on _all field" }
{ "commits": [ { "message": "Fix match_phrase_prefix query with single term on _all field\n\nThis change fixes the match_phrase_prefix query when a single term is queried on the _all field.\nIt builds a prefix query instead of an AllTermQuery which would not match any prefix.\n\nFixes #20470" }, { "message": "Add missing change" } ], "files": [ { "diff": "@@ -64,6 +64,10 @@ public AllTermQuery(Term term) {\n this.term = term;\n }\n \n+ public Term getTerm() {\n+ return term;\n+ }\n+\n @Override\n public boolean equals(Object obj) {\n if (sameClassAs(obj) == false) {", "filename": "core/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java", "status": "modified" }, { "diff": "@@ -37,6 +37,7 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Writeable;\n+import org.elasticsearch.common.lucene.all.AllTermQuery;\n import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.unit.Fuzziness;\n@@ -324,6 +325,9 @@ public Query createPhrasePrefixQuery(String field, String queryText, int phraseS\n } else if (query instanceof TermQuery) {\n prefixQuery.add(((TermQuery) query).getTerm());\n return prefixQuery;\n+ } else if (query instanceof AllTermQuery) {\n+ prefixQuery.add(((AllTermQuery) query).getTerm());\n+ return prefixQuery;\n }\n return query;\n }", "filename": "core/src/main/java/org/elasticsearch/index/search/MatchQuery.java", "status": "modified" }, { "diff": "@@ -22,15 +22,16 @@\n import org.apache.lucene.index.Term;\n import org.apache.lucene.queries.BlendedTermQuery;\n import org.apache.lucene.search.BooleanClause;\n+import org.apache.lucene.search.BooleanClause.Occur;\n import org.apache.lucene.search.BooleanQuery;\n import org.apache.lucene.search.BoostQuery;\n import org.apache.lucene.search.DisjunctionMaxQuery;\n import org.apache.lucene.search.MatchAllDocsQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.TermQuery;\n-import org.apache.lucene.search.BooleanClause.Occur;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.compress.CompressedXContent;\n+import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.index.mapper.MapperService;\n@@ -45,6 +46,8 @@\n import java.util.Arrays;\n \n import static org.elasticsearch.index.query.QueryBuilders.multiMatchQuery;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.instanceOf;\n \n public class MultiMatchQueryTests extends ESSingleNodeTestCase {\n \n@@ -154,4 +157,13 @@ public Query termQuery(Object value, QueryShardContext context) {\n Query actual = MultiMatchQuery.blendTerm(new BytesRef(\"baz\"), null, 1f, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3));\n assertEquals(expected, actual);\n }\n+\n+ public void testMultiMatchPrefixWithAllField() throws IOException {\n+ QueryShardContext queryShardContext = indexService.newQueryShardContext();\n+ queryShardContext.setAllowUnmappedFields(true);\n+ Query parsedQuery =\n+ multiMatchQuery(\"foo\").field(\"_all\").type(MultiMatchQueryBuilder.Type.PHRASE_PREFIX).toQuery(queryShardContext);\n+ assertThat(parsedQuery, instanceOf(MultiPhrasePrefixQuery.class));\n+ assertThat(parsedQuery.toString(), equalTo(\"_all:\\\"foo*\\\"\"));\n+ }\n }", 
"filename": "core/src/test/java/org/elasticsearch/index/search/MultiMatchQueryTests.java", "status": "modified" } ] }
{ "body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue. Note that whether you're filing a bug report or a\nfeature request, ensure that your submission is for an\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\nBug reports on an OS that we do not support or feature requests\nspecific to an OS that we do not support will be closed.\n-->\n\n<!--\nIf you are filing a bug report, please remove the below feature\nrequest block and provide responses for all of the below items.\n-->\n\n**Elasticsearch version**:2.3.4\n\n**Plugins installed**: [ 'analysis-ik', 'kopf', ' license', 'marvel-agent' ]\n\n**JVM version**:Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)\n\n**OS version**:Debian 8\n\n**Description of the problem including expected versus actual behavior**:\n\n```\nelasticsearch.yml\nnetwork.host: 10.160.98.78\nnode.name: \"${HOSTNAME}_${NODE_ZONE}\"\ntribe:\n e100:\n cluster.name: logstash-es\n discovery.zen.ping.timeout: 100s\n network.host: 10.160.98.78\n discovery.zen.ping.multicast.enabled: false\n discovery.zen.ping.unicast.hosts: [\"10.120.69.96\", \"10.120.69.97\", ...]\n e101:\n cluster.name: es-new\n discovery.zen.ping.timeout: 100s\n network.host: 10.160.98.78\n discovery.zen.ping.multicast.enabled: false\n discovery.zen.ping.unicast.hosts: [ \"10.63.72.10\", \"10.63.72.11\", ...\" ]\n e102:\n cluster.name: es-102\n discovery.zen.ping.timeout: 100s\n network.host: 10.160.98.78\n discovery.zen.ping.multicast.enabled: false\n discovery.zen.ping.unicast.hosts: [ \"10.63.72.12\" ]\n blocks.indices.write: e100\n on_conflict: prefer_e100\nscript.engine.groovy.inline.search: true\nscript.engine.groovy.inline.aggs: true\npath.plugins: \"/home/elk/running/elasticsearch/plugins_tribe\"\n```\n\n**Steps to reproduce**:\n- When the timer expires night to clean index, tribe node may not work properly (We have 2 tribe node,It seems to work fine `19.36`, `98.78` error)\n\n```\ncurl 10.160.98.78:9200/_cat/indices\n{\"error\":{\"root_cause\":[{\"type\":\"null_pointer_exception\",\"reason\":null}],\"type\":\"null_pointer_exception\",\"reason\":null},\"status\":500}\n```\n\n```\ncurl http://10.63.19.36:9200/_cat/indices\ngreen open g15_zzz-2016.06.30 1 0 3 0 6.8kb 6.8kb\ngreen open g18_tmp_nat_iptables-fileupload 1 0 1172 0 283kb 283kb\ngreen open xy2freeclient-2016.09 1 1 70172 0 13.7mb 6.8mb\ngreen open xy2freeclient-2016.08 1 1 1434906 0 250.1mb 125.1mb\ngreen open appdown_accesslog-2016.08 1 1 107015373 0 34.6gb 17.3gb\n```\n- We use curator to close or delete indices\n\n```\n def delete(self, indices):\n \"\"\"\n :param indices: 删除列表\n :return:\n \"\"\"\n w = indices if isinstance(indices, list) else [indices]\n return curator.delete_indices(self.client, w)\n\n def close(self, indices):\n \"\"\"\n\n :param indices: 关闭列表\n :return:\n \"\"\"\n w = indices if isinstance(indices, list) else [indices]\n return curator.close_indices(self.client, w)\n```\n\n```\n2016-09-01 03:30:24,773 - __main__ - INFO - ---------------------- voicelog start ----------------------\nGET / {} None\nGET / {} None\nGET /_cat/indices/voicelog-2016.08.26 {'h': 'status', u'format': 'json'} None\nGET / {} None\nPOST /voicelog-2016.08.26/_flush/synced {} None\nPOST /voicelog-2016.08.26/_close {'ignore_unavailable': 'true'} 
None\n2016-09-01 03:30:26,722 - __main__ - INFO - 关闭索引:voicelog-2016.08.26 ret:True\n2016-09-01 03:30:26,722 - __main__ - INFO - 关闭5天前的index:voicelog-2016.08.26 过期天数:6\n```\n- After restart tribe node `98.78` back to normal\n\n**Provide logs (if relevant)**:\n\n```\n[2016-09-01 03:30:26,331][INFO ][tribe ] [elk-edata04-101_tribe] [e100] removing index [voicelog-2016.08.26]\n[2016-09-01 03:30:26,332][WARN ][tribe ] [elk-edata04-101_tribe] failed to process [cluster event from e100, zen-disco-receive(from master [{elk-edata05-100}{Kf1SqrhFR9ywP_fMAB_Jdw}{10.120.69.109}{10.120.69.109:9300}{master=true}])]\njava.lang.NullPointerException\n[2016-09-01 03:30:26,741][WARN ][cluster.service ] [elk-edata04-101_tribe/e100] failed to notify ClusterStateListener\njava.lang.ClassCastException: org.elasticsearch.license.plugin.core.LicensesMetaData cannot be cast to org.elasticsearch.license.plugin.core.LicensesMetaData\n at org.elasticsearch.license.plugin.core.LicensesService.clusterChanged(LicensesService.java:466)\n at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:610)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2016-09-01 03:30:26,748][INFO ][tribe ] [elk-edata04-101_tribe] [e100] removing index [voicelog-2016.08.26]\n[2016-09-01 03:30:26,748][WARN ][tribe ] [elk-edata04-101_tribe] failed to process [cluster event from e100, zen-disco-receive(from master [{elk-edata05-100}{Kf1SqrhFR9ywP_fMAB_Jdw}{10.120.69.109}{10.120.69.109:9300}{master=true}])]\njava.lang.NullPointerException\n[2016-09-01 03:30:31,685][WARN ][cluster.service ] [elk-edata04-101_tribe/e100] failed to notify ClusterStateListener\njava.lang.ClassCastException: org.elasticsearch.license.plugin.core.LicensesMetaData cannot be cast to org.elasticsearch.license.plugin.core.LicensesMetaData\n at org.elasticsearch.license.plugin.core.LicensesService.clusterChanged(LicensesService.java:466)\n at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:610)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2016-09-02 18:28:18,359][WARN ][rest.suppressed ] path: /_cat/indices, params: {}\njava.lang.NullPointerException\n at org.elasticsearch.rest.action.cat.RestIndicesAction.buildTable(RestIndicesAction.java:345)\n at 
org.elasticsearch.rest.action.cat.RestIndicesAction.access$100(RestIndicesAction.java:52)\n at org.elasticsearch.rest.action.cat.RestIndicesAction$1$1$1.buildResponse(RestIndicesAction.java:111)\n at org.elasticsearch.rest.action.cat.RestIndicesAction$1$1$1.buildResponse(RestIndicesAction.java:108)\n at org.elasticsearch.rest.action.support.RestResponseListener.processResponse(RestResponseListener.java:43)\n at org.elasticsearch.rest.action.support.RestActionListener.onResponse(RestActionListener.java:49)\n at org.elasticsearch.action.support.ThreadedActionListener$1.doRun(ThreadedActionListener.java:89)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n```\n", "comments": [ { "body": "@zcola you're sure you're using 2.3.4 on both tribe nodes and clusters? I ask because this should have been fixed by https://github.com/elastic/elasticsearch/pull/18545\n", "created_at": "2016-09-02T15:09:30Z" }, { "body": "@clintongormley Yes, All nodes are running Version 2.3.4 ,In the tribe node not just request `_cat/indices` return 500, request some index return 404. And are upgrading from 2.3.2 to 2.3.4, such problems (small chance) appears, we do not know there is no correlation?\n", "created_at": "2016-09-03T03:04:22Z" }, { "body": "@areek could you take a look at this please?\n", "created_at": "2016-09-07T15:30:18Z" }, { "body": "@zcola Assuming you are running `_cat/indices` against the tribe node and getting the `NullPointerException` transiently after removing/closing indices in some underlying cluster.\n\nThe tribe node keeps track of all the indices for underlying clusters, when you remove or close indices, the cluster state (of the underlying cluster) gets updated and then the tribe node updates its state subsequently. If a `_cat/indices` request hits the tribe node **before** the tribe nodes updates its state but **after** an underlying cluster closes/removes it's indices. You can run into `NPE` for `_cat/indices` as the closed/removed indices do not have any available shards to report stats from. I will work on a PR to fix this, but in the meantime can you report:\n- if this error is transient\n- does the `_cat/indices` succeed when you point it directly at any underlying cluster\n", "created_at": "2016-09-13T18:09:08Z" }, { "body": "2.4.4 will still appear, we use \"_cat / indices\" as node health monitoring", "created_at": "2017-08-11T02:11:57Z" }, { "body": "Note that a fix for this issue existed in #20464, but needs to be revived to be mergeable.", "created_at": "2017-10-10T00:01:44Z" }, { "body": "Tribe node was removed in #28443, so this issue is no longer relevant.", "created_at": "2018-04-17T19:22:17Z" } ], "number": 20298, "title": "java.lang.NullPointerException in Tribe node" }
{ "body": "Currently, when an index exists in the cluster state but has no shards for reporting stats,\nthe missing stats object cause a `NullPointerException` when requesting the indices stats.\nIn this commit missing stats object for an index are initialized as empty stats instead\nof null, honoring the stats flags set in the stats request. The commit fixes the issue for all\nAPIs that use the indices stats API namely `_cat/indices`, `_cat/shards` and `_stats`.\n\ncloses #20298\n", "number": 20464, "review_comments": [ { "body": "why the all flags? it seems arbitrary? \n", "created_at": "2016-09-14T07:13:47Z" } ], "title": "Fix handling indices stats request when all shards are missing" }
{ "commits": [ { "message": "Fix handling indices stats request when all shards are missing\n\nCurrently, when an index exists in the cluster state but has no shards for reporting stats,\nthe missing stats object cause a `NullPointerException` when requesting the indices stats.\nIn this commit missing stats object for an index are initialized as empty stats instead\nof null, honoring the stats flags set in the stats request. The commit fixes the issue for all\nAPIs that use the indices stats API namely `_cat/indices`, `_cat/shards` and `_stats`.\n\ncloses https://github.com/elastic/elasticsearch/issues/20298" } ], "files": [ { "diff": "@@ -97,14 +97,15 @@ public class CommonStats implements Writeable, ToXContent {\n @Nullable\n public RecoveryStats recoveryStats;\n \n+ private CommonStatsFlags flags;\n+\n public CommonStats() {\n this(CommonStatsFlags.NONE);\n }\n \n public CommonStats(CommonStatsFlags flags) {\n- CommonStatsFlags.Flag[] setFlags = flags.getFlags();\n-\n- for (CommonStatsFlags.Flag flag : setFlags) {\n+ this.flags = flags;\n+ for (CommonStatsFlags.Flag flag : flags.getFlags()) {\n switch (flag) {\n case Docs:\n docs = new DocsStats();\n@@ -164,8 +165,8 @@ public CommonStats(CommonStatsFlags flags) {\n }\n \n public CommonStats(IndicesQueryCache indicesQueryCache, IndexShard indexShard, CommonStatsFlags flags) {\n- CommonStatsFlags.Flag[] setFlags = flags.getFlags();\n- for (CommonStatsFlags.Flag flag : setFlags) {\n+ this.flags = flags;\n+ for (CommonStatsFlags.Flag flag : flags.getFlags()) {\n switch (flag) {\n case Docs:\n docs = indexShard.docStats();\n@@ -225,6 +226,7 @@ public CommonStats(IndicesQueryCache indicesQueryCache, IndexShard indexShard, C\n }\n \n public CommonStats(StreamInput in) throws IOException {\n+ flags = new CommonStatsFlags(in);\n if (in.readBoolean()) {\n docs = DocsStats.readDocStats(in);\n }\n@@ -271,6 +273,7 @@ public CommonStats(StreamInput in) throws IOException {\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n+ flags.writeTo(out);\n if (docs == null) {\n out.writeBoolean(false);\n } else {\n@@ -486,6 +489,10 @@ public void add(CommonStats stats) {\n }\n }\n \n+ public CommonStatsFlags getFlags() {\n+ return this.flags;\n+ }\n+\n @Nullable\n public DocsStats getDocs() {\n return this.docs;", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/stats/CommonStats.java", "status": "modified" }, { "diff": "@@ -36,11 +36,14 @@ public class IndexShardStats implements Iterable<ShardStats>, Streamable {\n \n private ShardStats[] shards;\n \n+ private CommonStatsFlags flags;\n+\n private IndexShardStats() {}\n \n- public IndexShardStats(ShardId shardId, ShardStats[] shards) {\n+ public IndexShardStats(ShardId shardId, CommonStatsFlags flags, ShardStats[] shards) {\n this.shardId = shardId;\n this.shards = shards;\n+ this.flags = flags;\n }\n \n public ShardId getShardId() {\n@@ -63,31 +66,19 @@ public Iterator<ShardStats> iterator() {\n private CommonStats total = null;\n \n public CommonStats getTotal() {\n- if (total != null) {\n- return total;\n- }\n- CommonStats stats = new CommonStats();\n- for (ShardStats shard : shards) {\n- stats.add(shard.getStats());\n+ if (total == null) {\n+ total = ShardStats.calculateTotalStats(shards, flags);\n }\n- total = stats;\n- return stats;\n+ return total;\n }\n \n private CommonStats primary = null;\n \n public CommonStats getPrimary() {\n- if (primary != null) {\n- return primary;\n- }\n- CommonStats stats = new CommonStats();\n- for (ShardStats shard : shards) 
{\n- if (shard.getShardRouting().primary()) {\n- stats.add(shard.getStats());\n- }\n+ if (primary == null) {\n+ primary = ShardStats.calculatePrimaryStats(shards, flags);\n }\n- primary = stats;\n- return stats;\n+ return primary;\n }\n \n @Override\n@@ -98,6 +89,7 @@ public void readFrom(StreamInput in) throws IOException {\n for (int i = 0; i < shardSize; i++) {\n shards[i] = ShardStats.readShardStats(in);\n }\n+ flags = new CommonStatsFlags(in);\n }\n \n @Override\n@@ -107,6 +99,7 @@ public void writeTo(StreamOutput out) throws IOException {\n for (ShardStats stats : shards) {\n stats.writeTo(out);\n }\n+ flags.writeTo(out);\n }\n \n public static IndexShardStats readIndexShardStats(StreamInput in) throws IOException {", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/stats/IndexShardStats.java", "status": "modified" }, { "diff": "@@ -33,9 +33,27 @@ public class IndexStats implements Iterable<IndexShardStats> {\n \n private final ShardStats shards[];\n \n- public IndexStats(String index, ShardStats[] shards) {\n+ private final CommonStats total;\n+\n+ private final CommonStats primary;\n+\n+ private final Map<Integer, IndexShardStats> indexShards;\n+\n+ public IndexStats(String index, CommonStatsFlags flags, ShardStats[] shards) {\n this.index = index;\n this.shards = shards;\n+ this.total = ShardStats.calculateTotalStats(shards, flags);\n+ this.primary = ShardStats.calculatePrimaryStats(shards, flags);\n+ Map<Integer, List<ShardStats>> tmpIndexShards = new HashMap<>();\n+ for (ShardStats shard : shards) {\n+ List<ShardStats> shardStatList = tmpIndexShards.computeIfAbsent(shard.getShardRouting().id(), integer -> new ArrayList<>());\n+ shardStatList.add(shard);\n+ }\n+ Map<Integer, IndexShardStats> indexShardList = new HashMap<>();\n+ for (Map.Entry<Integer, List<ShardStats>> entry : tmpIndexShards.entrySet()) {\n+ indexShardList.put(entry.getKey(), new IndexShardStats(entry.getValue().get(0).getShardRouting().shardId(), flags, entry.getValue().toArray(new ShardStats[entry.getValue().size()])));\n+ }\n+ indexShards = indexShardList;\n }\n \n public String getIndex() {\n@@ -46,25 +64,8 @@ public ShardStats[] getShards() {\n return this.shards;\n }\n \n- private Map<Integer, IndexShardStats> indexShards;\n \n public Map<Integer, IndexShardStats> getIndexShards() {\n- if (indexShards != null) {\n- return indexShards;\n- }\n- Map<Integer, List<ShardStats>> tmpIndexShards = new HashMap<>();\n- for (ShardStats shard : shards) {\n- List<ShardStats> lst = tmpIndexShards.get(shard.getShardRouting().id());\n- if (lst == null) {\n- lst = new ArrayList<>();\n- tmpIndexShards.put(shard.getShardRouting().id(), lst);\n- }\n- lst.add(shard);\n- }\n- indexShards = new HashMap<>();\n- for (Map.Entry<Integer, List<ShardStats>> entry : tmpIndexShards.entrySet()) {\n- indexShards.put(entry.getKey(), new IndexShardStats(entry.getValue().get(0).getShardRouting().shardId(), entry.getValue().toArray(new ShardStats[entry.getValue().size()])));\n- }\n return indexShards;\n }\n \n@@ -73,33 +74,12 @@ public Iterator<IndexShardStats> iterator() {\n return getIndexShards().values().iterator();\n }\n \n- private CommonStats total = null;\n \n public CommonStats getTotal() {\n- if (total != null) {\n- return total;\n- }\n- CommonStats stats = new CommonStats();\n- for (ShardStats shard : shards) {\n- stats.add(shard.getStats());\n- }\n- total = stats;\n- return stats;\n+ return total;\n }\n \n- private CommonStats primary = null;\n-\n public CommonStats getPrimaries() {\n- if (primary != null) 
{\n- return primary;\n- }\n- CommonStats stats = new CommonStats();\n- for (ShardStats shard : shards) {\n- if (shard.getShardRouting().primary()) {\n- stats.add(shard.getStats());\n- }\n- }\n- primary = stats;\n- return stats;\n+ return primary;\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/stats/IndexStats.java", "status": "modified" }, { "diff": "@@ -265,6 +265,10 @@ public IndicesStatsRequest includeSegmentFileSizes(boolean includeSegmentFileSiz\n return this;\n }\n \n+ protected CommonStatsFlags getFlags() {\n+ return flags;\n+ }\n+\n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/stats/IndicesStatsRequest.java", "status": "modified" }, { "diff": "@@ -46,13 +46,16 @@ public class IndicesStatsResponse extends BroadcastResponse implements ToXConten\n \n private Map<ShardRouting, ShardStats> shardStatsMap;\n \n+ private CommonStatsFlags flags;\n+\n IndicesStatsResponse() {\n \n }\n \n- IndicesStatsResponse(ShardStats[] shards, int totalShards, int successfulShards, int failedShards, List<ShardOperationFailedException> shardFailures) {\n+ IndicesStatsResponse(CommonStatsFlags flags, ShardStats[] shards, int totalShards, int successfulShards, int failedShards, List<ShardOperationFailedException> shardFailures) {\n super(totalShards, successfulShards, failedShards, shardFailures);\n this.shards = shards;\n+ this.flags = flags;\n }\n \n public Map<ShardRouting, ShardStats> asMap() {\n@@ -98,7 +101,7 @@ public Map<String, IndexStats> getIndices() {\n shards.add(shard);\n }\n }\n- indicesStats.put(indexName, new IndexStats(indexName, shards.toArray(new ShardStats[shards.size()])));\n+ indicesStats.put(indexName, new IndexStats(indexName, flags, shards.toArray(new ShardStats[shards.size()])));\n }\n this.indicesStats = indicesStats;\n return indicesStats;\n@@ -107,31 +110,19 @@ public Map<String, IndexStats> getIndices() {\n private CommonStats total = null;\n \n public CommonStats getTotal() {\n- if (total != null) {\n- return total;\n- }\n- CommonStats stats = new CommonStats();\n- for (ShardStats shard : shards) {\n- stats.add(shard.getStats());\n+ if (total == null) {\n+ total = ShardStats.calculateTotalStats(shards, flags);\n }\n- total = stats;\n- return stats;\n+ return total;\n }\n \n private CommonStats primary = null;\n \n public CommonStats getPrimaries() {\n- if (primary != null) {\n- return primary;\n- }\n- CommonStats stats = new CommonStats();\n- for (ShardStats shard : shards) {\n- if (shard.getShardRouting().primary()) {\n- stats.add(shard.getStats());\n- }\n+ if (primary == null) {\n+ primary = ShardStats.calculatePrimaryStats(shards, flags);\n }\n- primary = stats;\n- return stats;\n+ return primary;\n }\n \n @Override\n@@ -141,6 +132,7 @@ public void readFrom(StreamInput in) throws IOException {\n for (int i = 0; i < shards.length; i++) {\n shards[i] = ShardStats.readShardStats(in);\n }\n+ flags = new CommonStatsFlags(in);\n }\n \n @Override\n@@ -150,6 +142,7 @@ public void writeTo(StreamOutput out) throws IOException {\n for (ShardStats shard : shards) {\n shard.writeTo(out);\n }\n+ flags.writeTo(out);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/stats/IndicesStatsResponse.java", "status": "modified" }, { "diff": "@@ -54,6 +54,30 @@ public ShardStats(ShardRouting routing, ShardPath shardPath, CommonStats commonS\n this.commonStats = commonStats;\n }\n \n+ /** calculates 
primary stats for shard stats */\n+ static CommonStats calculatePrimaryStats(ShardStats[] shards, CommonStatsFlags flags) {\n+ CommonStats primaryStats = new CommonStats();\n+ boolean primaryFound = false;\n+ for (ShardStats shard : shards) {\n+ if (shard.getShardRouting().primary()) {\n+ primaryStats.add(shard.getStats());\n+ primaryFound = true;\n+ }\n+ }\n+ return primaryFound ? primaryStats : new CommonStats(flags);\n+ }\n+\n+ /** calculates total stats for shard stats */\n+ static CommonStats calculateTotalStats(ShardStats[] shards, CommonStatsFlags flags) {\n+ CommonStats totalStats = new CommonStats();\n+ boolean shardFound = false;\n+ for (ShardStats shard : shards) {\n+ totalStats.add(shard.getStats());\n+ shardFound = true;\n+ }\n+ return shardFound ? totalStats : new CommonStats(flags);\n+ }\n+\n /**\n * The shard routing information (cluster wide shard state).\n */", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java", "status": "modified" }, { "diff": "@@ -82,7 +82,7 @@ protected ShardStats readShardResult(StreamInput in) throws IOException {\n \n @Override\n protected IndicesStatsResponse newResponse(IndicesStatsRequest request, int totalShards, int successfulShards, int failedShards, List<ShardStats> responses, List<ShardOperationFailedException> shardFailures, ClusterState clusterState) {\n- return new IndicesStatsResponse(responses.toArray(new ShardStats[responses.size()]), totalShards, successfulShards, failedShards, shardFailures);\n+ return new IndicesStatsResponse(request.getFlags(), responses.toArray(new ShardStats[responses.size()]), totalShards, successfulShards, failedShards, shardFailures);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/stats/TransportIndicesStatsAction.java", "status": "modified" }, { "diff": "@@ -52,17 +52,10 @@ private CommitStats() {\n \n }\n \n- public static CommitStats readCommitStatsFrom(StreamInput in) throws IOException {\n- CommitStats commitStats = new CommitStats();\n- commitStats.readFrom(in);\n- return commitStats;\n- }\n-\n public static CommitStats readOptionalCommitStatsFrom(StreamInput in) throws IOException {\n return in.readOptionalStreamable(CommitStats::new);\n }\n \n-\n public Map<String, String> getUserData() {\n return userData;\n }", "filename": "core/src/main/java/org/elasticsearch/index/engine/CommitStats.java", "status": "modified" }, { "diff": "@@ -290,7 +290,8 @@ public NodeIndicesStats stats(boolean includePrevious, CommonStatsFlags flags) {\n if (indexShard.routingEntry() == null) {\n continue;\n }\n- IndexShardStats indexShardStats = new IndexShardStats(indexShard.shardId(), new ShardStats[] { new ShardStats(indexShard.routingEntry(), indexShard.shardPath(), new CommonStats(indicesQueryCache, indexShard, flags), indexShard.commitStats()) });\n+ IndexShardStats indexShardStats = new IndexShardStats(indexShard.shardId(), flags,\n+ new ShardStats[] { new ShardStats(indexShard.routingEntry(), indexShard.shardPath(), new CommonStats(indicesQueryCache, indexShard, flags), indexShard.commitStats()) });\n if (!statsByShard.containsKey(indexService.index())) {\n statsByShard.put(indexService.index(), arrayAsArrayList(indexShardStats));\n } else {", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java", "status": "modified" }, { "diff": "@@ -230,17 +230,13 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n private Map<Index, CommonStats> createStatsByIndex() {\n Map<Index, 
CommonStats> statsMap = new HashMap<>();\n for (Map.Entry<Index, List<IndexShardStats>> entry : statsByShard.entrySet()) {\n- if (!statsMap.containsKey(entry.getKey())) {\n- statsMap.put(entry.getKey(), new CommonStats());\n- }\n-\n+ CommonStats indexStats = statsMap.computeIfAbsent(entry.getKey(), index -> new CommonStats());\n for (IndexShardStats indexShardStats : entry.getValue()) {\n for (ShardStats shardStats : indexShardStats.getShards()) {\n- statsMap.get(entry.getKey()).add(shardStats.getStats());\n+ indexStats.add(shardStats.getStats());\n }\n }\n }\n-\n return statsMap;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/indices/NodeIndicesStats.java", "status": "modified" }, { "diff": "@@ -114,9 +114,9 @@ public void testCommitStats() throws Exception {\n /**\n * Gives access to package private IndicesStatsResponse constructor for test purpose.\n **/\n- public static IndicesStatsResponse newIndicesStatsResponse(ShardStats[] shards, int totalShards, int successfulShards,\n+ public static IndicesStatsResponse newIndicesStatsResponse(CommonStatsFlags flags, ShardStats[] shards, int totalShards, int successfulShards,\n int failedShards, List<ShardOperationFailedException> shardFailures) {\n- return new IndicesStatsResponse(shards, totalShards, successfulShards, failedShards, shardFailures);\n+ return new IndicesStatsResponse(flags, shards, totalShards, successfulShards, failedShards, shardFailures);\n }\n \n }", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/stats/IndicesStatsTests.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.indices.stats.CommonStats;\n+import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags;\n import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n import org.elasticsearch.action.admin.indices.stats.IndicesStatsTests;\n import org.elasticsearch.action.admin.indices.stats.ShardStats;\n@@ -143,25 +144,28 @@ private IndicesStatsResponse randomIndicesStatsResponse(final Index[] indices) {\n );\n shardRouting = shardRouting.initialize(\"node-0\", null, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE);\n shardRouting = shardRouting.moveToStarted();\n- CommonStats stats = new CommonStats();\n- stats.fieldData = new FieldDataStats();\n- stats.queryCache = new QueryCacheStats();\n- stats.docs = new DocsStats();\n- stats.store = new StoreStats();\n- stats.indexing = new IndexingStats();\n- stats.search = new SearchStats();\n- stats.segments = new SegmentsStats();\n- stats.merge = new MergeStats();\n- stats.refresh = new RefreshStats();\n- stats.completion = new CompletionStats();\n- stats.requestCache = new RequestCacheStats();\n- stats.get = new GetStats();\n- stats.flush = new FlushStats();\n- stats.warmer = new WarmerStats();\n+ CommonStats stats = new CommonStats(CommonStatsFlags.ALL);\n+ // rarely none of the stats fields would be initialized due to the index missing all shards for reporting stats\n+ if (frequently()) {\n+ stats.fieldData = new FieldDataStats();\n+ stats.queryCache = new QueryCacheStats();\n+ stats.docs = new DocsStats();\n+ stats.store = new StoreStats();\n+ stats.indexing = new IndexingStats();\n+ stats.search = new SearchStats();\n+ stats.segments = new SegmentsStats();\n+ stats.merge = new MergeStats();\n+ stats.refresh = new RefreshStats();\n+ stats.completion = new CompletionStats();\n+ stats.requestCache = new 
RequestCacheStats();\n+ stats.get = new GetStats();\n+ stats.flush = new FlushStats();\n+ stats.warmer = new WarmerStats();\n+ }\n shardStats.add(new ShardStats(shardRouting, new ShardPath(false, path, path, shardId), stats, null));\n }\n }\n- return IndicesStatsTests.newIndicesStatsResponse(\n+ return IndicesStatsTests.newIndicesStatsResponse(CommonStatsFlags.ALL,\n shardStats.toArray(new ShardStats[shardStats.size()]), shardStats.size(), shardStats.size(), 0, emptyList()\n );\n }", "filename": "core/src/test/java/org/elasticsearch/rest/action/cat/RestIndicesActionTests.java", "status": "modified" } ] }
{ "body": "There is an inconsistency in `match` query in simple form and with `query` parameter. The latter gives an erroneous result with a list input whereas the former refuses the list input with parse error.\n\n```\nGET /.../.../_search\n{\n \"query\": {\n \"match\": {\n \"skills\": [\"python\", \"ruby\"]\n }\n }\n}\n```\n\nresults an parse error, as expected.\n\n```\nGET /.../.../_search\n{\n \"query\": {\n \"match\": {\n \"skills\": {\n \"query\": [\"python\", \"ruby\"]\n }\n }\n }\n}\n```\n\ngives the same output as with `\"query\": [\"xxx\",\"yyy\", \"ruby\"]` or `\"query\": [\"ruby\"]`, considering ONLY the last item of the list and ignoring the rest.\n", "comments": [ { "body": "This is broke in 2.2, but may already be fixed in master?\n", "created_at": "2016-01-10T17:25:07Z" }, { "body": "I can confirm the above incorrect behavior on both 2.1.1 and 2.2.\n\nOn current `master` (4c1e93bd89c), both forms raise errors, albeit slightly differently:\n\n#### Shorthand form\n\n``` sh\ncurl -X POST 'http://localhost:9200/company/employee/_search?pretty=true' -d '\n{\n \"query\": {\n \"match\": {\n \"skills\": [\"python\", \"ruby\"]\n }\n }\n}'\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"illegal_state_exception\",\n \"reason\" : \"Can't get text on a START_ARRAY at 5:13\"\n } ],\n \"type\" : \"illegal_state_exception\",\n \"reason\" : \"Can't get text on a START_ARRAY at 5:13\"\n },\n \"status\" : 500\n}\n```\n\n#### With \"query\" param\n\n``` sh\ncurl -X POST 'http://localhost:9200/company/employee/_search?pretty=true' -d '\n{\n \"query\": {\n \"match\": {\n \"skills\": {\n \"query\": [\"python\", \"ruby\"]\n }\n }\n }\n}'\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"parsing_exception\",\n \"reason\" : \"[match] unknown token [START_ARRAY] after [query]\",\n \"line\" : 6,\n \"col\" : 17\n } ],\n \"type\" : \"parsing_exception\",\n \"reason\" : \"[match] unknown token [START_ARRAY] after [query]\",\n \"line\" : 6,\n \"col\" : 17\n },\n \"status\" : 400\n}\n```\n", "created_at": "2016-01-25T20:10:55Z" }, { "body": "you should use terms to match mutiply items.\nGET /.../.../_search\n{\n \"query\": {\n \"terms\": {\n \"skills\": [\"python\", \"ruby\"] \n }\n }\n}\n", "created_at": "2016-08-04T01:47:12Z" }, { "body": "fixed in 2.4 branch with https://github.com/elastic/elasticsearch/commit/6e5260d2c493a1461f9180b5004eceddce1e89c9.\n\nAlso added specific tests in master: https://github.com/elastic/elasticsearch/commit/7894eba2b3fd17bd035b91ec02c1205ef738d6e0.\n", "created_at": "2016-09-13T11:20:14Z" } ], "number": 15741, "title": "\"match\" with \"query\" parameter accepts list input and gives wrong results " }
{ "body": "Fail parsing when match query contains an array of terms. This is already the case in master and 5.0 branches, we made the change as part of the query refactoring. It is worth to backport this tiny fix and fail given that only one of the terms in the array is considered at the moment.\n\nCloses #15741\n", "number": 20445, "review_comments": [], "title": "Fail parsing when match query contains an array of terms" }
{ "commits": [ { "message": "Fail parsing when match query contains an array of terms\n\nCloses #15741" } ], "files": [ { "diff": "@@ -145,6 +145,9 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n } else {\n throw new QueryParsingException(parseContext, \"[match] query does not support [\" + currentFieldName + \"]\");\n }\n+ } else {\n+ throw new QueryParsingException(parseContext,\n+ \"[\" + NAME + \"] unknown token [\" + token + \"] after [\" + currentFieldName + \"]\");\n }\n }\n parser.nextToken();", "filename": "core/src/main/java/org/elasticsearch/index/query/MatchQueryParser.java", "status": "modified" }, { "diff": "@@ -2624,7 +2624,37 @@ public void testSimpleQueryStringNoFields() throws Exception {\n assertThat(termQuery.getTerm(), equalTo(new Term(MetaData.ALL, queryText)));\n }\n \n- private void assertGeoDistanceRangeQuery(IndexQueryParserService queryParser, Query query, double lat, double lon, double distance, DistanceUnit distanceUnit) throws IOException {\n+ @Test\n+ public void testMatchQueryParseFailsWithTermsArray() throws Exception {\n+ IndexQueryParserService queryParser = queryParser();\n+ String json1 = \"{\\n\" +\n+ \" \\\"match\\\" : {\\n\" +\n+ \" \\\"message1\\\" : {\\n\" +\n+ \" \\\"query\\\" : [\\\"term1\\\", \\\"term2\\\"]\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+ try {\n+ queryParser.parse(json1);\n+ fail(\"parse should have failed\");\n+ } catch(QueryParsingException e) {\n+ //all good\n+ }\n+\n+ String json2 = \"{\\n\" +\n+ \" \\\"match\\\" : {\\n\" +\n+ \" \\\"message1\\\" : [\\\"term1\\\", \\\"term2\\\"]\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+ try {\n+ queryParser.parse(json2);\n+ fail(\"parse should have failed\");\n+ } catch(QueryParsingException e) {\n+ //all good\n+ }\n+ }\n+\n+ private static void assertGeoDistanceRangeQuery(IndexQueryParserService queryParser, Query query, double lat, double lon, double distance, DistanceUnit distanceUnit) throws IOException {\n if (queryParser.getIndexCreatedVersion().before(Version.V_2_2_0)) {\n assertThat(query, instanceOf(GeoDistanceRangeQuery.class));\n GeoDistanceRangeQuery q = (GeoDistanceRangeQuery) query;\n@@ -2643,7 +2673,7 @@ private void assertGeoDistanceRangeQuery(IndexQueryParserService queryParser, Qu\n }\n }\n \n- private void assertGeoBBoxQuery(IndexQueryParserService queryParser, Query query, double maxLat, double minLon, double minLat, double maxLon) {\n+ private static void assertGeoBBoxQuery(IndexQueryParserService queryParser, Query query, double maxLat, double minLon, double minLat, double maxLon) {\n if (queryParser.getIndexCreatedVersion().before(Version.V_2_2_0)) {\n assertThat(query, instanceOf(InMemoryGeoBoundingBoxQuery.class));\n InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) query;\n@@ -2671,7 +2701,7 @@ private void assertGeoBBoxQuery(IndexQueryParserService queryParser, Query quer\n }\n }\n \n- private void assertGeoPolygonQuery(IndexQueryParserService queryParser, Query query) {\n+ private static void assertGeoPolygonQuery(IndexQueryParserService queryParser, Query query) {\n if (queryParser.getIndexCreatedVersion().before(Version.V_2_2_0)) {\n assertThat(query, instanceOf(GeoPolygonQuery.class));\n GeoPolygonQuery filter = (GeoPolygonQuery) query;", "filename": "core/src/test/java/org/elasticsearch/index/query/SimpleIndexQueryParserTests.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 2.4.0\n\n**Plugins installed**: [Marvel,Head,License,Delete-by-query]\n\n**JVM version**: 1.8.0_91\n\n**OS version**: linux ubuntu 14.04 - 4.5.7\n\n**Description of the problem including expected versus actual behavior**:\n\nI have 8 indexes, called url-0,url-1,...,url-8 with around 1B documents.\nThe template for each index is \n\n```\n{\n \"template\" : \"url-*\",\n \"settings\" : { \n \"number_of_shards\" : \"1\",\n \"number_of_replicas\" : \"0\",\n \"refresh_interval\" : \"180s\"\n },\n \"mappings\" : {\n \"url\" : {\n \"_all\" : {\"enabled\" : false},\n \"_timestamp\" : {\"enabled\" : false},\n \"_source\": {\"enabled\": false},\n \"_ttl\" : { \"enabled\" : false },\n \"properties\" : {\n \"url\" : {\"type\" : \"string\", \"index\": \"not_analyzed\", \"store\" : true},\n \"domain\" : {\"type\": \"string\", \"index\" : \"not_analyzed\", \"store\" : false},\n \"scheme\" : {\"type\": \"string\", \"index\" : \"not_analyzed\", \"store\" : false},\n \"parsedUrl\" : {\"type\": \"string\", \"index\": \"analyzed\", \"store\": false},\n \"workerIndex\": {\"type\": \"integer\", \"store\" : false},\n \"error\": {\"type\": \"boolean\", \"store\" : false},\n \"banned\": {\"type\": \"boolean\", \"store\" : false},\n \"timestamp\": {\"type\": \"date\", \"store\" : true},\n \"depth\": {\"type\": \"integer\", \"store\" : true},\n \"processed\": {\"type\": \"boolean\", \"store\" : false}\n\n }\n }\n }\n```\n\nWhen I execute the query\n\n`curl -XPOST localhost:9200/url/url/_search?search_type=scan&scroll=5m&size=50&_source=url&preference=_shards:0;_prefer_node:XqfWWAGFRdCPo98qIeFfGg`\n\nI get\n\n```\n{\n\n \"_scroll_id\": \"c2Nhbjs4OzMyNzUyNzpYcWZXV0FHRlJkQ1BvOThxSWVGZkdnOzQ3MTkzODplTDZZa3AxSlRVQ3lHd1FMTjR3T3NnOzgxMzY4ODpwSHVGaVJRM1JSYTIwaWRxNTlFbXNBOzgxMzY4NzpwSHVGaVJRM1JSYTIwaWRxNTlFbXNBOzgxMzY4NjpwSHVGaVJRM1JSYTIwaWRxNTlFbXNBOzQ3MTkzOTplTDZZa3AxSlRVQ3lHd1FMTjR3T3NnOzMxNDc1NDpqbUdTWGRmWVRTdWFIT1FFSmZ1WVN3OzgxMzY4OTpwSHVGaVJRM1JSYTIwaWRxNTlFbXNBOzE7dG90YWxfaGl0czoxMDEzOTAyNTk1Ow==\",\n \"took\": 8221,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 8,\n \"successful\": 8,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 1013902595,\n \"max_score\": 0,\n \"hits\": [ ]\n }\n}\n```\n\nbut\n\nIf I query for the scrollId I get an error and a NPE on the server.\n\n```\nculr -XPOST localhost:9200/_search/scroll?scroll=1m -d '\n{\n \"scroll_id\": \"c2Nhbjs4OzMyNzUyNzpYcWZXV0FHRlJkQ1BvOThxSWVGZkdnOzQ3MTkzODplTDZZa3AxSlRVQ3lHd1FMTjR3T3NnOzgxMzY4ODpwSHVGaVJRM1JSYTIwaWRxNTlFbXNBOzgxMzY4NzpwSHVGaVJRM1JSYTIwaWRxNTlFbXNBOzgxMzY4NjpwSHVGaVJRM1JSYTIwaWRxNTlFbXNBOzQ3MTkzOTplTDZZa3AxSlRVQ3lHd1FMTjR3T3NnOzMxNDc1NDpqbUdTWGRmWVRTdWFIT1FFSmZ1WVN3OzgxMzY4OTpwSHVGaVJRM1JSYTIwaWRxNTlFbXNBOzE7dG90YWxfaGl0czoxMDEzOTAyNTk1Ow\"\n}'\n```\n\nresult:\n\n```\n{\n\n \"_scroll_id\": \"c2NhbjswOzE7dG90YWxfaGl0czoxMDEzOTAyNTk1Ow==\",\n \"took\": 105,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 8,\n \"successful\": 0,\n \"failed\": 8,\n \"failures\": [\n {\n \"shard\": -1,\n \"index\": null,\n \"reason\": {\n \"type\": \"null_pointer_exception\",\n \"reason\": null\n }\n }\n ]\n },\n \"hits\": {\n \"total\": 1013902595,\n \"max_score\": 0,\n \"hits\": [ ]\n }\n\n}\n```\n\nAnd in the logs:\n\n```\nRemoteTransportException[[es-1][][indices:data/read/search[phase/scan/scroll]]]; nested: NullPointerException;\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.search.fetch.source.FetchSourceSubPhase.hitExecute(FetchSourceSubPhase.java:79)\n at 
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:188)\n at org.elasticsearch.search.SearchService.executeScan(SearchService.java:342)\n```\n\nMaybe it is a problem related to the **_source** parameter in the query and the fact that the _source field is disabled in the mapping. If I remove the **_source** field from the query, it works fine.\n\nPS: The above query was originated by Spark with the elasticsearch-spark_2.11 (v.2.4.0) library.\n", "comments": [ { "body": "This looks very similar to #20093\n", "created_at": "2016-09-12T07:15:24Z" }, { "body": "Yes I agree. It affects v2.4 too. \n", "created_at": "2016-09-12T08:50:07Z" }, { "body": "thanks for pointing this out. Fixed by #20093 . Fix will be part of the next 5.0 release. 5.0.0-alpha5 is still affected. \n\nThe problem manifested only when _source was asked for explicitly. I am not too happy with not returning any error and simply skipping the _source part of the request, I would have rather thrown an error there. Thoughts?\n", "created_at": "2016-09-12T09:32:19Z" }, { "body": "> The problem manifested only when _source was asked for explicitly. I am not too happy with not returning any error and simply skipping the _source part of the request, I would have rather thrown an error there. Thoughts?\n\nI hate to say that but I think it should be configurable on a per request basis? When querying multiple indices you might want to return `foo` for some docs and `bar` for other docs while searching on a third field. And sometimes you query well defined indices and a missing source field must just throw an error in that case.\n", "created_at": "2016-09-12T09:45:52Z" }, { "body": "@tlrx what should be configurable? The behaviour? Either throw an error or skip without barfing? I am afraid I don't follow why skipping some part of the request that cannot be performed would help. You are going to run into problems when reading the response anyway as some part will be missing.\n", "created_at": "2016-09-12T10:02:11Z" }, { "body": "In my case, _source is disabled but some fields are still stored. So it is definitely still possible to retrieve some fields event though _source is disabled.\n", "created_at": "2016-09-12T10:17:55Z" }, { "body": "@marfago then you should use `fields` rather than `_source`. `_source` will always try to extract fields out of the source field and will return only those rather than the whole source. Fields will instead get the stored fields from lucene directly. Not sure whether the spark integration supports any of this.\n", "created_at": "2016-09-12T10:21:43Z" }, { "body": "@javanna thank you for the clarification. 
\nWhat if:\n1)keep as default the current behaviour, just handling/wrapping the NPE with a more meaningful message/exception \n2)introduce a boolean query parameter (a sort of ignore_disabled_source) in order to change behaviour and silently ignore indexes with disabled _source\n3)open a new issue in elasticsearch-spark for handling the \"fields\" parameter.\n", "created_at": "2016-09-12T11:31:41Z" }, { "body": "> 1)keep as default the current behaviour, just handling/wrapping the NPE with a more meaningful message/exception \n\nthe current behaviour after the fix is \"ignore the _source parameter if _source is disabled\", I don't like that\n\n> 2)introduce a boolean query parameter (a sort of ignore_disabled_source) in order to change behaviour and silently ignore indexes with disabled _source\n\nI tend to be against this, I don't think it is worth adding yet another request parameter for such an edge case, in the spirit of #11172.\n\n> 3)open a new issue in elasticsearch-spark for handling the \"fields\" parameter.\n\nthanks for doing that ;)\n", "created_at": "2016-09-12T12:07:20Z" }, { "body": "> I tend to be against this, I don't think it is worth adding yet another request parameter for such an edge case, in the spirit of #11172.\n\nThis is a very good point @javanna and the rationale behind #11172 is good too. I agree we should throw an error here but I'd like to be sure how we handle similar case with `fields` or `stored_fields`.\n", "created_at": "2016-09-12T12:21:42Z" }, { "body": "thanks Tanguy, maybe @clintongormley has some opinion too on this\n", "created_at": "2016-09-12T12:23:04Z" }, { "body": "++ to what @javanna said. If you disable source and you use a feature that needs it you fail. Hard!\n", "created_at": "2016-09-12T12:59:42Z" }, { "body": "@javanna and @s1monw In my use case (see the description), the first query succeeds returning an apparently valid scrollId, then the second query fails because the _source is disabled. \nI think it should fail earlier, preventing the scrollId to be created.\n", "created_at": "2016-09-12T14:51:09Z" }, { "body": "I see what you mean @marfago , I am afraid what you request is not that easy to achieve on our end, but I will check whether we already have some validation mechanism in place that we can leverage. The problem is that we throw the error once we try and fetch fields from the source, which is part of the fetch phase. The first phase of search, query, works without any issue. Once the top hits need to be fetched, the problem arises. That is why the first scroll request succeeds in your case, cause there is no fetch phase as part of that first step.\n", "created_at": "2016-09-12T15:03:15Z" } ], "number": 20408, "title": "NPE when fetching source from an index with _source disabled" }
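The thread above ends with the advice to request stored fields directly rather than filtering a disabled `_source`. A minimal sketch of that workaround against the `url-*` template from the report, using the 2.x `fields` body parameter (renamed to `stored_fields` in 5.0); only fields mapped with `"store": true`, such as `url`, `timestamp` and `depth`, can be returned this way, and the index/type names follow the original request:

```
POST url/url/_search
{
  "size": 50,
  "query": { "match_all": {} },
  "fields": ["url", "timestamp", "depth"]
}
```

Anything that is neither stored nor covered by an enabled `_source` cannot be returned through either of these two parameters.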
{ "body": "With #20093 we fixed a NPE thrown when using _source include/exclude and source is disabled in the mappings. Fixing meant ignoring the _source parameter in the request as no fields can be extracted from it.\n\nWe should rather throw a clear exception to point out that we cannot extract fields from _source. Note that this happens only when explicitly trying to extract fields from source. When source is disabled and no _source parameter is specified, no errors will be thrown and no source will be returned.\n\nCloses #20408\n", "number": 20424, "review_comments": [], "title": "Throw error when trying to fetch fields from source and source is disabled" }
{ "commits": [ { "message": "With #20093 we fixed a NPE thrown when using _source include/exclude and source is disabled in the mappings. Fixing meant ignoring the _source parameter in the request as no fields can be extracted from it.\n\nWe should rather throw a clear exception to clearly point out that we cannot extract fields from _source. Note that this happens only when explicitly trying to extract fields from source. When source is disabled and no _source parameter is specified, no errors will be thrown and no source will be returned.\n\nCloses #20408\nRelates to #20093" } ], "files": [ { "diff": "@@ -36,16 +36,18 @@ public void hitExecute(SearchContext context, HitContext hitContext) {\n return;\n }\n SourceLookup source = context.lookup().source();\n- if (source.internalSourceRef() == null) {\n- return; // source disabled in the mapping\n- }\n FetchSourceContext fetchSourceContext = context.fetchSourceContext();\n assert fetchSourceContext.fetchSource();\n if (fetchSourceContext.includes().length == 0 && fetchSourceContext.excludes().length == 0) {\n hitContext.hit().sourceRef(source.internalSourceRef());\n return;\n }\n \n+ if (source.internalSourceRef() == null) {\n+ throw new IllegalArgumentException(\"unable to fetch fields from _source field: _source is disabled in the mappings \" +\n+ \"for index [\" + context.indexShard().shardId().getIndexName() + \"]\");\n+ }\n+\n Object value = source.filter(fetchSourceContext.includes(), fetchSourceContext.excludes());\n try {\n final int initialCapacity = Math.min(1024, source.internalSourceRef().length());", "filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhase.java", "status": "modified" }, { "diff": "@@ -23,6 +23,8 @@\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.search.fetch.FetchSubPhase;\n import org.elasticsearch.search.internal.InternalSearchHit;\n import org.elasticsearch.search.internal.SearchContext;\n@@ -33,36 +35,10 @@\n import java.io.IOException;\n import java.util.Collections;\n \n-public class FetchSourceSubPhaseTests extends ESTestCase {\n-\n- static class FetchSourceSubPhaseTestSearchContext extends TestSearchContext {\n+import static org.mockito.Mockito.mock;\n+import static org.mockito.Mockito.when;\n \n- FetchSourceContext context;\n- BytesReference source;\n-\n- FetchSourceSubPhaseTestSearchContext(FetchSourceContext context, BytesReference source) {\n- super(null);\n- this.context = context;\n- this.source = source;\n- }\n-\n- @Override\n- public boolean sourceRequested() {\n- return context != null && context.fetchSource();\n- }\n-\n- @Override\n- public FetchSourceContext fetchSourceContext() {\n- return context;\n- }\n-\n- @Override\n- public SearchLookup lookup() {\n- SearchLookup lookup = super.lookup();\n- lookup.source().setSource(source);\n- return lookup;\n- }\n- }\n+public class FetchSourceSubPhaseTests extends ESTestCase {\n \n public void testFetchSource() throws IOException {\n XContentBuilder source = XContentFactory.jsonBuilder().startObject()\n@@ -109,11 +85,14 @@ public void testSourceDisabled() throws IOException {\n hitContext = hitExecute(null, false, null, null);\n assertNull(hitContext.hit().sourceAsMap());\n \n- hitContext = hitExecute(null, true, \"field1\", null);\n- 
assertNull(hitContext.hit().sourceAsMap());\n+ IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> hitExecute(null, true, \"field1\", null));\n+ assertEquals(\"unable to fetch fields from _source field: _source is disabled in the mappings \" +\n+ \"for index [index]\", exception.getMessage());\n \n- hitContext = hitExecuteMultiple(null, true, new String[]{\"*\"}, new String[]{\"field2\"});\n- assertNull(hitContext.hit().sourceAsMap());\n+ exception = expectThrows(IllegalArgumentException.class,\n+ () -> hitExecuteMultiple(null, true, new String[]{\"*\"}, new String[]{\"field2\"}));\n+ assertEquals(\"unable to fetch fields from _source field: _source is disabled in the mappings \" +\n+ \"for index [index]\", exception.getMessage());\n }\n \n private FetchSubPhase.HitContext hitExecute(XContentBuilder source, boolean fetchSource, String include, String exclude) {\n@@ -131,4 +110,40 @@ private FetchSubPhase.HitContext hitExecuteMultiple(XContentBuilder source, bool\n phase.hitExecute(searchContext, hitContext);\n return hitContext;\n }\n+\n+ private static class FetchSourceSubPhaseTestSearchContext extends TestSearchContext {\n+ final FetchSourceContext context;\n+ final BytesReference source;\n+ final IndexShard indexShard;\n+\n+ FetchSourceSubPhaseTestSearchContext(FetchSourceContext context, BytesReference source) {\n+ super(null);\n+ this.context = context;\n+ this.source = source;\n+ this.indexShard = mock(IndexShard.class);\n+ when(indexShard.shardId()).thenReturn(new ShardId(\"index\", \"index\", 1));\n+ }\n+\n+ @Override\n+ public boolean sourceRequested() {\n+ return context != null && context.fetchSource();\n+ }\n+\n+ @Override\n+ public FetchSourceContext fetchSourceContext() {\n+ return context;\n+ }\n+\n+ @Override\n+ public SearchLookup lookup() {\n+ SearchLookup lookup = super.lookup();\n+ lookup.source().setSource(source);\n+ return lookup;\n+ }\n+\n+ @Override\n+ public IndexShard indexShard() {\n+ return indexShard;\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhaseTests.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n package org.elasticsearch.search.fetch.subphase.highlight;\n \n import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n-\n import org.apache.lucene.search.join.ScoreMode;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n@@ -50,8 +49,8 @@\n import org.hamcrest.Matchers;\n \n import java.io.IOException;\n-import java.util.Arrays;\n import java.util.Collection;\n+import java.util.Collections;\n import java.util.HashMap;\n import java.util.Map;\n \n@@ -96,7 +95,7 @@ public class HighlighterSearchIT extends ESIntegTestCase {\n \n @Override\n protected Collection<Class<? extends Plugin>> nodePlugins() {\n- return Arrays.asList(InternalSettingsPlugin.class);\n+ return Collections.singletonList(InternalSettingsPlugin.class);\n }\n \n public void testHighlightingWithWildcardName() throws IOException {", "filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java", "status": "modified" }, { "diff": "@@ -80,7 +80,7 @@ public class SearchFieldsIT extends ESIntegTestCase {\n \n @Override\n protected Collection<Class<? 
extends Plugin>> nodePlugins() {\n- return Arrays.asList(CustomScriptPlugin.class);\n+ return Collections.singletonList(CustomScriptPlugin.class);\n }\n \n public static class CustomScriptPlugin extends MockScriptPlugin {", "filename": "core/src/test/java/org/elasticsearch/search/fields/SearchFieldsIT.java", "status": "modified" } ] }
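With the change above, explicitly asking for `_source` includes or excludes on an index whose mapping disables `_source` should now fail during the fetch phase instead of being silently ignored. A rough sketch of a request that would be rejected, assuming a hypothetical index `test` mapped with `"_source": {"enabled": false}`; the resulting shard failure wraps the `IllegalArgumentException` message added in the diff:

```
POST test/_search
{
  "query": { "match_all": {} },
  "_source": ["field1"]
}
```

Omitting `_source` entirely, or passing it without includes/excludes, still returns hits with no source and no error, as described in the PR body.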
{ "body": "Fix NPE during search with source filtering if the source is disabled.\nInstead of throwing an NPE, search response hits with source filtering will not contain the source if it is disabled in the mapping.\n\nTests pass: `gradle test` `gradle core:integTest`\n\nCloses #7758\n", "comments": [ { "body": "I mean the code LGTM but I wonder if we can start adding a unittest instead of an integration test for this? \n", "created_at": "2016-08-22T07:35:58Z" }, { "body": "@s1monw added unit tests. reverted integration test for disabled source.\nThe unit tests cover the disabled source scenario as well as the existing integration tests. Should we remove some of the other integration tests? Also I didn't get into randomized testing, complex doc sources, etc... I think these should be covered by [XContentMapValuesTests](https://github.com/elastic/elasticsearch/blob/master/core/src/test/java/org/elasticsearch/common/xcontent/support/XContentMapValuesTests.java) (tests the actual filter logic).\n", "created_at": "2016-08-26T19:03:46Z" }, { "body": "Looks good to me. I left a few style comments if you are interested in them, otherwise I can test locally and merge.\n", "created_at": "2016-08-26T19:05:21Z" }, { "body": "test this please\n", "created_at": "2016-08-26T19:28:42Z" }, { "body": "Awesome. @s1monw's kicked off a CI build for this.\n", "created_at": "2016-08-26T19:30:05Z" }, { "body": "@elasticmachine, test this please.\n", "created_at": "2016-08-26T19:32:56Z" }, { "body": "@qwerty4030, tests passed last night so I've merged. Thanks for fixing this!\n", "created_at": "2016-08-27T11:25:18Z" } ], "number": 20093, "title": "Fix NPE during search with source filtering if the source is disabled." }
{ "body": "With #20093 we fixed a NPE thrown when using _source include/exclude and source is disabled in the mappings. Fixing meant ignoring the _source parameter in the request as no fields can be extracted from it.\n\nWe should rather throw a clear exception to point out that we cannot extract fields from _source. Note that this happens only when explicitly trying to extract fields from source. When source is disabled and no _source parameter is specified, no errors will be thrown and no source will be returned.\n\nCloses #20408\n", "number": 20424, "review_comments": [], "title": "Throw error when trying to fetch fields from source and source is disabled" }
{ "commits": [ { "message": "With #20093 we fixed a NPE thrown when using _source include/exclude and source is disabled in the mappings. Fixing meant ignoring the _source parameter in the request as no fields can be extracted from it.\n\nWe should rather throw a clear exception to clearly point out that we cannot extract fields from _source. Note that this happens only when explicitly trying to extract fields from source. When source is disabled and no _source parameter is specified, no errors will be thrown and no source will be returned.\n\nCloses #20408\nRelates to #20093" } ], "files": [ { "diff": "@@ -36,16 +36,18 @@ public void hitExecute(SearchContext context, HitContext hitContext) {\n return;\n }\n SourceLookup source = context.lookup().source();\n- if (source.internalSourceRef() == null) {\n- return; // source disabled in the mapping\n- }\n FetchSourceContext fetchSourceContext = context.fetchSourceContext();\n assert fetchSourceContext.fetchSource();\n if (fetchSourceContext.includes().length == 0 && fetchSourceContext.excludes().length == 0) {\n hitContext.hit().sourceRef(source.internalSourceRef());\n return;\n }\n \n+ if (source.internalSourceRef() == null) {\n+ throw new IllegalArgumentException(\"unable to fetch fields from _source field: _source is disabled in the mappings \" +\n+ \"for index [\" + context.indexShard().shardId().getIndexName() + \"]\");\n+ }\n+\n Object value = source.filter(fetchSourceContext.includes(), fetchSourceContext.excludes());\n try {\n final int initialCapacity = Math.min(1024, source.internalSourceRef().length());", "filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhase.java", "status": "modified" }, { "diff": "@@ -23,6 +23,8 @@\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.search.fetch.FetchSubPhase;\n import org.elasticsearch.search.internal.InternalSearchHit;\n import org.elasticsearch.search.internal.SearchContext;\n@@ -33,36 +35,10 @@\n import java.io.IOException;\n import java.util.Collections;\n \n-public class FetchSourceSubPhaseTests extends ESTestCase {\n-\n- static class FetchSourceSubPhaseTestSearchContext extends TestSearchContext {\n+import static org.mockito.Mockito.mock;\n+import static org.mockito.Mockito.when;\n \n- FetchSourceContext context;\n- BytesReference source;\n-\n- FetchSourceSubPhaseTestSearchContext(FetchSourceContext context, BytesReference source) {\n- super(null);\n- this.context = context;\n- this.source = source;\n- }\n-\n- @Override\n- public boolean sourceRequested() {\n- return context != null && context.fetchSource();\n- }\n-\n- @Override\n- public FetchSourceContext fetchSourceContext() {\n- return context;\n- }\n-\n- @Override\n- public SearchLookup lookup() {\n- SearchLookup lookup = super.lookup();\n- lookup.source().setSource(source);\n- return lookup;\n- }\n- }\n+public class FetchSourceSubPhaseTests extends ESTestCase {\n \n public void testFetchSource() throws IOException {\n XContentBuilder source = XContentFactory.jsonBuilder().startObject()\n@@ -109,11 +85,14 @@ public void testSourceDisabled() throws IOException {\n hitContext = hitExecute(null, false, null, null);\n assertNull(hitContext.hit().sourceAsMap());\n \n- hitContext = hitExecute(null, true, \"field1\", null);\n- 
assertNull(hitContext.hit().sourceAsMap());\n+ IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> hitExecute(null, true, \"field1\", null));\n+ assertEquals(\"unable to fetch fields from _source field: _source is disabled in the mappings \" +\n+ \"for index [index]\", exception.getMessage());\n \n- hitContext = hitExecuteMultiple(null, true, new String[]{\"*\"}, new String[]{\"field2\"});\n- assertNull(hitContext.hit().sourceAsMap());\n+ exception = expectThrows(IllegalArgumentException.class,\n+ () -> hitExecuteMultiple(null, true, new String[]{\"*\"}, new String[]{\"field2\"}));\n+ assertEquals(\"unable to fetch fields from _source field: _source is disabled in the mappings \" +\n+ \"for index [index]\", exception.getMessage());\n }\n \n private FetchSubPhase.HitContext hitExecute(XContentBuilder source, boolean fetchSource, String include, String exclude) {\n@@ -131,4 +110,40 @@ private FetchSubPhase.HitContext hitExecuteMultiple(XContentBuilder source, bool\n phase.hitExecute(searchContext, hitContext);\n return hitContext;\n }\n+\n+ private static class FetchSourceSubPhaseTestSearchContext extends TestSearchContext {\n+ final FetchSourceContext context;\n+ final BytesReference source;\n+ final IndexShard indexShard;\n+\n+ FetchSourceSubPhaseTestSearchContext(FetchSourceContext context, BytesReference source) {\n+ super(null);\n+ this.context = context;\n+ this.source = source;\n+ this.indexShard = mock(IndexShard.class);\n+ when(indexShard.shardId()).thenReturn(new ShardId(\"index\", \"index\", 1));\n+ }\n+\n+ @Override\n+ public boolean sourceRequested() {\n+ return context != null && context.fetchSource();\n+ }\n+\n+ @Override\n+ public FetchSourceContext fetchSourceContext() {\n+ return context;\n+ }\n+\n+ @Override\n+ public SearchLookup lookup() {\n+ SearchLookup lookup = super.lookup();\n+ lookup.source().setSource(source);\n+ return lookup;\n+ }\n+\n+ @Override\n+ public IndexShard indexShard() {\n+ return indexShard;\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhaseTests.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n package org.elasticsearch.search.fetch.subphase.highlight;\n \n import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n-\n import org.apache.lucene.search.join.ScoreMode;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n@@ -50,8 +49,8 @@\n import org.hamcrest.Matchers;\n \n import java.io.IOException;\n-import java.util.Arrays;\n import java.util.Collection;\n+import java.util.Collections;\n import java.util.HashMap;\n import java.util.Map;\n \n@@ -96,7 +95,7 @@ public class HighlighterSearchIT extends ESIntegTestCase {\n \n @Override\n protected Collection<Class<? extends Plugin>> nodePlugins() {\n- return Arrays.asList(InternalSettingsPlugin.class);\n+ return Collections.singletonList(InternalSettingsPlugin.class);\n }\n \n public void testHighlightingWithWildcardName() throws IOException {", "filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java", "status": "modified" }, { "diff": "@@ -80,7 +80,7 @@ public class SearchFieldsIT extends ESIntegTestCase {\n \n @Override\n protected Collection<Class<? 
extends Plugin>> nodePlugins() {\n- return Arrays.asList(CustomScriptPlugin.class);\n+ return Collections.singletonList(CustomScriptPlugin.class);\n }\n \n public static class CustomScriptPlugin extends MockScriptPlugin {", "filename": "core/src/test/java/org/elasticsearch/search/fields/SearchFieldsIT.java", "status": "modified" } ] }
{ "body": "We skip GeoPointInBBoxQuery already but not when it was rewritten\nit appears as GeoPointMultiTermQuery and needs to be skipped as well.\n\nsee #17537\n\nThis pull request is against the 2.4 branch and a follow up to https://github.com/elastic/elasticsearch/issues/17537#issuecomment-246112229 . Unfortunately my assumption that the highlighter never gets to see a rewritten query (see https://github.com/elastic/elasticsearch/pull/18495#issue-156019922) was wrong and so in some cases such as the example provided by @ajayar it does. We need to check for that as well and skip the highlighting in this case. \n\nThe tests in this pull request do not reproduce the issue #17537 on the 2.4 branch for me and I believe this is due to an unrelated change in Lucene. I still think we should merge this to 2.4 anyway to be on the safe side. On 2.3 the tests reproduce the issue. In addition the discussion on #17537 is not finished so this pull request doe not close the issue yet.\n\nOn master the problem was fixed in Lucene (see https://issues.apache.org/jira/browse/LUCENE-7293) so we can remove the check in CustomQueryScorer completely. I will open a separate pull request for that.\n", "comments": [ { "body": "@brwe I opened a similar PR to fix a problem with prefix queries embedded in a function query. I think it should fix the problem you are seeing since it avoids the rewrite of the function query (and it's inner query). Can you take a look ?\nhttps://github.com/elastic/elasticsearch/pull/20400\n", "created_at": "2016-09-12T12:50:25Z" }, { "body": "@jimferenczi I am not sure if #20400 fixes this, it would still try to rewrite that inner geo query though?\n", "created_at": "2016-09-12T13:08:41Z" }, { "body": "LGTM \n", "created_at": "2016-09-12T13:09:32Z" }, { "body": "> @jimferenczi I am not sure if #20400 fixes this, it would still try to rewrite that inner geo query though?\n\n@s1monw true, but we could call `extract` instead of `super.extract` and get the expected behavior. I was just thinking about how we could avoid the class name check that this PR adds. Since this PR is for 2.x only (I've missed that point) I guess it's ok so LGTM too, in the mean time can I get a review for #20400 ? ;)\n", "created_at": "2016-09-12T13:17:23Z" }, { "body": "> I guess it's ok so LGTM too, in the mean time can I get a review for #20400 ?\n\nI did that before I commented here :)\n", "created_at": "2016-09-12T13:19:28Z" }, { "body": "@jimferenczi It seems like for this query this fixes it. But I still think it is better to check for the particular geo query when we highlight. Functionscore is just one of many and I think we should rather catch this problem for all cases even though the class check is rather ugly.\n", "created_at": "2016-09-12T13:22:34Z" }, { "body": "> I did that before I commented here :)\n\nhe he, thanks !\n\n> Functionscore is just one of many and I think we should rather catch this problem for all cases even though the class check is rather ugly.\n\nNot so many but I get your point. Maybe we should avoid rewrite completely ? I don't see where it could be beneficial for the highlighting. I may be completely wrong but is there examples where the rewrite solves an issue with highlighting. It seems to cause more harm than good. \n", "created_at": "2016-09-12T13:29:00Z" }, { "body": "> Maybe we should avoid rewrite completely ?\n\nHm, that will be tricky as queries are rewritten in org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract() so that is rather a Lucene thing? 
I do not know the code well enough though to know if that can be easily avoided. \n", "created_at": "2016-09-12T13:36:16Z" }, { "body": "@s1monw added a comment. Thanks for the review!\n", "created_at": "2016-09-12T13:36:53Z" } ], "number": 20412, "title": "skip GeoPointMultiTermQuery when highlighting" }
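The rewrite described above is easiest to hit when the geo query sits inside another query, for example a `function_score`, which is what the `testGeoFieldHighlightingWhenQueryGetsRewritten` test added by this change exercises. A hedged sketch of such a request against the `jobs` mapping used in that test (`loc` is a `geo_point`, `jd` a string field; the coordinates and index name are illustrative):

```
POST test/_search
{
  "query": {
    "function_score": {
      "query": {
        "bool": {
          "filter": {
            "geo_bounding_box": {
              "loc": {
                "top_left": { "lat": 48.93, "lon": 41.61 },
                "bottom_right": { "lat": -23.07, "lon": 113.61 }
              }
            }
          }
        }
      }
    }
  },
  "highlight": {
    "fields": { "jd": { "type": "plain" } }
  }
}
```

Without the `GeoPointMultiTermQuery` check, the plain highlighter sees the rewritten form of the bounding box query while extracting terms for `jd` and fails; with the check the geo clause is simply skipped and the search succeeds.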
{ "body": "This has been fixed in Lucene\nhttps://issues.apache.org/jira/browse/LUCENE-7293\nThis commit also adds the tests from #20412\n", "number": 20418, "review_comments": [], "title": "remove workaround for highlighter bug with geo queries" }
{ "commits": [ { "message": "remove workaround for highlighter bug with geo queries\n\nThis has been fixed in Lucene\nhttps://issues.apache.org/jira/browse/LUCENE-7293\nThis commit also adds the tests from #20412" } ], "files": [ { "diff": "@@ -90,11 +90,7 @@ protected void extractUnknownQuery(Query query,\n }\n \n protected void extract(Query query, float boost, Map<String, WeightedSpanTerm> terms) throws IOException {\n- if (query instanceof GeoPointInBBoxQuery) {\n- // skip all geo queries, see https://issues.apache.org/jira/browse/LUCENE-7293 and\n- // https://github.com/elastic/elasticsearch/issues/17537\n- return;\n- } else if (query instanceof HasChildQueryBuilder.LateParsingQuery) {\n+ if (query instanceof HasChildQueryBuilder.LateParsingQuery) {\n // skip has_child or has_parent queries, see: https://github.com/elastic/elasticsearch/issues/14999\n return;\n }", "filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/CustomQueryScorer.java", "status": "modified" }, { "diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.support.WriteRequest;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.settings.Settings.Builder;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -2756,6 +2757,45 @@ public void testGeoFieldHighlightingWithDifferentHighlighters() throws IOExcepti\n assertThat(search.getHits().getAt(0).highlightFields().get(\"text\").fragments().length, equalTo(1));\n }\n \n+ public void testGeoFieldHighlightingWhenQueryGetsRewritten() throws IOException {\n+ // same as above but in this example the query gets rewritten during highlighting\n+ // see https://github.com/elastic/elasticsearch/issues/17537#issuecomment-244939633\n+ XContentBuilder mappings = jsonBuilder();\n+ mappings.startObject();\n+ mappings.startObject(\"jobs\")\n+ .startObject(\"_all\")\n+ .field(\"enabled\", false)\n+ .endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"loc\")\n+ .field(\"type\", \"geo_point\")\n+ .endObject()\n+ .startObject(\"jd\")\n+ .field(\"type\", \"string\")\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ mappings.endObject();\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"jobs\", mappings));\n+ ensureYellow();\n+\n+ client().prepareIndex(\"test\", \"jobs\", \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"jd\", \"some आवश्यकता है- आर्य समाज अनाथालय, 68 सिविल लाइन्स, बरेली को एक पुरूष\" +\n+ \" रस text\")\n+ .field(\"loc\", \"12.934059,77.610741\").endObject())\n+ .get();\n+ refresh();\n+\n+ QueryBuilder query = QueryBuilders.functionScoreQuery(QueryBuilders.boolQuery().filter(QueryBuilders.geoBoundingBoxQuery(\"loc\")\n+ .setCorners(new GeoPoint(48.934059, 41.610741), new GeoPoint(-23.065941, 113.610741))));\n+ SearchResponse search = client().prepareSearch().setSource(\n+ new SearchSourceBuilder().query(query).highlighter(new HighlightBuilder().highlighterType(\"plain\").field(\"jd\"))).get();\n+ assertNoFailures(search);\n+ assertThat(search.getHits().totalHits(), equalTo(1L));\n+ }\n+\n+\n public void testKeywordFieldHighlighting() throws IOException {\n // check that keyword highlighting works\n XContentBuilder mappings = jsonBuilder();", "filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java", "status": "modified" }, { "diff": "@@ -68,8 +68,11 @@ public void 
checkGeoQueryHighlighting(Query geoQuery) throws IOException, Invali\n String fragment = highlighter.getBestFragment(fieldNameAnalyzer.tokenStream(\"text\", \"Arbitrary text field which should not cause \" +\n \"a failure\"), \"Arbitrary text field which should not cause a failure\");\n assertThat(fragment, equalTo(\"Arbitrary text field which should not cause a <B>failure</B>\"));\n- // TODO: This test will fail if we pass in an instance of GeoPointInBBoxQueryImpl too. Should we also find a way to work around that\n- // or can the query not be rewritten before it is passed into the highlighter?\n+ Query rewritten = boolQuery.rewrite(null);\n+ highlighter = new org.apache.lucene.search.highlight.Highlighter(new CustomQueryScorer(rewritten));\n+ fragment = highlighter.getBestFragment(fieldNameAnalyzer.tokenStream(\"text\", \"Arbitrary text field which should not cause \" +\n+ \"a failure\"), \"Arbitrary text field which should not cause a failure\");\n+ assertThat(fragment, equalTo(\"Arbitrary text field which should not cause a <B>failure</B>\"));\n }\n \n public void testGeoPointInBBoxQueryHighlighting() throws IOException, InvalidTokenOffsetsException {", "filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/PlainHighlighterTests.java", "status": "modified" } ] }
{ "body": "Hi guys,\n\nwe have upgraded ElasticSearch from 2.3.0 and reindexed our geolocations so the latitude and longitude are stored separately. We have noticed that some of our visualisation started to fail after we add a filter based on geolocation rectangle. However, map visualisation are working just fine. The problem occurs when we include actual documents. In this case, we get some failed shards (usually 1 out of 5) and error: Invalid shift value (xx) in prefixCoded bytes (is encoded value really a geo point?).\n\nDetails:\nOur geolocation index is based on:\n\n```\n\"dynamic_templates\": [{\n....\n{\n \"ner_geo\": {\n \"mapping\": {\n \"type\": \"geo_point\",\n \"lat_lon\": true\n },\n \"path_match\": \"*.coordinates\"\n }\n }],\n```\n\nThe ok query with the error is as follows. If we change the query size to 0 (map visualizations example), the query completes without problem.\n\n```\n{\n \"size\": 100,\n \"aggs\": {\n \"2\": {\n \"geohash_grid\": {\n \"field\": \"authors.affiliation.coordinates\",\n \"precision\": 2\n }\n }\n },\n \"query\": {\n \"filtered\": {\n \"query\": {\n \"query_string\": {\n \"analyze_wildcard\": true,\n \"query\": \"*\"\n }\n },\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"geo_bounding_box\": {\n \"authors.affiliation.coordinates\": {\n \"top_left\": {\n \"lat\": 61.10078883158897,\n \"lon\": -170.15625\n },\n \"bottom_right\": {\n \"lat\": -64.92354174306496,\n \"lon\": 118.47656249999999\n }\n }\n }\n }\n ],\n \"must_not\": []\n }\n }\n }\n },\n \"highlight\": {\n \"pre_tags\": [\n \"@kibana-highlighted-field@\"\n ],\n \"post_tags\": [\n \"@/kibana-highlighted-field@\"\n ],\n \"fields\": {\n \"*\": {}\n },\n \"require_field_match\": false,\n \"fragment_size\": 2147483647\n }\n}\n```\n\nElasticsearch version**: 2.3.0\nOS version**: Elasticsearch docker image with head plugin, marvel and big desk installed\n\nThank you for your help,\nregards,\nJakub Smid\n", "comments": [ { "body": "@jaksmid could you provide some documents and the stack trace that is produced when you see this exception please?\n", "created_at": "2016-04-06T11:08:13Z" }, { "body": "@jpountz given that this only happens with `size` > 0, I'm wondering if this highlighting trying to highlight the geo field? Perhaps with no documents on a particular shard?\n\n/cc @nknize \n", "created_at": "2016-04-06T11:09:22Z" }, { "body": "I can reproduce something that looks just like this with a lucene test if you apply the patch on https://issues.apache.org/jira/browse/LUCENE-7185\n\nI suspect it may happen with extreme values such as latitude = 90 or longitude = 180 which are used much more in tests with the patch. 
See seed:\n\n```\n [junit4] Suite: org.apache.lucene.spatial.geopoint.search.TestGeoPointQuery\n [junit4] IGNOR/A 0.01s J1 | TestGeoPointQuery.testRandomBig\n [junit4] > Assumption #1: 'nightly' test group is disabled (@Nightly())\n [junit4] IGNOR/A 0.00s J1 | TestGeoPointQuery.testRandomDistanceHuge\n [junit4] > Assumption #1: 'nightly' test group is disabled (@Nightly())\n [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestGeoPointQuery -Dtests.method=testAllLonEqual -Dtests.seed=4ABB96AB44F4796E -Dtests.locale=id-ID -Dtests.timezone=Pacific/Fakaofo -Dtests.asserts=true -Dtests.file.encoding=US-ASCII\n [junit4] ERROR 0.35s J1 | TestGeoPointQuery.testAllLonEqual <<<\n [junit4] > Throwable #1: java.lang.IllegalArgumentException: Illegal shift value, must be 32..63; got shift=0\n [junit4] > at __randomizedtesting.SeedInfo.seed([4ABB96AB44F4796E:DBB16756B45E397A]:0)\n [junit4] > at org.apache.lucene.spatial.util.GeoEncodingUtils.geoCodedToPrefixCodedBytes(GeoEncodingUtils.java:109)\n [junit4] > at org.apache.lucene.spatial.util.GeoEncodingUtils.geoCodedToPrefixCoded(GeoEncodingUtils.java:89)\n [junit4] > at org.apache.lucene.spatial.geopoint.search.GeoPointPrefixTermsEnum$Range.fillBytesRef(GeoPointPrefixTermsEnum.java:236)\n [junit4] > at org.apache.lucene.spatial.geopoint.search.GeoPointTermsEnum.nextRange(GeoPointTermsEnum.java:71)\n [junit4] > at org.apache.lucene.spatial.geopoint.search.GeoPointPrefixTermsEnum.nextRange(GeoPointPrefixTermsEnum.java:171)\n [junit4] > at org.apache.lucene.spatial.geopoint.search.GeoPointPrefixTermsEnum.nextSeekTerm(GeoPointPrefixTermsEnum.java:190)\n [junit4] > at org.apache.lucene.index.FilteredTermsEnum.next(FilteredTermsEnum.java:212)\n [junit4] > at org.apache.lucene.spatial.geopoint.search.GeoPointTermQueryConstantScoreWrapper$1.scorer(GeoPointTermQueryConstantScoreWrapper.java:110)\n [junit4] > at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)\n [junit4] > at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:644)\n [junit4] > at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)\n [junit4] > at org.apache.lucene.search.BooleanWeight.optionalBulkScorer(BooleanWeight.java:231)\n [junit4] > at org.apache.lucene.search.BooleanWeight.booleanScorer(BooleanWeight.java:297)\n [junit4] > at org.apache.lucene.search.BooleanWeight.bulkScorer(BooleanWeight.java:364)\n [junit4] > at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:644)\n [junit4] > at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)\n [junit4] > at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)\n [junit4] > at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)\n [junit4] > at org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)\n [junit4] > at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)\n [junit4] > at org.apache.lucene.spatial.util.BaseGeoPointTestCase.verifyRandomRectangles(BaseGeoPointTestCase.java:835)\n [junit4] > at org.apache.lucene.spatial.util.BaseGeoPointTestCase.verify(BaseGeoPointTestCase.java:763)\n [junit4] > at org.apache.lucene.spatial.util.BaseGeoPointTestCase.testAllLonEqual(BaseGeoPointTestCase.java:495)\n\n```\n", "created_at": "2016-04-07T07:17:50Z" }, { "body": "Hi @clintongormley, thank you for your message. 
\n\nThe stack trace is as follows:\n`RemoteTransportException[[elasticsearch_4][172.17.0.2:9300][indices:data/read/search[phase/fetch/id]]]; nested: FetchPhaseExecutionException[Fetch Failed [Failed to highlight field [cyberdyne_metadata.ner.mitie.model.DISEASE.tag]]]; nested: NumberFormatException[Invalid shift value (65) in prefixCoded bytes (is encoded value really a geo point?)];\nCaused by: FetchPhaseExecutionException[Fetch Failed [Failed to highlight field [cyberdyne_metadata.ner.mitie.model.DISEASE.tag]]]; nested: NumberFormatException[Invalid shift value (65) in prefixCoded bytes (is encoded value really a geo point?)];\n at org.elasticsearch.search.highlight.PlainHighlighter.highlight(PlainHighlighter.java:123)\n at org.elasticsearch.search.highlight.HighlightPhase.hitExecute(HighlightPhase.java:126)\n at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:188)\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:592)\n at org.elasticsearch.search.action.SearchServiceTransportAction$FetchByIdTransportHandler.messageReceived(SearchServiceTransportAction.java:408)\n at org.elasticsearch.search.action.SearchServiceTransportAction$FetchByIdTransportHandler.messageReceived(SearchServiceTransportAction.java:405)\n at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)\n at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:300)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.NumberFormatException: Invalid shift value (65) in prefixCoded bytes (is encoded value really a geo point?)\n at org.apache.lucene.spatial.util.GeoEncodingUtils.getPrefixCodedShift(GeoEncodingUtils.java:134)\n at org.apache.lucene.spatial.geopoint.search.GeoPointPrefixTermsEnum.accept(GeoPointPrefixTermsEnum.java:219)\n at org.apache.lucene.index.FilteredTermsEnum.next(FilteredTermsEnum.java:232)\n at org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:67)\n at org.apache.lucene.search.ScoringRewrite.rewrite(ScoringRewrite.java:108)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:220)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:227)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:113)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:113)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.getWeightedSpanTerms(WeightedSpanTermExtractor.java:505)\n at org.apache.lucene.search.highlight.QueryScorer.initExtractor(QueryScorer.java:218)\n at org.apache.lucene.search.highlight.QueryScorer.init(QueryScorer.java:186)\n at org.apache.lucene.search.highlight.Highlighter.getBestTextFragments(Highlighter.java:195)\n at org.elasticsearch.search.highlight.PlainHighlighter.highlight(PlainHighlighter.java:108)\n ... 
12 more`\n\nThe field cyberdyne_metadata.ner.mitie.model.DISEASE.tag should not be a geopoint according to the dynamic template.\n", "created_at": "2016-04-07T07:20:27Z" }, { "body": "@rmuir oh, good catch\n@clintongormley The stack trace indeed suggests that the issue is with highlighting on the geo field. Regardless of this bug, I wonder that we should fail early when highlighting on anything but text fields and/or exclude non-text fields from wildcard matching.\n", "created_at": "2016-04-07T07:33:06Z" }, { "body": "> I wonder that we should fail early when highlighting on anything but text fields and/or exclude non-text fields from wildcard matching.\n\n+1 to fail early if the user explicitly defined a non text field to highlight on and exclude non text fields when using wildcards\n", "created_at": "2016-04-07T08:40:43Z" }, { "body": "I was running into this bug during a live demo... Yes I know, I've should have tested all demo scenario's after updating ES :grimacing: . Anyway, +1 for fixing this!\n", "created_at": "2016-04-17T08:09:07Z" }, { "body": "-I´m having the same error. It's happends with doc having location and trying to use \n\"highlight\": {... \"require_field_match\": false ...}\n\nthanks!\n", "created_at": "2016-04-18T21:45:45Z" }, { "body": "I'm unclear as to what exactly is going on here, but I'm running into the same issue. I'm attempting to do a geo bounding box in Kibana while viewing the results in the Discover tab. Disabling highlighting in Kibana fixes the issue, but I would actually like to keep highlighting enabled, since it's super useful otherwise.\n\nIt sounds from what others are saying that this should fail when querying on _any_ non-string field, but I am not getting the same failure on numeric fields. Is it just an issue with geoip fields? I suppose another nice thing would be to explicitly allow for configuration of which fields should be highlighted in Kibana.\n", "created_at": "2016-05-03T01:52:24Z" }, { "body": "Please fix this issue.\n", "created_at": "2016-05-03T10:40:19Z" }, { "body": "I wrote two tests so that everyone can reproduce what happens easily: https://github.com/brwe/elasticsearch/commit/ffa242941e4ede34df67301f7b9d46ea8719cc22\n\nIn brief:\nThe plain highlighter tries to highlight whatever the BBQuery provides as terms in the text \"60,120\" if that is how the `geo_point` was indexed (if the point was indexed with `{\"lat\": 60, \"lon\": 120}` nothing will happen because we cannot even extract anything from the source). The terms in the text are provided to Lucene as a token steam with a keyword analyzer.\nIn Lucene, this token stream is converted this via a longish call stack into a terms enum. But this terms enum is pulled from the query that contains the terms that are to be highlighted. In this case we call `GeoPointMultiTermQuery.getTermsEnum(terms)` which wraps the term in a `GeoPointTermsEnum`. This enum tries to convert a prefix coded geo term back to something else but because it is really just the string \"60,120\" it throws the exception we see. \n\nI am unsure yet how a correct fix would look like but do wonder why we try highlingting on numeric and geo fields at all? If anyone has an opinion let me know.\n", "created_at": "2016-05-04T17:50:50Z" }, { "body": "I missed @jpountz comment:\n\n> Regardless of this bug, I wonder that we should fail early when highlighting on anything but text fields and/or exclude non-text fields from wildcard matching.\n\nI agree. 
Will make a pr for that.\n", "created_at": "2016-05-04T17:57:01Z" }, { "body": "@brwe you did something similar before: https://github.com/elastic/elasticsearch/pull/11364 - i would have thought that that PR should have fixed this issue?\n", "created_at": "2016-05-05T08:17:58Z" }, { "body": "@clintongormley Yes you are right. #11364 only addresses problems one gets when the way text is indexed is not compatible with the highlighter used. I do not remember why I did not exclude numeric fields then. \n", "created_at": "2016-05-05T09:15:10Z" }, { "body": "Great work. Tnx \n\n:sunglasses: \n", "created_at": "2016-05-07T13:15:25Z" }, { "body": "This is not fixed in 2.3.3 yet, correct?\n", "created_at": "2016-05-19T07:09:10Z" }, { "body": "@rodgermoore It should be fixed in 2.3.3, can you still reproduce the problem?\n", "created_at": "2016-05-19T07:13:30Z" }, { "body": "Ubuntu 14.04-04\nElasticsearch 2.3.3\nKibana 4.5.1\nJVM 1.8.0_66\n\nI am still able to reproduce this error in Kibana 4.5.1. I have a dashboard with a search panel with highlighting enabled. On the same Dashboard I have a tile map and after selecting an area in this map using the select function (draw a rectangle) I got the \"Invalid shift value (xx) in prefixCoded bytes (is encoded value really a geo point?)\" error.\n\nWhen I alter the json settings file of the search panel and remove highlighting the error does not pop-up.\n", "created_at": "2016-05-19T11:44:32Z" }, { "body": "@rodgermoore I cannot reproduce this but I might do something different from you. Here is my dashboard:\n\n![image](https://cloud.githubusercontent.com/assets/4320215/15393472/bd2b4cf2-1dcd-11e6-8ac1-cf6ba5e995b7.png)\n\nIs that what you did?\nCan you attach the whole stacktrace from the elasticsearch logs again? If you did not change the logging config the full search request should be in there. Also, if you can please add an example document.\n", "created_at": "2016-05-19T13:07:51Z" }, { "body": "I see you used \"text:blah\". I did not enter a search at all (so used the default wildcard) and then did the aggregation on the tile map. This resulted in the error. \n", "created_at": "2016-05-19T13:12:50Z" }, { "body": "I can remove the query and still get a result. Can you please attach the relevant part of the elasticsearch log? 
\n", "created_at": "2016-05-19T13:16:46Z" }, { "body": "Here you go:\n\n```\n[2016-05-19 15:23:08,270][DEBUG][action.search ] [Black King] All shards failed for phase: [query_fetch]\nRemoteTransportException[[Black King][192.168.48.18:9300][indices:data/read/search[phase/query+fetch]]]; nested: FetchPhaseExecutionException[Fetch Failed [Failed to highlight field [tags.nl]]]; nested: NumberFormatException[Invalid shift value (115) in prefixCoded bytes (is encoded value really a geo point?)];\nCaused by: FetchPhaseExecutionException[Fetch Failed [Failed to highlight field [tags.nl]]]; nested: NumberFormatException[Invalid shift value (115) in prefixCoded bytes (is encoded value really a geo point?)];\n at org.elasticsearch.search.highlight.PlainHighlighter.highlight(PlainHighlighter.java:123)\n at org.elasticsearch.search.highlight.HighlightPhase.hitExecute(HighlightPhase.java:140)\n at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:188)\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:480)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchServiceTransportAction.java:392)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchServiceTransportAction.java:389)\n at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.NumberFormatException: Invalid shift value (115) in prefixCoded bytes (is encoded value really a geo point?)\n at org.apache.lucene.spatial.util.GeoEncodingUtils.getPrefixCodedShift(GeoEncodingUtils.java:134)\n at org.apache.lucene.spatial.geopoint.search.GeoPointPrefixTermsEnum.accept(GeoPointPrefixTermsEnum.java:219)\n at org.apache.lucene.index.FilteredTermsEnum.next(FilteredTermsEnum.java:232)\n at org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:67)\n at org.apache.lucene.search.ScoringRewrite.rewrite(ScoringRewrite.java:108)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:220)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:227)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:113)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:113)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.getWeightedSpanTerms(WeightedSpanTermExtractor.java:505)\n at org.apache.lucene.search.highlight.QueryScorer.initExtractor(QueryScorer.java:218)\n at org.apache.lucene.search.highlight.QueryScorer.init(QueryScorer.java:186)\n at org.apache.lucene.search.highlight.Highlighter.getBestTextFragments(Highlighter.java:195)\n at org.elasticsearch.search.highlight.PlainHighlighter.highlight(PlainHighlighter.java:108)\n ... 
12 more\n```\n\nWe are using dynamic mapping and we dynamically analyse all string fields using the Dutch language analyzer. All string fields get a non analyzed field: \"field.raw\" and a Dutch analyzed field \"field.nl\". \n", "created_at": "2016-05-19T13:37:15Z" }, { "body": "Ah...I was hoping to get the actual request but it is not in the stacktrace after all. Can you also add the individual requests from the panels in your dashboard (in the spy tab) and a screenshot so I can see what the geo bounding box filter filters on? I could then try to reconstruct the request.\n\nAlso, are you sure you upgraded all nodes in the cluster? Check with `curl -XGET \"http://hostname:port/_nodes\"`. Would be great if you could add the output of that here too just to be sure. \n", "created_at": "2016-05-19T13:46:59Z" }, { "body": "I have got the exact same issue. I am running 2.3.3. All my nodes (1) are upgraded.\n", "created_at": "2016-05-19T14:17:36Z" }, { "body": "<img width=\"1676\" alt=\"screen shot 2016-05-19 at 16 29 15\" src=\"https://cloud.githubusercontent.com/assets/78766/15397413/7858191c-1de0-11e6-802b-773f4a7ecf79.png\">\n", "created_at": "2016-05-19T14:42:02Z" }, { "body": "Here you go.\n\nTile Map Query:\n\n```\n{\n \"query\": {\n \"filtered\": {\n \"query\": {\n \"query_string\": {\n \"analyze_wildcard\": true,\n \"query\": \"*\"\n }\n },\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"geo_bounding_box\": {\n \"SomeGeoField\": {\n \"top_left\": {\n \"lat\": REMOVED,\n \"lon\": REMOVED\n },\n \"bottom_right\": {\n \"lat\": REMOVED,\n \"lon\": REMOVED\n }\n }\n },\n \"$state\": {\n \"store\": \"appState\"\n }\n },\n {\n \"query\": {\n \"query_string\": {\n \"query\": \"*\",\n \"analyze_wildcard\": true\n }\n }\n },\n {\n \"range\": {\n \"@timestamp\": {\n \"gte\": 1458485686484,\n \"lte\": 1463666086484,\n \"format\": \"epoch_millis\"\n }\n }\n }\n ],\n \"must_not\": []\n }\n }\n }\n },\n \"size\": 0,\n \"aggs\": {\n \"2\": {\n \"geohash_grid\": {\n \"field\": \"SomeGeoField\",\n \"precision\": 5\n }\n }\n }\n}\n```\n\nI'm using a single node cluster, here's the info:\n\n```\n{\n \"cluster_name\": \"elasticsearch\",\n \"nodes\": {\n \"RtBthRfeSOSud1XfRRAkSA\": {\n \"name\": \"Black King\",\n \"transport_address\": \"192.168.48.18:9300\",\n \"host\": \"192.168.48.18\",\n \"ip\": \"192.168.48.18\",\n \"version\": \"2.3.3\",\n \"build\": \"218bdf1\",\n \"http_address\": \"192.168.48.18:9200\",\n \"settings\": {\n \"pidfile\": \"/var/run/elasticsearch/elasticsearch.pid\",\n \"cluster\": {\n \"name\": \"elasticsearch\"\n },\n \"path\": {\n \"conf\": \"/etc/elasticsearch\",\n \"data\": \"/var/lib/elasticsearch\",\n \"logs\": \"/var/log/elasticsearch\",\n \"home\": \"/usr/share/elasticsearch\",\n \"repo\": [\n \"/home/somename/es_backups\"\n ]\n },\n \"name\": \"Black King\",\n \"client\": {\n \"type\": \"node\"\n },\n \"foreground\": \"false\",\n \"config\": {\n \"ignore_system_properties\": \"true\"\n },\n \"network\": {\n \"host\": \"0.0.0.0\"\n }\n },\n \"os\": {\n \"refresh_interval_in_millis\": 1000,\n \"name\": \"Linux\",\n \"arch\": \"amd64\",\n \"version\": \"3.19.0-59-generic\",\n \"available_processors\": 8,\n \"allocated_processors\": 8\n },\n \"process\": {\n \"refresh_interval_in_millis\": 1000,\n \"id\": 1685,\n \"mlockall\": false\n },\n \"jvm\": {\n \"pid\": 1685,\n \"version\": \"1.8.0_66\",\n \"vm_name\": \"Java HotSpot(TM) 64-Bit Server VM\",\n \"vm_version\": \"25.66-b17\",\n \"vm_vendor\": \"Oracle Corporation\",\n \"start_time_in_millis\": 1463663018422,\n \"mem\": 
{\n \"heap_init_in_bytes\": 6442450944,\n \"heap_max_in_bytes\": 6372720640,\n \"non_heap_init_in_bytes\": 2555904,\n \"non_heap_max_in_bytes\": 0,\n \"direct_max_in_bytes\": 6372720640\n },\n \"gc_collectors\": [\n \"ParNew\",\n \"ConcurrentMarkSweep\"\n ],\n \"memory_pools\": [\n \"Code Cache\",\n \"Metaspace\",\n \"Compressed Class Space\",\n \"Par Eden Space\",\n \"Par Survivor Space\",\n \"CMS Old Gen\"\n ],\n \"using_compressed_ordinary_object_pointers\": \"true\"\n },\n \"thread_pool\": {\n \"force_merge\": {\n \"type\": \"fixed\",\n \"min\": 1,\n \"max\": 1,\n \"queue_size\": -1\n },\n \"percolate\": {\n \"type\": \"fixed\",\n \"min\": 8,\n \"max\": 8,\n \"queue_size\": 1000\n },\n \"fetch_shard_started\": {\n \"type\": \"scaling\",\n \"min\": 1,\n \"max\": 16,\n \"keep_alive\": \"5m\",\n \"queue_size\": -1\n },\n \"listener\": {\n \"type\": \"fixed\",\n \"min\": 4,\n \"max\": 4,\n \"queue_size\": -1\n },\n \"index\": {\n \"type\": \"fixed\",\n \"min\": 8,\n \"max\": 8,\n \"queue_size\": 200\n },\n \"refresh\": {\n \"type\": \"scaling\",\n \"min\": 1,\n \"max\": 4,\n \"keep_alive\": \"5m\",\n \"queue_size\": -1\n },\n \"suggest\": {\n \"type\": \"fixed\",\n \"min\": 8,\n \"max\": 8,\n \"queue_size\": 1000\n },\n \"generic\": {\n \"type\": \"cached\",\n \"keep_alive\": \"30s\",\n \"queue_size\": -1\n },\n \"warmer\": {\n \"type\": \"scaling\",\n \"min\": 1,\n \"max\": 4,\n \"keep_alive\": \"5m\",\n \"queue_size\": -1\n },\n \"search\": {\n \"type\": \"fixed\",\n \"min\": 13,\n \"max\": 13,\n \"queue_size\": 1000\n },\n \"flush\": {\n \"type\": \"scaling\",\n \"min\": 1,\n \"max\": 4,\n \"keep_alive\": \"5m\",\n \"queue_size\": -1\n },\n \"fetch_shard_store\": {\n \"type\": \"scaling\",\n \"min\": 1,\n \"max\": 16,\n \"keep_alive\": \"5m\",\n \"queue_size\": -1\n },\n \"management\": {\n \"type\": \"scaling\",\n \"min\": 1,\n \"max\": 5,\n \"keep_alive\": \"5m\",\n \"queue_size\": -1\n },\n \"get\": {\n \"type\": \"fixed\",\n \"min\": 8,\n \"max\": 8,\n \"queue_size\": 1000\n },\n \"bulk\": {\n \"type\": \"fixed\",\n \"min\": 8,\n \"max\": 8,\n \"queue_size\": 50\n },\n \"snapshot\": {\n \"type\": \"scaling\",\n \"min\": 1,\n \"max\": 4,\n \"keep_alive\": \"5m\",\n \"queue_size\": -1\n }\n },\n \"transport\": {\n \"bound_address\": [\n \"[::]:9300\"\n ],\n \"publish_address\": \"192.168.48.18:9300\",\n \"profiles\": {}\n },\n \"http\": {\n \"bound_address\": [\n \"[::]:9200\"\n ],\n \"publish_address\": \"192.168.48.18:9200\",\n \"max_content_length_in_bytes\": 104857600\n },\n \"plugins\": [],\n \"modules\": [\n {\n \"name\": \"lang-expression\",\n \"version\": \"2.3.3\",\n \"description\": \"Lucene expressions integration for Elasticsearch\",\n \"jvm\": true,\n \"classname\": \"org.elasticsearch.script.expression.ExpressionPlugin\",\n \"isolated\": true,\n \"site\": false\n },\n {\n \"name\": \"lang-groovy\",\n \"version\": \"2.3.3\",\n \"description\": \"Groovy scripting integration for Elasticsearch\",\n \"jvm\": true,\n \"classname\": \"org.elasticsearch.script.groovy.GroovyPlugin\",\n \"isolated\": true,\n \"site\": false\n },\n {\n \"name\": \"reindex\",\n \"version\": \"2.3.3\",\n \"description\": \"_reindex and _update_by_query APIs\",\n \"jvm\": true,\n \"classname\": \"org.elasticsearch.index.reindex.ReindexPlugin\",\n \"isolated\": true,\n \"site\": false\n }\n ]\n }\n }\n}\n```\n\nScreenshot, I had to clear out the data:\n\n![error_es](https://cloud.githubusercontent.com/assets/12231719/15397399/668a374c-1de0-11e6-903d-f929a2d9f0b2.PNG)\n", "created_at": 
"2016-05-19T14:42:12Z" }, { "body": "@rodgermoore does the query you provided work correctly? You said that it started working once you deleted the highlighting and this query doesn't contain highlighting. Could you provide the query that doesn't work?\n", "created_at": "2016-05-19T14:45:25Z" }, { "body": "It does has highlighting enabled. This is the json for the search panel: \n\n```\n{\n \"index\": \"someindex\",\n \"query\": {\n \"query_string\": {\n \"query\": \"*\",\n \"analyze_wildcard\": true\n }\n },\n \"filter\": [],\n \"highlight\": {\n \"pre_tags\": [\n \"@kibana-highlighted-field@\"\n ],\n \"post_tags\": [\n \"@/kibana-highlighted-field@\"\n ],\n \"fields\": {\n \"*\": {}\n },\n \"require_field_match\": false,\n \"fragment_size\": 2147483647\n }\n}\n```\n\nI can't show the actual data so I selected to show only the timestamp field in the search panel in the screenshot...\n\nWhen I change the json of the search panel to:\n\n```\n{\n \"index\": \"someindex\",\n \"filter\": [],\n \"query\": {\n \"query_string\": {\n \"query\": \"*\",\n \"analyze_wildcard\": true\n }\n }\n}\n```\n\nThe error disappears.\n", "created_at": "2016-05-19T14:51:27Z" }, { "body": "If my understanding of the patch is correct, it shouldn't matter whether Kibana is including the highlighting field. Elasticsearch should only be trying to highlight string fields, even if a wildcard is being used.\n", "created_at": "2016-05-19T14:54:44Z" }, { "body": "Ok, I managed to reproduce it on 2.3.3. It happens with `\"geohash\": true` in the mapping. \n\nSteps are:\n\n```\nDELETE test\nPUT test \n{\n \"mappings\": {\n \"doc\": {\n \"properties\": {\n \"point\": {\n \"type\": \"geo_point\",\n \"geohash\": true\n }\n }\n }\n }\n}\n\nPUT test/doc/1\n{\n \"point\": \"60.12,100.34\"\n}\n\nPOST test/_search\n{\n \"query\": {\n \"geo_bounding_box\": {\n \"point\": {\n \"top_left\": {\n \"lat\": 61.10078883158897,\n \"lon\": -170.15625\n },\n \"bottom_right\": {\n \"lat\": -64.92354174306496,\n \"lon\": 118.47656249999999\n }\n }\n }\n },\n \"highlight\": {\n \"fields\": {\n \"*\": {}\n }\n }\n}\n```\n\nSorry, I did not think of that. I work on another fix.\n", "created_at": "2016-05-19T16:23:44Z" } ], "number": 17537, "title": "Invalid shift value (xx) in prefixCoded bytes (is encoded value really a geo point?)" }
{ "body": "We skip GeoPointInBBoxQuery already but not when it was rewritten\nit appears as GeoPointMultiTermQuery and needs to be skipped as well.\n\nsee #17537\n\nThis pull request is against the 2.4 branch and a follow up to https://github.com/elastic/elasticsearch/issues/17537#issuecomment-246112229 . Unfortunately my assumption that the highlighter never gets to see a rewritten query (see https://github.com/elastic/elasticsearch/pull/18495#issue-156019922) was wrong and so in some cases such as the example provided by @ajayar it does. We need to check for that as well and skip the highlighting in this case. \n\nThe tests in this pull request do not reproduce the issue #17537 on the 2.4 branch for me and I believe this is due to an unrelated change in Lucene. I still think we should merge this to 2.4 anyway to be on the safe side. On 2.3 the tests reproduce the issue. In addition the discussion on #17537 is not finished so this pull request doe not close the issue yet.\n\nOn master the problem was fixed in Lucene (see https://issues.apache.org/jira/browse/LUCENE-7293) so we can remove the check in CustomQueryScorer completely. I will open a separate pull request for that.\n", "number": 20412, "review_comments": [ { "body": "can you add a comment that this class in pkg private and that's why we are making this stunt?\n", "created_at": "2016-09-12T13:09:23Z" } ], "title": "skip GeoPointMultiTermQuery when highlighting" }
{ "commits": [ { "message": "skip GeoPointMultiTermQuery when highlighting\n\nWe skip GeoPointInBBoxQuery already but not when it was rewritten\nit appears as GeoPointInBBoxQuery and needs to be skipped as well.\n\nsee #17537" }, { "message": "add comment why we need this ugly code" }, { "message": "don't randomize highlighter types" } ], "files": [ { "diff": "@@ -27,14 +27,26 @@\n import org.apache.lucene.spatial.geopoint.search.GeoPointInBBoxQuery;\n import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;\n import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n-import org.elasticsearch.index.query.HasChildQueryBuilder;\n import org.elasticsearch.index.query.HasChildQueryParser;\n \n import java.io.IOException;\n import java.util.Map;\n \n public final class CustomQueryScorer extends QueryScorer {\n \n+ private static final Class<?> unsupportedGeoQuery;\n+\n+ static {\n+ try {\n+ // in extract() we need to check for GeoPointMultiTermQuery and skip extraction for queries that inherit from it.\n+ // But GeoPointMultiTermQuerythat is package private in Lucene hence we cannot use an instanceof check. This is why\n+ // we use this rather ugly workaround to get a Class and later be able to compare with isAssignableFrom().\n+ unsupportedGeoQuery = Class.forName(\"org.apache.lucene.spatial.geopoint.search.GeoPointMultiTermQuery\");\n+ } catch (ClassNotFoundException e) {\n+ throw new AssertionError(e);\n+ }\n+ }\n+\n public CustomQueryScorer(Query query, IndexReader reader, String field,\n String defaultField) {\n super(query, reader, field, defaultField);\n@@ -91,7 +103,7 @@ protected void extractUnknownQuery(Query query,\n }\n \n protected void extract(Query query, float boost, Map<String, WeightedSpanTerm> terms) throws IOException {\n- if (query instanceof GeoPointInBBoxQuery) {\n+ if (query instanceof GeoPointInBBoxQuery || unsupportedGeoQuery.isAssignableFrom(query.getClass())) {\n // skip all geo queries, see https://issues.apache.org/jira/browse/LUCENE-7293 and\n // https://github.com/elastic/elasticsearch/issues/17537\n return;", "filename": "core/src/main/java/org/elasticsearch/search/highlight/CustomQueryScorer.java", "status": "modified" }, { "diff": "@@ -2735,6 +2735,45 @@ public void testGeoFieldHighlightingWithDifferentHighlighters() throws IOExcepti\n assertThat(search.getHits().getAt(0).highlightFields().get(\"text\").fragments().length, equalTo(1));\n }\n \n+ public void testGeoFieldHighlightingWhenQueryGetsRewritten() throws IOException {\n+ // same as above but in this example the query gets rewritten during highlighting\n+ // see https://github.com/elastic/elasticsearch/issues/17537#issuecomment-244939633\n+ XContentBuilder mappings = jsonBuilder();\n+ mappings.startObject();\n+ mappings.startObject(\"jobs\")\n+ .startObject(\"_all\")\n+ .field(\"enabled\", false)\n+ .endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"loc\")\n+ .field(\"type\", \"geo_point\")\n+ .endObject()\n+ .startObject(\"jd\")\n+ .field(\"type\", \"string\")\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ mappings.endObject();\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"jobs\", mappings));\n+ ensureYellow();\n+\n+ client().prepareIndex(\"test\", \"jobs\", \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"jd\", \"some आवश्यकता है- आर्य समाज अनाथालय, 68 सिविल लाइन्स, बरेली को एक पुरूष रस text\")\n+ .field(\"loc\", \"12.934059,77.610741\").endObject())\n+ .get();\n+ refresh();\n+\n+ QueryBuilder query = 
QueryBuilders.functionScoreQuery(QueryBuilders.boolQuery().filter(QueryBuilders.geoBoundingBoxQuery(\"loc\")\n+ .bottomRight(-23.065941, 113.610741)\n+ .topLeft(48.934059, 41.610741)));\n+ SearchResponse search = client().prepareSearch().setSource(\n+ new SearchSourceBuilder().query(query)\n+ .highlight(new HighlightBuilder().highlighterType(\"plain\").field(\"jd\")).buildAsBytes()).get();\n+ assertNoFailures(search);\n+ assertThat(search.getHits().totalHits(), equalTo(1L));\n+ }\n+\n public void testACopyFieldWithNestedQuery() throws Exception {\n String mapping = jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n .startObject(\"foo\")", "filename": "core/src/test/java/org/elasticsearch/search/highlight/HighlighterSearchIT.java", "status": "modified" }, { "diff": "@@ -67,8 +67,12 @@ public void checkGeoQueryHighlighting(Query geoQuery) throws IOException, Invali\n String fragment = highlighter.getBestFragment(fieldNameAnalyzer.tokenStream(\"text\", \"Arbitrary text field which should not cause \" +\n \"a failure\"), \"Arbitrary text field which should not cause a failure\");\n assertThat(fragment, equalTo(\"Arbitrary text field which should not cause a <B>failure</B>\"));\n- // TODO: This test will fail if we pass in an instance of GeoPointInBBoxQueryImpl too. Should we also find a way to work around that\n- // or can the query not be rewritten before it is passed into the highlighter?\n+ Query rewritten = boolQuery.rewrite(null);\n+ highlighter =\n+ new org.apache.lucene.search.highlight.Highlighter(new CustomQueryScorer(rewritten));\n+ fragment = highlighter.getBestFragment(fieldNameAnalyzer.tokenStream(\"text\", \"Arbitrary text field which should not cause \" +\n+ \"a failure\"), \"Arbitrary text field which should not cause a failure\");\n+ assertThat(fragment, equalTo(\"Arbitrary text field which should not cause a <B>failure</B>\"));\n }\n \n public void testGeoPointInBBoxQueryHighlighting() throws IOException, InvalidTokenOffsetsException {", "filename": "core/src/test/java/org/elasticsearch/search/highlight/PlainHighlighterTests.java", "status": "modified" } ] }
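The CustomQueryScorer change above relies on a reflection pattern worth spelling out: because GeoPointMultiTermQuery is package private in Lucene, the highlighter cannot use an instanceof check and instead loads the class once and compares with isAssignableFrom(). The sketch below is a minimal, self-contained illustration of that pattern, assuming Lucene's spatial module is on the classpath; it is not part of the Elasticsearch code base, and the wrapper class name is made up for the example.

```java
// Minimal sketch of the "instanceof for a package-private type" workaround used in the PR.
// Assumes org.apache.lucene.spatial.geopoint.search.GeoPointMultiTermQuery is available at
// runtime (as in Lucene 5.x/6.x); the helper class itself is hypothetical.
public final class PackagePrivateTypeCheck {

    private static final Class<?> UNSUPPORTED_GEO_QUERY;

    static {
        try {
            // Load the package-private class reflectively; we only need its Class object,
            // never an instance, so constructor access restrictions do not matter here.
            UNSUPPORTED_GEO_QUERY =
                Class.forName("org.apache.lucene.spatial.geopoint.search.GeoPointMultiTermQuery");
        } catch (ClassNotFoundException e) {
            // Mirrors the PR: the class is expected to exist, so treat absence as a hard failure.
            throw new AssertionError(e);
        }
    }

    private PackagePrivateTypeCheck() {}

    /** Equivalent of {@code query instanceof GeoPointMultiTermQuery} for callers outside the package. */
    public static boolean isUnsupportedGeoQuery(Object query) {
        return UNSUPPORTED_GEO_QUERY.isAssignableFrom(query.getClass());
    }
}
```

Used from the highlighter's extract() method, this check lets subclasses of the package-private query (such as the rewritten bounding-box query) be skipped just like GeoPointInBBoxQuery itself.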
{ "body": "**Elasticsearch version**: 2.4.0\n\n**Plugins installed**: None\n\n**JVM version**: 8\n\n**OS version**: MacOSX, Linux\n\nHighlighting no longer works for Lucene-style wilcard queries within a function-score.\n\nThis was working ok in v2.3.4, but is not working since v2.4.0\n\nI think this may be related to this change: https://github.com/elastic/elasticsearch/pull/18183\n\nFor example, previously the below was returning highlights ok. Now it still returns results, but without any highlights:\n\n```\n{\n \"highlight\": {\n \"fields\": {\n \"name\": {}\n }\n },\n\n \"query\": {\n \"function_score\": {\n \"script_score\": {\n \"script\": \"(1 + 5 * log(1 + doc['size_active'].value) )\"\n },\n\n \"query\": {\n \"query_string\": {\n \"fields\": [\n \"name\"\n ],\n \"query\": \"microsoft*\"\n }\n }\n\n }\n }\n}\n```\n\nThe same query without the wildcard returns highlights for all hits:\n\n```\n{\n \"highlight\": {\n \"fields\": {\n \"name\": {}\n }\n },\n\n \"query\": {\n \"function_score\": {\n \"script_score\": {\n \"script\": \"(1 + 5 * log(1 + doc['size_active'].value) )\"\n },\n\n \"query\": {\n \"query_string\": {\n \"fields\": [\n \"name\"\n ],\n \"query\": \"microsoft\"\n }\n }\n\n }\n }\n}\n```\n", "comments": [ { "body": "@jimferenczi would you mind taking a look?\n", "created_at": "2016-09-08T19:30:18Z" }, { "body": "@clintongormley, I opened https://github.com/elastic/elasticsearch/pull/20400 \n", "created_at": "2016-09-09T11:05:42Z" }, { "body": "Wildcards support with FunctionScoreQuery is issue is fixed ? If yes in which version of ES it is supported? Is there a patch available to fix this issue in ES 2.4.0 version?", "created_at": "2017-02-14T12:29:25Z" }, { "body": "@vinothkgithub see https://github.com/elastic/elasticsearch/pull/20400 - it's fixed in 2.4.1", "created_at": "2017-02-14T12:30:51Z" }, { "body": "Tried the below query in 2.4.1 and 2.4.4 but it is not highlighting wildcard search in FunctionScoreQuery Any suggestions ?\r\nRequest Query:\r\n\r\n{\r\n \"query\" : {\r\n \"function_score\" : {\r\n \"query\" : {\r\n \"function_score\" : {\r\n \"query\" : {\r\n \"query_string\" : {\r\n \"fields\":[\"content\",\"title\"],\r\n \"query\" : \"rec*\",\r\n \"default_operator\" : \"and\",\r\n \"auto_generate_phrase_queries\" : false\r\n }\r\n },\r\n \"functions\" : [ {\r\n \"filter\" : {\r\n \"match\" : {\r\n \"content\" : {\r\n \"query\" : \"rec*\",\r\n \"type\" : \"phrase\"\r\n }\r\n }\r\n },\r\n \"weight\" : 20.0\r\n }, {\r\n \"filter\" : {\r\n \"match\" : {\r\n \"title\" : {\r\n \"query\" : \"rec*\",\r\n \"type\" : \"phrase\"\r\n }\r\n }\r\n },\r\n \"weight\" : 20.0\r\n } ]\r\n }\r\n },\r\n \"functions\" : [ ]\r\n }\r\n },\r\n \"explain\" : true,\r\n \"fields\" : [\"title\",\"content\"],\r\n \"highlight\" : {\r\n \"fields\" : {\r\n \"content\":{},\r\n \"title\":{}\r\n }\r\n }\r\n}\r\n\r\n\r\n\r\n\r\n\r\nResponse:\r\n{\r\n \"took\": 19,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 3,\r\n \"max_score\": 20,\r\n \"hits\": [\r\n {\r\n \"_shard\": 0,\r\n \"_node\": \"gfwiOjHuR-uvQE8cyhRhQA\",\r\n \"_index\": \"book\",\r\n \"_type\": \"samplebook\",\r\n \"_id\": \"AVo83CXrro3C9qemkLzo\",\r\n \"_score\": 20,\r\n \"fields\": {\r\n \"title\": [\r\n \"book2\"\r\n ],\r\n \"content\": [\r\n \"sample books rec\"\r\n ]\r\n },\r\n \"_explanation\": {\r\n \"value\": 20,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 20,\r\n \"description\": \"function 
score, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"max of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"content:rec*, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 20,\r\n \"description\": \"min of:\",\r\n \"details\": [\r\n {\r\n \"value\": 20,\r\n \"description\": \"function score, score mode [multiply]\",\r\n \"details\": [\r\n {\r\n \"value\": 20,\r\n \"description\": \"function score, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"match filter: content:rec\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 20,\r\n \"description\": \"product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"constant score 1.0 - no function provided\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 20,\r\n \"description\": \"weight\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 3.4028235e+38,\r\n \"description\": \"maxBoost\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 0,\r\n \"description\": \"match on required clause, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 0,\r\n \"description\": \"# clause\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"_type:samplebook, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n },\r\n {\r\n \"_shard\": 3,\r\n \"_node\": \"gfwiOjHuR-uvQE8cyhRhQA\",\r\n \"_index\": \"book\",\r\n \"_type\": \"samplebook\",\r\n \"_id\": \"AVo83AThro3C9qemkLzn\",\r\n \"_score\": 1,\r\n \"fields\": {\r\n \"title\": [\r\n \"book\"\r\n ],\r\n \"content\": [\r\n \"sample books recom\"\r\n ]\r\n },\r\n \"_explanation\": {\r\n \"value\": 1,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"max of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"content:rec*, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 0,\r\n \"description\": \"match on required clause, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 0,\r\n \"description\": \"# clause\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"_type:samplebook, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n },\r\n {\r\n \"_shard\": 4,\r\n \"_node\": \"gfwiOjHuR-uvQE8cyhRhQA\",\r\n \"_index\": \"book\",\r\n \"_type\": \"samplebook\",\r\n \"_id\": \"AVo82-Ufro3C9qemkLzm\",\r\n \"_score\": 1,\r\n \"fields\": {\r\n \"title\": [\r\n \"book\"\r\n ],\r\n \"content\": [\r\n \"sample books recommended\"\r\n ]\r\n },\r\n \"_explanation\": {\r\n \"value\": 1,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"max of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"content:rec*, product 
of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 0,\r\n \"description\": \"match on required clause, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 0,\r\n \"description\": \"# clause\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"_type:samplebook, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n }\r\n ]\r\n }\r\n}", "created_at": "2017-02-14T13:57:21Z" }, { "body": "> Tried the below query in 2.4.1 and 2.4.4 but it is not highlighting wildcard search in FunctionScoreQuery Any suggestions ?\r\n\r\nYou have a nested `function_score` inside a `function_score`:\r\n````\r\n\"query\" : {\r\n \"function_score\" : {\r\n \"query\" : {\r\n \"function_score\" : {\r\n````\r\n\r\nAny reason why ? I don't see a use case for this but you're right it's not working. The highlighter extracts the query inside the `function_score` but not recursively. \r\nYou can just remove the parent function score in your recreation and the terms are correctly highlighted.", "created_at": "2017-02-14T14:42:57Z" }, { "body": "Thanks , Tired with one function_score as suggested still it is not highlighting any configuration required?\r\n\r\n{\r\n \"query\" : {\r\n \"function_score\" : {\r\n \"query\" : {\r\n \"query_string\" : {\r\n \"fields\":[\"content\",\"title\"],\r\n \"query\" : \"rec*\",\r\n \"default_operator\" : \"and\",\r\n \"auto_generate_phrase_queries\" : false\r\n }\r\n },\r\n \"functions\" : [ {\r\n \"filter\" : {\r\n \"match\" : {\r\n \"content\" : {\r\n \"query\" : \"rec*\",\r\n \"type\" : \"phrase\"\r\n }\r\n }\r\n },\r\n \"weight\" : 20.0\r\n }, {\r\n \"filter\" : {\r\n \"match\" : {\r\n \"title\" : {\r\n \"query\" : \"rec*\",\r\n \"type\" : \"phrase\"\r\n }\r\n }\r\n },\r\n \"weight\" : 20.0\r\n } ]\r\n }\r\n },\r\n \"explain\" : true,\r\n \"fields\" : [\"title\",\"content\"],\r\n \"highlight\" : {\r\n \"fields\" : {\r\n \"content\":{},\r\n \"title\":{}\r\n }\r\n }\r\n}\r\n\r\n\r\nResponse:\r\n\r\n{\r\n \"took\": 17,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 3,\r\n \"max_score\": 1,\r\n \"hits\": [\r\n {\r\n \"_shard\": 0,\r\n \"_node\": \"1NPwf1MBTcC4hZTIpiTO8Q\",\r\n \"_index\": \"book\",\r\n \"_type\": \"samplebook\",\r\n \"_id\": \"AVo9aOYUmOpf6AthWiL6\",\r\n \"_score\": 1,\r\n \"fields\": {\r\n \"title\": [\r\n \"samplebook\"\r\n ],\r\n \"content\": [\r\n \"this sample book is recommended\"\r\n ]\r\n },\r\n \"_explanation\": {\r\n \"value\": 1,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"max of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"content:rec*, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 0,\r\n \"description\": \"match on required clause, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 0,\r\n \"description\": \"# clause\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n 
\"description\": \"_type:samplebook, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n },\r\n {\r\n \"_shard\": 1,\r\n \"_node\": \"1NPwf1MBTcC4hZTIpiTO8Q\",\r\n \"_index\": \"book\",\r\n \"_type\": \"samplebook\",\r\n \"_id\": \"AVo9aTMomOpf6AthWiL7\",\r\n \"_score\": 1,\r\n \"fields\": {\r\n \"title\": [\r\n \"samplebook1\"\r\n ],\r\n \"content\": [\r\n \"this sample book is recom\"\r\n ]\r\n },\r\n \"_explanation\": {\r\n \"value\": 1,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"max of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"content:rec*, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 0,\r\n \"description\": \"match on required clause, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 0,\r\n \"description\": \"# clause\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"_type:samplebook, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n },\r\n {\r\n \"_shard\": 4,\r\n \"_node\": \"1NPwf1MBTcC4hZTIpiTO8Q\",\r\n \"_index\": \"book\",\r\n \"_type\": \"samplebook\",\r\n \"_id\": \"AVo9aXRBmOpf6AthWiL8\",\r\n \"_score\": 1,\r\n \"fields\": {\r\n \"title\": [\r\n \"samplebook2\"\r\n ],\r\n \"content\": [\r\n \"this sample book is recorded\"\r\n ]\r\n },\r\n \"_explanation\": {\r\n \"value\": 1,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"max of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"content:rec*, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 0,\r\n \"description\": \"match on required clause, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 0,\r\n \"description\": \"# clause\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"_type:samplebook, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n", "created_at": "2017-02-15T12:56:10Z" }, { "body": "@vinothkgithub your example works with 2.4.4\r\nThe fix in 2.4.1 was incomplete and required another fix ;)\r\nPlease upgrade to 2.4.4", "created_at": "2017-02-15T13:25:56Z" }, { "body": "Thanks, it works Good :-)", "created_at": "2017-02-17T09:00:01Z" }, { "body": "Hello,\r\n\r\nI tested this in ES 2.4.4 and I can still see this issue when I use wild card in query string. 
\r\nHere is my query\r\n```\r\nPOST workitemsearchshared_0_1/workItemContract/_search\r\n{\r\n \"highlight\": {\r\n \"pre_tags\": [\r\n \"<highlighthit>\"\r\n ],\r\n \"post_tags\": [\r\n \"</highlighthit>\"\r\n ],\r\n \"fields\": {\r\n \"fields.str|title|system$\": {}\r\n },\r\n \"require_field_match\": true\r\n },\r\n \"fields\": [\r\n \"fields.str|title|system$\"\r\n ],\r\n \"query\": {\r\n \"function_score\": {\r\n \"query\": {\r\n \"query_string\": {\r\n \"query\": \"park*\",\r\n \"fields\": [\r\n \"fields.str|title|system$\"\r\n ]\r\n }\r\n },\r\n \"functions\": [\r\n {\r\n \"filter\": {\r\n \"term\": {\r\n \"fields.str|title|system$\": \"parkour\"\r\n }\r\n },\r\n \"weight\": 8\r\n }\r\n ]\r\n }\r\n }\r\n}\r\n```\r\n**Highlighting working here**\r\n\r\n![highitingworking](https://cloud.githubusercontent.com/assets/5341933/23609530/7e2dda2c-0294-11e7-9762-731b1523dc25.PNG)\r\n\r\n**Highlighting not working here**\r\n\r\n![notworking](https://cloud.githubusercontent.com/assets/5341933/23609531/80776a78-0294-11e7-9809-7a82a41c2b62.PNG)\r\n\r\n**Server build etc.**\r\n![server](https://cloud.githubusercontent.com/assets/5341933/23609642/f6273f64-0294-11e7-9428-77054e1a15d1.PNG)\r\n", "created_at": "2017-03-06T12:20:18Z" } ], "number": 20392, "title": "Highlighting no longer works for Lucene-style wilcard queries within a function-score." }
{ "body": "Since the sub query of a function score query is checked on CustomQueryScorer#extractUnknwonQuery we try to extract the terms from the rewritten form of the sub query.\nMultiTermQuery rewrites queries within a constant score query/weight which returns an empty array when extractTerms is called.\nThe extraction of the inner terms of a constant score query/weight changed in Lucene somewhere between ES version 2.3 and 2.4 (https://issues.apache.org/jira/browse/LUCENE-6425) which is why this problem occurs on ES > 2.3.\nThis change moves the extraction of the sub query from CustomQueryScorer#extractUnknownQuery to CustomQueryScorer#extract in order to do the extraction of the terms on the original form of the sub query.\nThis fixes highlighting of sub queries that extend MultiTermQuery since there is a special path for this kind of query in the QueryScorer (which extract the terms to highlight).\n\nFixes #20392\n", "number": 20400, "review_comments": [], "title": "Fix highlighting of MultiTermQuery within a FunctionScoreQuery" }
{ "commits": [ { "message": "Fix highlighting of MultiTermQuery within a FunctionScoreQuery\n\nSince the sub query of a function score query is checked on CustomQueryScorer#extractUnknwonQuery we try to extract the terms from the rewritten form of the sub query.\nMultiTermQuery rewrites query within a constant score query/weight which returns an empty array when extractTerms is called.\nThe extraction of the inner terms of a constant score query/weight changed in Lucene somewhere between ES version 2.3 and 2.4 (https://issues.apache.org/jira/browse/LUCENE-6425) which is why this problem occurs on ES > 2.3.\nThis change moves the extraction of the sub query from CustomQueryScorer#extractUnknownQuery to CustomQueryScorer#extract in order to do the extraction of the terms on the original form of the sub query.\nThis fixes highlighting of sub queries that extend MultiTermQuery since there is a special path for this kind of query in the QueryScorer (which extract the terms to highlight)." } ], "files": [ { "diff": "@@ -78,10 +78,7 @@ public CustomWeightedSpanTermExtractor(String defaultField) {\n @Override\n protected void extractUnknownQuery(Query query,\n Map<String, WeightedSpanTerm> terms) throws IOException {\n- if (query instanceof FunctionScoreQuery) {\n- query = ((FunctionScoreQuery) query).getSubQuery();\n- extract(query, 1F, terms);\n- } else if (query instanceof FiltersFunctionScoreQuery) {\n+ if (query instanceof FiltersFunctionScoreQuery) {\n query = ((FiltersFunctionScoreQuery) query).getSubQuery();\n extract(query, 1F, terms);\n } else if (terms.isEmpty()) {\n@@ -97,9 +94,11 @@ protected void extract(Query query, float boost, Map<String, WeightedSpanTerm> t\n } else if (query instanceof HasChildQueryBuilder.LateParsingQuery) {\n // skip has_child or has_parent queries, see: https://github.com/elastic/elasticsearch/issues/14999\n return;\n+ } else if (query instanceof FunctionScoreQuery) {\n+ super.extract(((FunctionScoreQuery) query).getSubQuery(), boost, terms);\n+ } else {\n+ super.extract(query, boost, terms);\n }\n-\n- super.extract(query, boost, terms);\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/CustomQueryScorer.java", "status": "modified" }, { "diff": "@@ -38,6 +38,7 @@\n import org.elasticsearch.index.query.Operator;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder;\n import org.elasticsearch.index.search.MatchQuery;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.rest.RestStatus;\n@@ -2851,4 +2852,21 @@ public void testACopyFieldWithNestedQuery() throws Exception {\n assertThat(field.getFragments()[0].string(), equalTo(\"<em>brown</em>\"));\n assertThat(field.getFragments()[1].string(), equalTo(\"<em>cow</em>\"));\n }\n+\n+ public void testFunctionScoreQueryHighlight() throws Exception {\n+ client().prepareIndex(\"test\", \"type\", \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"text\", \"brown\").endObject())\n+ .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)\n+ .get();\n+\n+ SearchResponse searchResponse = client().prepareSearch()\n+ .setQuery(new FunctionScoreQueryBuilder(QueryBuilders.prefixQuery(\"text\", \"bro\")))\n+ .highlighter(new HighlightBuilder()\n+ .field(new Field(\"text\")))\n+ .get();\n+ assertHitCount(searchResponse, 1);\n+ HighlightField field = searchResponse.getHits().getAt(0).highlightFields().get(\"text\");\n+ 
assertThat(field.getFragments().length, equalTo(1));\n+ assertThat(field.getFragments()[0].string(), equalTo(\"<em>brown</em>\"));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java", "status": "modified" } ] }
{ "body": "Start two nodes of Elasticsearch, intentionally binding to the same port:\n\n```\n$ bin/elasticsearch -E transport.tcp.port=9300 -E node.max_local_storage_nodes=2\n```\n\nWait for this instance to start, then from another terminal:\n\n```\n$ bin/elasticsearch -E transport.tcp.port=9300 -E node.max_local_storage_nodes=2\n```\n\nThe second node will fail to bind, as expected. However, instead of displaying an already bound exception, Log4j fails in the presence of the security manager and loses the original exception instead producing:\n\n```\n2016-09-02 13:46:33,968 main ERROR An exception occurred processing Appender rolling java.security.AccessControlException: access denied (\"java.lang.RuntimePermission\" \"accessClassInPackage.sun.nio.ch\")\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)\n at java.security.AccessController.checkPermission(AccessController.java:884)\n at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)\n at java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1564)\n at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:311)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:357)\n at java.lang.Class.forName0(Native Method)\n at java.lang.Class.forName(Class.java:264)\n at org.apache.logging.log4j.util.LoaderUtil.loadClass(LoaderUtil.java:122)\n at org.apache.logging.log4j.core.util.Loader.loadClass(Loader.java:228)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.loadClass(ThrowableProxy.java:496)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.toExtendedStackTrace(ThrowableProxy.java:617)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:163)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:138)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:117)\n at org.apache.logging.log4j.core.impl.MutableLogEvent.getThrownProxy(MutableLogEvent.java:314)\n at org.apache.logging.log4j.core.pattern.ExtendedThrowablePatternConverter.format(ExtendedThrowablePatternConverter.java:61)\n at org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:38)\n at org.apache.logging.log4j.core.layout.PatternLayout$PatternSerializer.toSerializable(PatternLayout.java:294)\n at org.apache.logging.log4j.core.layout.PatternLayout.toText(PatternLayout.java:195)\n at org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:180)\n at org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:57)\n at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:120)\n at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:113)\n at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:104)\n at org.apache.logging.log4j.core.appender.RollingFileAppender.append(RollingFileAppender.java:86)\n at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:155)\n at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:128)\n at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:119)\n at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)\n at 
org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:390)\n at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:375)\n at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:359)\n at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:349)\n at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63)\n at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:146)\n at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:1988)\n at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1960)\n at org.apache.logging.log4j.spi.AbstractLogger.error(AbstractLogger.java:733)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:281)\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:100)\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:95)\n at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:88)\n at org.elasticsearch.cli.Command.main(Command.java:54)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:74)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)\n```\n\nThis is due to Log4j attempting to load a class that it does not have the permissions to load and this exception goes uncaught.\n\nThis is not the only place that this occurs. Effectively, it can occur anywhere the native-like code is executing (think networking, filesystem access, etc.) and an exception is thrown (there are reports of four different examples of this already, the above is just the simplest reproduction).\n", "comments": [ { "body": "I will submit a patch to Log4j for this. Depending on their release plans, I will add a hack to get around this.\n", "created_at": "2016-09-02T17:50:13Z" }, { "body": "I submitted https://issues.apache.org/jira/browse/LOG4J2-1560 to Log4j.\n", "created_at": "2016-09-02T18:12:38Z" }, { "body": "@jasontedor do we have a workaround for this? Maybe we can remove the blocker label if that is the case?\n", "created_at": "2016-10-07T14:13:37Z" } ], "number": 20304, "title": "Log4j can lose critical exceptions" }
{ "body": "Log4j has a bug where on shutdown it ignores that its use of JMX might\nbe disabled. This leads to Log4j attempting to access JMX leading to a\nsecurity exception. This pull request intentionally introduces jar hell\nwith the Server class to work around this bug until a fix is a\nreleased. We also move Log4j shutdown from Node#stop to Node#close as\nthe former was too early (but was masked by the above Log4j issue) given\nthat we also try to log in Node#close.\n\nRelates #20304\n", "number": 20389, "review_comments": [ { "body": "maybe we can also link to the issue / version this affects here?\n", "created_at": "2016-09-09T13:33:33Z" }, { "body": "There's a [link](https://github.com/elastic/elasticsearch/pull/20389/files#diff-cf110840ff1265b6b449a57fc3f34099R285) the jar hell exemption, is that sufficient?\n", "created_at": "2016-09-09T14:27:15Z" }, { "body": "yes\n", "created_at": "2016-09-09T14:36:43Z" } ], "title": "Logging shutdown hack" }
{ "commits": [ { "message": "Prepare for Log4j hack\n\nThis commit copies Server from Log4j and exempts Server and its inner\nclasses from jar hell checks. This is to prepare for a hack to work\naround a bug in Log4j." }, { "message": "Hack around Log4j bug on shutdown\n\nLog4j has a bug where on shutdown it ignores that JMX might be disabled;\nsince it does not respect this on shutdown, it proceeds to attempt to\naccess JMX leading to a security exception that should have otherwise\nnot occurred had it respected that JMX is disabled. This commit\nintentionally introduces jar hell with the Server class to work around\nthis bug until a fix is released." }, { "message": "Move Log4j shutdown\n\nThis commit moves the Log4j shutdown from Node#stop to Node#close as the\nformer was too early given that we also try to log in Node#close." }, { "message": "Merge branch 'master' into logging-shutdown-hack\n\n* master:\n Disable console logging\n [TEST] make BaseXContentTestCase platform independent (bis)\n Update search-template.asciidoc\n Added warning messages about the dangers of pathological regexes to: * pattern-replace charfilter * pattern-capture and pattern-replace token filters * pattern tokenizer * pattern analyzer\n Update painless.asciidoc\n [TEST] make BaseXContentTestCase platform independent\n Remove FORCE version_type\n id to name\n Add \"version\" field to Templates\n Remove unreleased version, these versons should be added once they are released\n Remove allow unquoted JSON\n Remove assertion for cluster name in data path\n Use a comment block to comment out release notes\n Bumped doc versions to 6.0.0-alpha1\n Add breaking changes for 6.0" }, { "message": "Add test for Log4j shutdown hack\n\nThis commit adds a test that the Log4j shutdown hack successfully\nprevents Log4j from trying to create an MBean server, which requires a\npermission that we do not grant." }, { "message": "Add forbidden APIs suppression to Log4j hack\n\nThis commit adds a forbidden APIs suppression to the Log4j Server class\nhack as the only modification to this class is to work around the bug in\nthe class for when Log4j's usage of JMX is disabled." } ], "files": [ { "diff": "@@ -0,0 +1,392 @@\n+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one or more\n+ * contributor license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright ownership.\n+ * The ASF licenses this file to You under the Apache license, Version 2.0\n+ * (the \"License\"); you may not use this file except in compliance with\n+ * the License. 
You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the license for the specific language governing permissions and\n+ * limitations under the license.\n+ */\n+package org.apache.logging.log4j.core.jmx;\n+\n+import java.lang.management.ManagementFactory;\n+import java.util.List;\n+import java.util.Map;\n+import java.util.Set;\n+import java.util.concurrent.Executor;\n+import java.util.concurrent.ExecutorService;\n+import java.util.concurrent.Executors;\n+\n+import javax.management.InstanceAlreadyExistsException;\n+import javax.management.MBeanRegistrationException;\n+import javax.management.MBeanServer;\n+import javax.management.NotCompliantMBeanException;\n+import javax.management.ObjectName;\n+\n+import org.apache.logging.log4j.LogManager;\n+import org.apache.logging.log4j.core.Appender;\n+import org.apache.logging.log4j.core.LoggerContext;\n+import org.apache.logging.log4j.core.appender.AsyncAppender;\n+import org.apache.logging.log4j.core.async.AsyncLoggerConfig;\n+import org.apache.logging.log4j.core.async.AsyncLoggerContext;\n+import org.apache.logging.log4j.core.async.DaemonThreadFactory;\n+import org.apache.logging.log4j.core.config.LoggerConfig;\n+import org.apache.logging.log4j.core.impl.Log4jContextFactory;\n+import org.apache.logging.log4j.core.selector.ContextSelector;\n+import org.apache.logging.log4j.core.util.Constants;\n+import org.apache.logging.log4j.spi.LoggerContextFactory;\n+import org.apache.logging.log4j.status.StatusLogger;\n+import org.apache.logging.log4j.util.PropertiesUtil;\n+import org.elasticsearch.common.SuppressForbidden;\n+\n+/**\n+ * Creates MBeans to instrument various classes in the log4j class hierarchy.\n+ * <p>\n+ * All instrumentation for Log4j 2 classes can be disabled by setting system property {@code -Dlog4j2.disable.jmx=true}.\n+ * </p>\n+ */\n+@SuppressForbidden(reason = \"copied class to hack around Log4j bug\")\n+public final class Server {\n+\n+ /**\n+ * The domain part, or prefix ({@value}) of the {@code ObjectName} of all MBeans that instrument Log4J2 components.\n+ */\n+ public static final String DOMAIN = \"org.apache.logging.log4j2\";\n+ private static final String PROPERTY_DISABLE_JMX = \"log4j2.disable.jmx\";\n+ private static final String PROPERTY_ASYNC_NOTIF = \"log4j2.jmx.notify.async\";\n+ private static final String THREAD_NAME_PREFIX = \"log4j2.jmx.notif\";\n+ private static final StatusLogger LOGGER = StatusLogger.getLogger();\n+ static final Executor executor = isJmxDisabled() ? null : createExecutor();\n+\n+ private Server() {\n+ }\n+\n+ /**\n+ * Returns either a {@code null} Executor (causing JMX notifications to be sent from the caller thread) or a daemon\n+ * background thread Executor, depending on the value of system property \"log4j2.jmx.notify.async\". If this\n+ * property is not set, use a {@code null} Executor for web apps to avoid memory leaks and other issues when the\n+ * web app is restarted.\n+ * @see <a href=\"https://issues.apache.org/jira/browse/LOG4J2-938\">LOG4J2-938</a>\n+ */\n+ private static ExecutorService createExecutor() {\n+ final boolean defaultAsync = !Constants.IS_WEB_APP;\n+ final boolean async = PropertiesUtil.getProperties().getBooleanProperty(PROPERTY_ASYNC_NOTIF, defaultAsync);\n+ return async ? 
Executors.newFixedThreadPool(1, new DaemonThreadFactory(THREAD_NAME_PREFIX)) : null;\n+ }\n+\n+ /**\n+ * Either returns the specified name as is, or returns a quoted value containing the specified name with the special\n+ * characters (comma, equals, colon, quote, asterisk, or question mark) preceded with a backslash.\n+ *\n+ * @param name the name to escape so it can be used as a value in an {@link ObjectName}.\n+ * @return the escaped name\n+ */\n+ public static String escape(final String name) {\n+ final StringBuilder sb = new StringBuilder(name.length() * 2);\n+ boolean needsQuotes = false;\n+ for (int i = 0; i < name.length(); i++) {\n+ final char c = name.charAt(i);\n+ switch (c) {\n+ case '\\\\':\n+ case '*':\n+ case '?':\n+ case '\\\"':\n+ // quote, star, question & backslash must be escaped\n+ sb.append('\\\\');\n+ needsQuotes = true; // ... and can only appear in quoted value\n+ break;\n+ case ',':\n+ case '=':\n+ case ':':\n+ // no need to escape these, but value must be quoted\n+ needsQuotes = true;\n+ break;\n+ case '\\r':\n+ // drop \\r characters: \\\\r gives \"invalid escape sequence\"\n+ continue;\n+ case '\\n':\n+ // replace \\n characters with \\\\n sequence\n+ sb.append(\"\\\\n\");\n+ needsQuotes = true;\n+ continue;\n+ }\n+ sb.append(c);\n+ }\n+ if (needsQuotes) {\n+ sb.insert(0, '\\\"');\n+ sb.append('\\\"');\n+ }\n+ return sb.toString();\n+ }\n+\n+ private static boolean isJmxDisabled() {\n+ return PropertiesUtil.getProperties().getBooleanProperty(PROPERTY_DISABLE_JMX);\n+ }\n+\n+ public static void reregisterMBeansAfterReconfigure() {\n+ // avoid creating Platform MBean Server if JMX disabled\n+ if (isJmxDisabled()) {\n+ LOGGER.debug(\"JMX disabled for log4j2. Not registering MBeans.\");\n+ return;\n+ }\n+ final MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();\n+ reregisterMBeansAfterReconfigure(mbs);\n+ }\n+\n+ public static void reregisterMBeansAfterReconfigure(final MBeanServer mbs) {\n+ if (isJmxDisabled()) {\n+ LOGGER.debug(\"JMX disabled for log4j2. Not registering MBeans.\");\n+ return;\n+ }\n+\n+ // now provide instrumentation for the newly configured\n+ // LoggerConfigs and Appenders\n+ try {\n+ final ContextSelector selector = getContextSelector();\n+ if (selector == null) {\n+ LOGGER.debug(\"Could not register MBeans: no ContextSelector found.\");\n+ return;\n+ }\n+ LOGGER.trace(\"Reregistering MBeans after reconfigure. 
Selector={}\", selector);\n+ final List<LoggerContext> contexts = selector.getLoggerContexts();\n+ int i = 0;\n+ for (final LoggerContext ctx : contexts) {\n+ LOGGER.trace(\"Reregistering context ({}/{}): '{}' {}\", ++i, contexts.size(), ctx.getName(), ctx);\n+ // first unregister the context and all nested loggers,\n+ // appenders, statusLogger, contextSelector, ringbuffers...\n+ unregisterLoggerContext(ctx.getName(), mbs);\n+\n+ final LoggerContextAdmin mbean = new LoggerContextAdmin(ctx, executor);\n+ register(mbs, mbean, mbean.getObjectName());\n+\n+ if (ctx instanceof AsyncLoggerContext) {\n+ final RingBufferAdmin rbmbean = ((AsyncLoggerContext) ctx).createRingBufferAdmin();\n+ if (rbmbean.getBufferSize() > 0) {\n+ // don't register if Disruptor not started (DefaultConfiguration: config not found)\n+ register(mbs, rbmbean, rbmbean.getObjectName());\n+ }\n+ }\n+\n+ // register the status logger and the context selector\n+ // repeatedly\n+ // for each known context: if one context is unregistered,\n+ // these MBeans should still be available for the other\n+ // contexts.\n+ registerStatusLogger(ctx.getName(), mbs, executor);\n+ registerContextSelector(ctx.getName(), selector, mbs, executor);\n+\n+ registerLoggerConfigs(ctx, mbs, executor);\n+ registerAppenders(ctx, mbs, executor);\n+ }\n+ } catch (final Exception ex) {\n+ LOGGER.error(\"Could not register mbeans\", ex);\n+ }\n+ }\n+\n+ /**\n+ * Unregister all log4j MBeans from the platform MBean server.\n+ */\n+ public static void unregisterMBeans() {\n+ if (isJmxDisabled()) {\n+ LOGGER.debug(\"JMX disabled for Log4j2. Not unregistering MBeans.\");\n+ return;\n+ }\n+ final MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();\n+ unregisterMBeans(mbs);\n+ }\n+\n+ /**\n+ * Unregister all log4j MBeans from the specified MBean server.\n+ *\n+ * @param mbs the MBean server to unregister from.\n+ */\n+ public static void unregisterMBeans(final MBeanServer mbs) {\n+ unregisterStatusLogger(\"*\", mbs);\n+ unregisterContextSelector(\"*\", mbs);\n+ unregisterContexts(mbs);\n+ unregisterLoggerConfigs(\"*\", mbs);\n+ unregisterAsyncLoggerRingBufferAdmins(\"*\", mbs);\n+ unregisterAsyncLoggerConfigRingBufferAdmins(\"*\", mbs);\n+ unregisterAppenders(\"*\", mbs);\n+ unregisterAsyncAppenders(\"*\", mbs);\n+ }\n+\n+ /**\n+ * Returns the {@code ContextSelector} of the current {@code Log4jContextFactory}.\n+ *\n+ * @return the {@code ContextSelector} of the current {@code Log4jContextFactory}\n+ */\n+ private static ContextSelector getContextSelector() {\n+ final LoggerContextFactory factory = LogManager.getFactory();\n+ if (factory instanceof Log4jContextFactory) {\n+ final ContextSelector selector = ((Log4jContextFactory) factory).getSelector();\n+ return selector;\n+ }\n+ return null;\n+ }\n+\n+ /**\n+ * Unregisters all MBeans associated with the specified logger context (including MBeans for {@code LoggerConfig}s\n+ * and {@code Appender}s from the platform MBean server.\n+ *\n+ * @param loggerContextName name of the logger context to unregister\n+ */\n+ public static void unregisterLoggerContext(final String loggerContextName) {\n+ if (isJmxDisabled()) {\n+ LOGGER.debug(\"JMX disabled for Log4j2. 
Not unregistering MBeans.\");\n+ return;\n+ }\n+ final MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();\n+ unregisterLoggerContext(loggerContextName, mbs);\n+ }\n+\n+ /**\n+ * Unregisters all MBeans associated with the specified logger context (including MBeans for {@code LoggerConfig}s\n+ * and {@code Appender}s from the platform MBean server.\n+ *\n+ * @param contextName name of the logger context to unregister\n+ * @param mbs the MBean Server to unregister the instrumented objects from\n+ */\n+ public static void unregisterLoggerContext(final String contextName, final MBeanServer mbs) {\n+ final String pattern = LoggerContextAdminMBean.PATTERN;\n+ final String search = String.format(pattern, escape(contextName), \"*\");\n+ unregisterAllMatching(search, mbs); // unregister context mbean\n+\n+ // now unregister all MBeans associated with this logger context\n+ unregisterStatusLogger(contextName, mbs);\n+ unregisterContextSelector(contextName, mbs);\n+ unregisterLoggerConfigs(contextName, mbs);\n+ unregisterAppenders(contextName, mbs);\n+ unregisterAsyncAppenders(contextName, mbs);\n+ unregisterAsyncLoggerRingBufferAdmins(contextName, mbs);\n+ unregisterAsyncLoggerConfigRingBufferAdmins(contextName, mbs);\n+ }\n+\n+ private static void registerStatusLogger(final String contextName, final MBeanServer mbs, final Executor executor)\n+ throws InstanceAlreadyExistsException, MBeanRegistrationException, NotCompliantMBeanException {\n+\n+ final StatusLoggerAdmin mbean = new StatusLoggerAdmin(contextName, executor);\n+ register(mbs, mbean, mbean.getObjectName());\n+ }\n+\n+ private static void registerContextSelector(final String contextName, final ContextSelector selector,\n+ final MBeanServer mbs, final Executor executor) throws InstanceAlreadyExistsException,\n+ MBeanRegistrationException, NotCompliantMBeanException {\n+\n+ final ContextSelectorAdmin mbean = new ContextSelectorAdmin(contextName, selector);\n+ register(mbs, mbean, mbean.getObjectName());\n+ }\n+\n+ private static void unregisterStatusLogger(final String contextName, final MBeanServer mbs) {\n+ final String pattern = StatusLoggerAdminMBean.PATTERN;\n+ final String search = String.format(pattern, escape(contextName), \"*\");\n+ unregisterAllMatching(search, mbs);\n+ }\n+\n+ private static void unregisterContextSelector(final String contextName, final MBeanServer mbs) {\n+ final String pattern = ContextSelectorAdminMBean.PATTERN;\n+ final String search = String.format(pattern, escape(contextName), \"*\");\n+ unregisterAllMatching(search, mbs);\n+ }\n+\n+ private static void unregisterLoggerConfigs(final String contextName, final MBeanServer mbs) {\n+ final String pattern = LoggerConfigAdminMBean.PATTERN;\n+ final String search = String.format(pattern, escape(contextName), \"*\");\n+ unregisterAllMatching(search, mbs);\n+ }\n+\n+ private static void unregisterContexts(final MBeanServer mbs) {\n+ final String pattern = LoggerContextAdminMBean.PATTERN;\n+ final String search = String.format(pattern, \"*\");\n+ unregisterAllMatching(search, mbs);\n+ }\n+\n+ private static void unregisterAppenders(final String contextName, final MBeanServer mbs) {\n+ final String pattern = AppenderAdminMBean.PATTERN;\n+ final String search = String.format(pattern, escape(contextName), \"*\");\n+ unregisterAllMatching(search, mbs);\n+ }\n+\n+ private static void unregisterAsyncAppenders(final String contextName, final MBeanServer mbs) {\n+ final String pattern = AsyncAppenderAdminMBean.PATTERN;\n+ final String search = String.format(pattern, 
escape(contextName), \"*\");\n+ unregisterAllMatching(search, mbs);\n+ }\n+\n+ private static void unregisterAsyncLoggerRingBufferAdmins(final String contextName, final MBeanServer mbs) {\n+ final String pattern1 = RingBufferAdminMBean.PATTERN_ASYNC_LOGGER;\n+ final String search1 = String.format(pattern1, escape(contextName));\n+ unregisterAllMatching(search1, mbs);\n+ }\n+\n+ private static void unregisterAsyncLoggerConfigRingBufferAdmins(final String contextName, final MBeanServer mbs) {\n+ final String pattern2 = RingBufferAdminMBean.PATTERN_ASYNC_LOGGER_CONFIG;\n+ final String search2 = String.format(pattern2, escape(contextName), \"*\");\n+ unregisterAllMatching(search2, mbs);\n+ }\n+\n+ private static void unregisterAllMatching(final String search, final MBeanServer mbs) {\n+ try {\n+ final ObjectName pattern = new ObjectName(search);\n+ final Set<ObjectName> found = mbs.queryNames(pattern, null);\n+ if (found.isEmpty()) {\n+ LOGGER.trace(\"Unregistering but no MBeans found matching '{}'\", search);\n+ } else {\n+ LOGGER.trace(\"Unregistering {} MBeans: {}\", found.size(), found);\n+ }\n+ for (final ObjectName objectName : found) {\n+ mbs.unregisterMBean(objectName);\n+ }\n+ } catch (final Exception ex) {\n+ LOGGER.error(\"Could not unregister MBeans for \" + search, ex);\n+ }\n+ }\n+\n+ private static void registerLoggerConfigs(final LoggerContext ctx, final MBeanServer mbs, final Executor executor)\n+ throws InstanceAlreadyExistsException, MBeanRegistrationException, NotCompliantMBeanException {\n+\n+ final Map<String, LoggerConfig> map = ctx.getConfiguration().getLoggers();\n+ for (final String name : map.keySet()) {\n+ final LoggerConfig cfg = map.get(name);\n+ final LoggerConfigAdmin mbean = new LoggerConfigAdmin(ctx, cfg);\n+ register(mbs, mbean, mbean.getObjectName());\n+\n+ if (cfg instanceof AsyncLoggerConfig) {\n+ final AsyncLoggerConfig async = (AsyncLoggerConfig) cfg;\n+ final RingBufferAdmin rbmbean = async.createRingBufferAdmin(ctx.getName());\n+ register(mbs, rbmbean, rbmbean.getObjectName());\n+ }\n+ }\n+ }\n+\n+ private static void registerAppenders(final LoggerContext ctx, final MBeanServer mbs, final Executor executor)\n+ throws InstanceAlreadyExistsException, MBeanRegistrationException, NotCompliantMBeanException {\n+\n+ final Map<String, Appender> map = ctx.getConfiguration().getAppenders();\n+ for (final String name : map.keySet()) {\n+ final Appender appender = map.get(name);\n+\n+ if (appender instanceof AsyncAppender) {\n+ final AsyncAppender async = ((AsyncAppender) appender);\n+ final AsyncAppenderAdmin mbean = new AsyncAppenderAdmin(ctx.getName(), async);\n+ register(mbs, mbean, mbean.getObjectName());\n+ } else {\n+ final AppenderAdmin mbean = new AppenderAdmin(ctx.getName(), appender);\n+ register(mbs, mbean, mbean.getObjectName());\n+ }\n+ }\n+ }\n+\n+ private static void register(final MBeanServer mbs, final Object mbean, final ObjectName objectName)\n+ throws InstanceAlreadyExistsException, MBeanRegistrationException, NotCompliantMBeanException {\n+ LOGGER.debug(\"Registering MBean {}\", objectName);\n+ mbs.registerMBean(mbean, objectName);\n+ }\n+}", "filename": "core/src/main/java/org/apache/logging/log4j/core/jmx/Server.java", "status": "added" }, { "diff": "@@ -279,6 +279,12 @@ static void checkClass(Map<String,Path> clazzes, String clazz, Path jarpath) {\n * cf. 
https://issues.apache.org/jira/browse/LOG4J2-1560\n */\n return;\n+ } else if (clazz.startsWith(\"org.apache.logging.log4j.core.jmx.Server\")) {\n+ /*\n+ * deliberate to hack around a bug in Log4j\n+ * cf. https://issues.apache.org/jira/browse/LOG4J2-1506\n+ */\n+ return;\n }\n throw new IllegalStateException(\"jar hell!\" + System.lineSeparator() +\n \"class: \" + clazz + System.lineSeparator() +", "filename": "core/src/main/java/org/elasticsearch/bootstrap/JarHell.java", "status": "modified" }, { "diff": "@@ -600,24 +600,6 @@ private Node stop() {\n injector.getInstance(IndicesService.class).stop();\n logger.info(\"stopped\");\n \n- final String log4jShutdownEnabled = System.getProperty(\"es.log4j.shutdownEnabled\", \"true\");\n- final boolean shutdownEnabled;\n- switch (log4jShutdownEnabled) {\n- case \"true\":\n- shutdownEnabled = true;\n- break;\n- case \"false\":\n- shutdownEnabled = false;\n- break;\n- default:\n- throw new IllegalArgumentException(\n- \"invalid value for [es.log4j.shutdownEnabled], was [\" + log4jShutdownEnabled + \"] but must be [true] or [false]\");\n- }\n- if (shutdownEnabled) {\n- LoggerContext context = (LoggerContext) LogManager.getContext(false);\n- Configurator.shutdown(context);\n- }\n-\n return this;\n }\n \n@@ -709,6 +691,24 @@ public synchronized void close() throws IOException {\n }\n IOUtils.close(toClose);\n logger.info(\"closed\");\n+\n+ final String log4jShutdownEnabled = System.getProperty(\"es.log4j.shutdownEnabled\", \"true\");\n+ final boolean shutdownEnabled;\n+ switch (log4jShutdownEnabled) {\n+ case \"true\":\n+ shutdownEnabled = true;\n+ break;\n+ case \"false\":\n+ shutdownEnabled = false;\n+ break;\n+ default:\n+ throw new IllegalArgumentException(\n+ \"invalid value for [es.log4j.shutdownEnabled], was [\" + log4jShutdownEnabled + \"] but must be [true] or [false]\");\n+ }\n+ if (shutdownEnabled) {\n+ LoggerContext context = (LoggerContext) LogManager.getContext(false);\n+ Configurator.shutdown(context);\n+ }\n }\n \n ", "filename": "core/src/main/java/org/elasticsearch/node/Node.java", "status": "modified" }, { "diff": "@@ -117,6 +117,12 @@ public void testLog4jThrowableProxyLeniency() throws Exception {\n JarHell.checkJarHell(jars);\n }\n \n+ public void testLog4jServerLeniency() throws Exception {\n+ Path dir = createTempDir();\n+ URL[] jars = {makeJar(dir, \"foo.jar\", null, \"org.apache.logging.log4j.core.jmx.Server.class\"), makeJar(dir, \"bar.jar\", null, \"org.apache.logging.log4j.core.jmx.Server.class\")};\n+ JarHell.checkJarHell(jars);\n+ }\n+\n public void testWithinSingleJar() throws Exception {\n // the java api for zip file does not allow creating duplicate entries (good!) 
so\n // this bogus jar had to be constructed with ant", "filename": "core/src/test/java/org/elasticsearch/bootstrap/JarHellTests.java", "status": "modified" }, { "diff": "@@ -27,21 +27,24 @@\n import org.apache.logging.log4j.core.appender.ConsoleAppender;\n import org.apache.logging.log4j.core.appender.CountingNoOpAppender;\n import org.apache.logging.log4j.core.config.Configurator;\n-import org.elasticsearch.Version;\n import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.test.hamcrest.RegexMatcher;\n \n+import javax.management.MBeanServerPermission;\n+\n import java.io.IOException;\n import java.nio.file.Files;\n import java.nio.file.Path;\n+import java.security.AccessControlException;\n+import java.security.Permission;\n import java.util.List;\n+import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.regex.Matcher;\n import java.util.regex.Pattern;\n \n-import static com.vividsolutions.jts.geom.Dimension.L;\n import static org.hamcrest.Matchers.equalTo;\n \n public class EvilLoggerTests extends ESTestCase {\n@@ -121,4 +124,45 @@ public void testFindAppender() {\n assertThat(countingNoOpAppender.getName(), equalTo(\"counting_no_op\"));\n }\n \n+ public void testLog4jShutdownHack() {\n+ final AtomicBoolean denied = new AtomicBoolean();\n+ final SecurityManager sm = System.getSecurityManager();\n+ try {\n+ System.setSecurityManager(new SecurityManager() {\n+ @Override\n+ public void checkPermission(Permission perm) {\n+ if (perm instanceof RuntimePermission && \"setSecurityManager\".equals(perm.getName())) {\n+ // so we can restore the security manager at the end of the test\n+ return;\n+ }\n+ if (perm instanceof MBeanServerPermission && \"createMBeanServer\".equals(perm.getName())) {\n+ // without the hack in place, Log4j will try to get an MBean server which we will deny\n+ // with the hack in place, this permission should never be requested by Log4j\n+ denied.set(true);\n+ throw new AccessControlException(\"denied\");\n+ }\n+ super.checkPermission(perm);\n+ }\n+\n+ @Override\n+ public void checkPropertyAccess(String key) {\n+ // so that Log4j can check if its usage of JMX is disabled or not\n+ if (\"log4j2.disable.jmx\".equals(key)) {\n+ return;\n+ }\n+ super.checkPropertyAccess(key);\n+ }\n+ });\n+\n+ // this will trigger the bug without the hack\n+ LoggerContext context = (LoggerContext) LogManager.getContext(false);\n+ Configurator.shutdown(context);\n+\n+ // Log4j should have never requested permissions to create an MBean server\n+ assertFalse(denied.get());\n+ } finally {\n+ System.setSecurityManager(sm);\n+ }\n+ }\n+\n }", "filename": "qa/evil-tests/src/test/java/org/elasticsearch/common/logging/EvilLoggerTests.java", "status": "modified" } ] }
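The hack above only matters because Elasticsearch runs Log4j with JMX disabled via the "log4j2.disable.jmx" system property, the same property checked by the copied Server class and by the test in the diff. The snippet below is a hedged, standalone sketch of that normal usage, not Elasticsearch code: it shows the property being set before Log4j is initialised so that no MBean server is ever requested. The PR exists precisely because the unpatched Log4j ignored this property on shutdown.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hedged sketch: disable Log4j's JMX instrumentation before the first LogManager call,
// so Log4j never asks for the platform MBean server (which a restrictive security
// policy, like Elasticsearch's, would deny).
public class DisableLog4jJmxExample {
    public static void main(String[] args) {
        System.setProperty("log4j2.disable.jmx", "true"); // must be set before Log4j initialises
        Logger logger = LogManager.getLogger(DisableLog4jJmxExample.class);
        logger.info("Log4j initialised without registering any MBeans");
    }
}
```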
{ "body": "**Elasticsearch version**: 5.0.0_alpha5\n\n**Plugins installed**: x-pack 5.0.0_alpha5\n\n**JVM version**: OpenJDK Runtime Environment (build 1.8.0_101-b13)\n\n**OS version**: Linux 4.4.11-23.53.amzn1.x86_64\n\n**Description of the problem including expected versus actual behavior**:\n\nElasticsearch failed to start with the following error message:\n[WARN ][bootstrap ] [elk5a_node01] uncaught exception in thread [main]\norg.elasticsearch.bootstrap.StartupError: java.lang.IllegalArgumentException: failed to parse setting [indices.fielddata.cache.size] with value [50%] as a size in bytes: unit is missing or unrecognized\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:105)\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:96)\n at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:88)\n at org.elasticsearch.cli.Command.main(Command.java:54)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:75)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:68)\nCaused by: java.lang.IllegalArgumentException: failed to parse setting [indices.fielddata.cache.size] with value [50%] as a size in bytes: unit is missing or unrecognized\n at org.elasticsearch.common.settings.Setting.get(Setting.java:307)\n at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:272)\n at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:238)\n at org.elasticsearch.common.settings.SettingsModule.<init>(SettingsModule.java:138)\n at org.elasticsearch.node.Node.<init>(Node.java:298)\n at org.elasticsearch.node.Node.<init>(Node.java:206)\n at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:175)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:175)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:255)\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:101)\n ... 6 more\nCaused by: ElasticsearchParseException[failed to parse setting [indices.fielddata.cache.size] with value [50%] as a size in bytes: unit is missing or unrecognized]\n at org.elasticsearch.common.unit.ByteSizeValue.parseBytesSizeValue(ByteSizeValue.java:213)\n at org.elasticsearch.common.unit.ByteSizeValue.parseBytesSizeValue(ByteSizeValue.java:172)\n at org.elasticsearch.common.settings.Setting.lambda$byteSizeSetting$18(Setting.java:571)\n at org.elasticsearch.common.settings.Setting.get(Setting.java:305)\n ... 15 more\n\n**Steps to reproduce**:\n1. Add the following lines to elasticsearch.yml\n\nindices.fielddata.cache.size: \"50%\"\nindices.queries.cache.size: \"20%\"\n1. Restart elasticsearch\n", "comments": [ { "body": "I reproduced this on the current master branch. It looks like this (ability to specify the cache sizes as a percentage) might have been accidentally removed when we cut over to the new settings infrastructure.\n", "created_at": "2016-09-06T08:20:15Z" } ], "number": 20330, "title": "failed to parse setting: percentage value not recognized." }
{ "body": "During adding the new settings infrastructure the option to specify the\nsize of the filter cache as a percentage of the heap size which accidentally\nremoved. This change adds that ability back.\n\nIn addition the `Setting` class had multiple `.byteSizeSetting` methods\nwhich all except one used `ByteSizeValue.parseBytesSizeValue` to parse\nthe value. One method used `MemorySizeValue.parseBytesSizeValueOrHeapRatio`.\nThis was confusing as the way the value was parsed depended on how many\narguments were provided.\n\nThis change makes all `Setting.byteSizeSetting` methods parse the value\nthe same way using `ByteSizeValue.parseBytesSizeValue` and adds\n`Setting.memorySizeSetting` methods to parse settings that express memory\nsizes (i.e. can be absolute bytes values or percentages). Relevant settings\nhave been moved to use these new methods.\n\nCloses #20330\n", "number": 20335, "review_comments": [ { "body": "Should these two settings have the ability to specify as a percentage of heap as well as absolute bytes?\n", "created_at": "2016-09-06T09:08:39Z" }, { "body": "Can we remove the commented out code?\n", "created_at": "2016-09-06T09:09:04Z" }, { "body": "Nit: Can we reformat this by just moving everything after the equals sign to the next line with continuation indentation so that all the parameters and the `Setting.memorySizeSetting` invocation should fit on one line then?\n", "created_at": "2016-09-06T09:10:43Z" }, { "body": "Oops, I thought I had :)\n", "created_at": "2016-09-06T09:10:52Z" }, { "body": "Same thing here, move the invocation to the next line with continuation indentation so that the invocation and its parameters are together?\n", "created_at": "2016-09-06T09:11:47Z" }, { "body": "I don't think so, I think these should be bytes or size-value only.\n", "created_at": "2016-09-06T09:18:04Z" }, { "body": "can we get some javadocs here and also document what the differences are to the `byteSizeSetting` methods\n", "created_at": "2016-09-06T12:13:43Z" }, { "body": "++ to keep byteSizeSetting here\n", "created_at": "2016-09-06T12:14:22Z" }, { "body": "I'd also appreciate some tests here for these methods and also for the settings that are supposed to accept percentages\n", "created_at": "2016-09-06T12:16:36Z" }, { "body": "Not sure if these tests are better here in one place or if they should be in separate test classes for each class that defines one of the settings?\n", "created_at": "2016-09-06T13:32:07Z" } ], "title": "Fix filter cache setting to allow percentages" }
{ "commits": [ { "message": "Fix filter cache setting to allow percentages\n\nDuring adding the new settings infrastructure the option to specify the\nsize of the filter cache as a percentage of the heap size which accidentally\nremoved. This change adds that ability back.\n\nIn addition the `Setting` class had multiple `.byteSizeSetting` methods\nwhich all except one used `ByteSizeValue.parseBytesSizeValue` to parse\nthe value. One method used `MemorySizeValue.parseBytesSizeValueOrHeapRatio`.\nThis was confusing as the way the value was parsed depended on how many\narguments were provided.\n\nThis change makes all `Setting.byteSizeSetting` methods parse the value\nthe same way using `ByteSizeValue.parseBytesSizeValue` and adds\n`Setting.memorySizeSetting` methods to parse settings that express memory\nsizes (i.e. can be absolute bytes values or percentages). Relevant settings\nhave been moved to use these new methods.\n\nCloses #20330" } ], "files": [ { "diff": "@@ -551,10 +551,6 @@ public static Setting<Boolean> boolSetting(String key, Function<Settings, String\n return new Setting<>(key, defaultValueFn, Booleans::parseBooleanExact, properties);\n }\n \n- public static Setting<ByteSizeValue> byteSizeSetting(String key, String percentage, Property... properties) {\n- return new Setting<>(key, (s) -> percentage, (s) -> MemorySizeValue.parseBytesSizeValueOrHeapRatio(s, key), properties);\n- }\n-\n public static Setting<ByteSizeValue> byteSizeSetting(String key, ByteSizeValue value, Property... properties) {\n return byteSizeSetting(key, (s) -> value.toString(), properties);\n }\n@@ -591,6 +587,49 @@ public static ByteSizeValue parseByteSize(String s, ByteSizeValue minValue, Byte\n return value;\n }\n \n+ /**\n+ * Creates a setting which specifies a memory size. This can either be\n+ * specified as an absolute bytes value or as a percentage of the heap\n+ * memory.\n+ * \n+ * @param key the key for the setting\n+ * @param defaultValue the default value for this setting \n+ * @param properties properties properties for this setting like scope, filtering...\n+ * @return the setting object\n+ */\n+ public static Setting<ByteSizeValue> memorySizeSetting(String key, ByteSizeValue defaultValue, Property... properties) {\n+ return memorySizeSetting(key, (s) -> defaultValue.toString(), properties);\n+ }\n+\n+\n+ /**\n+ * Creates a setting which specifies a memory size. This can either be\n+ * specified as an absolute bytes value or as a percentage of the heap\n+ * memory.\n+ * \n+ * @param key the key for the setting\n+ * @param defaultValue a function that supplies the default value for this setting \n+ * @param properties properties properties for this setting like scope, filtering...\n+ * @return the setting object\n+ */\n+ public static Setting<ByteSizeValue> memorySizeSetting(String key, Function<Settings, String> defaultValue, Property... properties) {\n+ return new Setting<>(key, defaultValue, (s) -> MemorySizeValue.parseBytesSizeValueOrHeapRatio(s, key), properties);\n+ }\n+\n+ /**\n+ * Creates a setting which specifies a memory size. 
This can either be\n+ * specified as an absolute bytes value or as a percentage of the heap\n+ * memory.\n+ * \n+ * @param key the key for the setting\n+ * @param defaultPercentage the default value of this setting as a percentage of the heap memory\n+ * @param properties properties properties for this setting like scope, filtering...\n+ * @return the setting object\n+ */\n+ public static Setting<ByteSizeValue> memorySizeSetting(String key, String defaultPercentage, Property... properties) {\n+ return new Setting<>(key, (s) -> defaultPercentage, (s) -> MemorySizeValue.parseBytesSizeValueOrHeapRatio(s, key), properties);\n+ }\n+\n public static Setting<TimeValue> positiveTimeSetting(String key, TimeValue defaultValue, Property... properties) {\n return timeSetting(key, defaultValue, TimeValue.timeValueMillis(0), properties);\n }", "filename": "core/src/main/java/org/elasticsearch/common/settings/Setting.java", "status": "modified" }, { "diff": "@@ -46,7 +46,7 @@ public class PageCacheRecycler extends AbstractComponent implements Releasable {\n public static final Setting<Type> TYPE_SETTING =\n new Setting<>(\"cache.recycler.page.type\", Type.CONCURRENT.name(), Type::parse, Property.NodeScope);\n public static final Setting<ByteSizeValue> LIMIT_HEAP_SETTING =\n- Setting.byteSizeSetting(\"cache.recycler.page.limit.heap\", \"10%\", Property.NodeScope);\n+ Setting.memorySizeSetting(\"cache.recycler.page.limit.heap\", \"10%\", Property.NodeScope);\n public static final Setting<Double> WEIGHT_BYTES_SETTING =\n Setting.doubleSetting(\"cache.recycler.page.weight.bytes\", 1d, 0d, Property.NodeScope);\n public static final Setting<Double> WEIGHT_LONG_SETTING =", "filename": "core/src/main/java/org/elasticsearch/common/util/PageCacheRecycler.java", "status": "modified" }, { "diff": "@@ -52,7 +52,8 @@\n public class IndexingMemoryController extends AbstractComponent implements IndexingOperationListener, Closeable {\n \n /** How much heap (% or bytes) we will share across all actively indexing shards on this node (default: 10%). */\n- public static final Setting<ByteSizeValue> INDEX_BUFFER_SIZE_SETTING = Setting.byteSizeSetting(\"indices.memory.index_buffer_size\", \"10%\", Property.NodeScope);\n+ public static final Setting<ByteSizeValue> INDEX_BUFFER_SIZE_SETTING = \n+ Setting.memorySizeSetting(\"indices.memory.index_buffer_size\", \"10%\", Property.NodeScope);\n \n /** Only applies when <code>indices.memory.index_buffer_size</code> is a %, to set a floor on the actual size in bytes (default: 48 MB). 
*/\n public static final Setting<ByteSizeValue> MIN_INDEX_BUFFER_SIZE_SETTING = Setting.byteSizeSetting(\"indices.memory.min_index_buffer_size\",", "filename": "core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java", "status": "modified" }, { "diff": "@@ -49,13 +49,13 @@\n \n public class IndicesQueryCache extends AbstractComponent implements QueryCache, Closeable {\n \n- public static final Setting<ByteSizeValue> INDICES_CACHE_QUERY_SIZE_SETTING = Setting.byteSizeSetting(\n- \"indices.queries.cache.size\", \"10%\", Property.NodeScope);\n- public static final Setting<Integer> INDICES_CACHE_QUERY_COUNT_SETTING = Setting.intSetting(\n- \"indices.queries.cache.count\", 10000, 1, Property.NodeScope);\n+ public static final Setting<ByteSizeValue> INDICES_CACHE_QUERY_SIZE_SETTING = \n+ Setting.memorySizeSetting(\"indices.queries.cache.size\", \"10%\", Property.NodeScope);\n+ public static final Setting<Integer> INDICES_CACHE_QUERY_COUNT_SETTING = \n+ Setting.intSetting(\"indices.queries.cache.count\", 10000, 1, Property.NodeScope);\n // enables caching on all segments instead of only the larger ones, for testing only\n- public static final Setting<Boolean> INDICES_QUERIES_CACHE_ALL_SEGMENTS_SETTING = Setting.boolSetting(\n- \"indices.queries.cache.all_segments\", false, Property.NodeScope);\n+ public static final Setting<Boolean> INDICES_QUERIES_CACHE_ALL_SEGMENTS_SETTING = \n+ Setting.boolSetting(\"indices.queries.cache.all_segments\", false, Property.NodeScope);\n \n private final LRUQueryCache cache;\n private final ShardCoreKeyMap shardKeyMap = new ShardCoreKeyMap();", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesQueryCache.java", "status": "modified" }, { "diff": "@@ -72,7 +72,7 @@ public final class IndicesRequestCache extends AbstractComponent implements Remo\n public static final Setting<Boolean> INDEX_CACHE_REQUEST_ENABLED_SETTING =\n Setting.boolSetting(\"index.requests.cache.enable\", true, Property.Dynamic, Property.IndexScope);\n public static final Setting<ByteSizeValue> INDICES_CACHE_QUERY_SIZE =\n- Setting.byteSizeSetting(\"indices.requests.cache.size\", \"1%\", Property.NodeScope);\n+ Setting.memorySizeSetting(\"indices.requests.cache.size\", \"1%\", Property.NodeScope);\n public static final Setting<TimeValue> INDICES_CACHE_QUERY_EXPIRE =\n Setting.positiveTimeSetting(\"indices.requests.cache.expire\", new TimeValue(0), Property.NodeScope);\n ", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesRequestCache.java", "status": "modified" }, { "diff": "@@ -47,24 +47,24 @@ public class HierarchyCircuitBreakerService extends CircuitBreakerService {\n private final ConcurrentMap<String, CircuitBreaker> breakers = new ConcurrentHashMap<>();\n \n public static final Setting<ByteSizeValue> TOTAL_CIRCUIT_BREAKER_LIMIT_SETTING =\n- Setting.byteSizeSetting(\"indices.breaker.total.limit\", \"70%\", Property.Dynamic, Property.NodeScope);\n+ Setting.memorySizeSetting(\"indices.breaker.total.limit\", \"70%\", Property.Dynamic, Property.NodeScope);\n \n public static final Setting<ByteSizeValue> FIELDDATA_CIRCUIT_BREAKER_LIMIT_SETTING =\n- Setting.byteSizeSetting(\"indices.breaker.fielddata.limit\", \"60%\", Property.Dynamic, Property.NodeScope);\n+ Setting.memorySizeSetting(\"indices.breaker.fielddata.limit\", \"60%\", Property.Dynamic, Property.NodeScope);\n public static final Setting<Double> FIELDDATA_CIRCUIT_BREAKER_OVERHEAD_SETTING =\n Setting.doubleSetting(\"indices.breaker.fielddata.overhead\", 1.03d, 0.0d, Property.Dynamic, 
Property.NodeScope);\n public static final Setting<CircuitBreaker.Type> FIELDDATA_CIRCUIT_BREAKER_TYPE_SETTING =\n new Setting<>(\"indices.breaker.fielddata.type\", \"memory\", CircuitBreaker.Type::parseValue, Property.NodeScope);\n \n public static final Setting<ByteSizeValue> REQUEST_CIRCUIT_BREAKER_LIMIT_SETTING =\n- Setting.byteSizeSetting(\"indices.breaker.request.limit\", \"60%\", Property.Dynamic, Property.NodeScope);\n+ Setting.memorySizeSetting(\"indices.breaker.request.limit\", \"60%\", Property.Dynamic, Property.NodeScope);\n public static final Setting<Double> REQUEST_CIRCUIT_BREAKER_OVERHEAD_SETTING =\n Setting.doubleSetting(\"indices.breaker.request.overhead\", 1.0d, 0.0d, Property.Dynamic, Property.NodeScope);\n public static final Setting<CircuitBreaker.Type> REQUEST_CIRCUIT_BREAKER_TYPE_SETTING =\n new Setting<>(\"indices.breaker.request.type\", \"memory\", CircuitBreaker.Type::parseValue, Property.NodeScope);\n \n public static final Setting<ByteSizeValue> IN_FLIGHT_REQUESTS_CIRCUIT_BREAKER_LIMIT_SETTING =\n- Setting.byteSizeSetting(\"network.breaker.inflight_requests.limit\", \"100%\", Property.Dynamic, Property.NodeScope);\n+ Setting.memorySizeSetting(\"network.breaker.inflight_requests.limit\", \"100%\", Property.Dynamic, Property.NodeScope);\n public static final Setting<Double> IN_FLIGHT_REQUESTS_CIRCUIT_BREAKER_OVERHEAD_SETTING =\n Setting.doubleSetting(\"network.breaker.inflight_requests.overhead\", 1.0d, 0.0d, Property.Dynamic, Property.NodeScope);\n public static final Setting<CircuitBreaker.Type> IN_FLIGHT_REQUESTS_CIRCUIT_BREAKER_TYPE_SETTING =", "filename": "core/src/main/java/org/elasticsearch/indices/breaker/HierarchyCircuitBreakerService.java", "status": "modified" }, { "diff": "@@ -51,7 +51,7 @@\n public class IndicesFieldDataCache extends AbstractComponent implements RemovalListener<IndicesFieldDataCache.Key, Accountable>, Releasable{\n \n public static final Setting<ByteSizeValue> INDICES_FIELDDATA_CACHE_SIZE_KEY =\n- Setting.byteSizeSetting(\"indices.fielddata.cache.size\", new ByteSizeValue(-1), Property.NodeScope);\n+ Setting.memorySizeSetting(\"indices.fielddata.cache.size\", new ByteSizeValue(-1), Property.NodeScope);\n private final IndexFieldDataCache.Listener indicesFieldDataCacheListener;\n private final Cache<Key, Accountable> cache;\n ", "filename": "core/src/main/java/org/elasticsearch/indices/fielddata/cache/IndicesFieldDataCache.java", "status": "modified" }, { "diff": "@@ -54,9 +54,9 @@ public class FsRepository extends BlobStoreRepository {\n public static final Setting<String> REPOSITORIES_LOCATION_SETTING =\n new Setting<>(\"repositories.fs.location\", LOCATION_SETTING, Function.identity(), Property.NodeScope);\n public static final Setting<ByteSizeValue> CHUNK_SIZE_SETTING =\n- Setting.byteSizeSetting(\"chunk_size\", \"-1\", Property.NodeScope);\n+ Setting.byteSizeSetting(\"chunk_size\", new ByteSizeValue(-1), Property.NodeScope);\n public static final Setting<ByteSizeValue> REPOSITORIES_CHUNK_SIZE_SETTING =\n- Setting.byteSizeSetting(\"repositories.fs.chunk_size\", \"-1\", Property.NodeScope);\n+ Setting.byteSizeSetting(\"repositories.fs.chunk_size\", new ByteSizeValue(-1), Property.NodeScope);\n public static final Setting<Boolean> COMPRESS_SETTING = Setting.boolSetting(\"compress\", false, Property.NodeScope);\n public static final Setting<Boolean> REPOSITORIES_COMPRESS_SETTING =\n Setting.boolSetting(\"repositories.fs.compress\", false, Property.NodeScope);", "filename": 
"core/src/main/java/org/elasticsearch/repositories/fs/FsRepository.java", "status": "modified" }, { "diff": "@@ -0,0 +1,88 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.settings;\n+\n+import org.elasticsearch.common.settings.Setting.Property;\n+import org.elasticsearch.common.unit.ByteSizeValue;\n+import org.elasticsearch.common.util.PageCacheRecycler;\n+import org.elasticsearch.indices.IndexingMemoryController;\n+import org.elasticsearch.indices.IndicesQueryCache;\n+import org.elasticsearch.indices.IndicesRequestCache;\n+import org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService;\n+import org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache;\n+import org.elasticsearch.monitor.jvm.JvmInfo;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import static org.hamcrest.Matchers.notNullValue;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.hasItem;\n+\n+public class MemorySizeSettingsTests extends ESTestCase {\n+\n+ public void testPageCacheLimitHeapSetting() {\n+ assertMemorySizeSetting(PageCacheRecycler.LIMIT_HEAP_SETTING, \"cache.recycler.page.limit.heap\",\n+ new ByteSizeValue((long) (JvmInfo.jvmInfo().getMem().getHeapMax().bytes() * 0.1)));\n+ }\n+\n+ public void testIndexBufferSizeSetting() {\n+ assertMemorySizeSetting(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"indices.memory.index_buffer_size\",\n+ new ByteSizeValue((long) (JvmInfo.jvmInfo().getMem().getHeapMax().bytes() * 0.1)));\n+ }\n+\n+ public void testQueryCacheSizeSetting() {\n+ assertMemorySizeSetting(IndicesQueryCache.INDICES_CACHE_QUERY_SIZE_SETTING, \"indices.queries.cache.size\",\n+ new ByteSizeValue((long) (JvmInfo.jvmInfo().getMem().getHeapMax().bytes() * 0.1)));\n+ }\n+\n+ public void testIndicesRequestCacheSetting() {\n+ assertMemorySizeSetting(IndicesRequestCache.INDICES_CACHE_QUERY_SIZE, \"indices.requests.cache.size\",\n+ new ByteSizeValue((long) (JvmInfo.jvmInfo().getMem().getHeapMax().bytes() * 0.01)));\n+ }\n+\n+ public void testCircuitBreakerSettings() {\n+ assertMemorySizeSetting(HierarchyCircuitBreakerService.TOTAL_CIRCUIT_BREAKER_LIMIT_SETTING, \"indices.breaker.total.limit\",\n+ new ByteSizeValue((long) (JvmInfo.jvmInfo().getMem().getHeapMax().bytes() * 0.7)));\n+ assertMemorySizeSetting(HierarchyCircuitBreakerService.FIELDDATA_CIRCUIT_BREAKER_LIMIT_SETTING, \"indices.breaker.fielddata.limit\",\n+ new ByteSizeValue((long) (JvmInfo.jvmInfo().getMem().getHeapMax().bytes() * 0.6)));\n+ assertMemorySizeSetting(HierarchyCircuitBreakerService.REQUEST_CIRCUIT_BREAKER_LIMIT_SETTING, \"indices.breaker.request.limit\",\n+ new ByteSizeValue((long) (JvmInfo.jvmInfo().getMem().getHeapMax().bytes() * 0.6)));\n+ 
assertMemorySizeSetting(HierarchyCircuitBreakerService.IN_FLIGHT_REQUESTS_CIRCUIT_BREAKER_LIMIT_SETTING,\n+ \"network.breaker.inflight_requests.limit\", new ByteSizeValue((JvmInfo.jvmInfo().getMem().getHeapMax().bytes())));\n+ }\n+\n+ public void testIndicesFieldDataCacheSetting() {\n+ assertMemorySizeSetting(IndicesFieldDataCache.INDICES_FIELDDATA_CACHE_SIZE_KEY, \"indices.fielddata.cache.size\",\n+ new ByteSizeValue(-1));\n+ }\n+\n+ private void assertMemorySizeSetting(Setting<ByteSizeValue> setting, String settingKey, ByteSizeValue defaultValue) {\n+ assertThat(setting, notNullValue());\n+ assertThat(setting.getKey(), equalTo(settingKey));\n+ assertThat(setting.getProperties(), hasItem(Property.NodeScope));\n+ assertThat(setting.getDefault(Settings.EMPTY),\n+ equalTo(defaultValue));\n+ Settings settingWithPercentage = Settings.builder().put(settingKey, \"25%\").build();\n+ assertThat(setting.get(settingWithPercentage),\n+ equalTo(new ByteSizeValue((long) (JvmInfo.jvmInfo().getMem().getHeapMax().bytes() * 0.25))));\n+ Settings settingWithBytesValue = Settings.builder().put(settingKey, \"1024b\").build();\n+ assertThat(setting.get(settingWithBytesValue), equalTo(new ByteSizeValue(1024)));\n+ }\n+\n+}", "filename": "core/src/test/java/org/elasticsearch/common/settings/MemorySizeSettingsTests.java", "status": "added" }, { "diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.common.settings.Setting.Property;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.monitor.jvm.JvmInfo;\n import org.elasticsearch.test.ESTestCase;\n \n import java.util.Arrays;\n@@ -68,6 +69,44 @@ public void testByteSize() {\n assertEquals(new ByteSizeValue(12), value.get());\n }\n \n+ public void testMemorySize() {\n+ Setting<ByteSizeValue> memorySizeValueSetting = Setting.memorySizeSetting(\"a.byte.size\", new ByteSizeValue(1024), Property.Dynamic,\n+ Property.NodeScope);\n+\n+ assertFalse(memorySizeValueSetting.isGroupSetting());\n+ ByteSizeValue memorySizeValue = memorySizeValueSetting.get(Settings.EMPTY);\n+ assertEquals(memorySizeValue.bytes(), 1024);\n+\n+ memorySizeValueSetting = Setting.memorySizeSetting(\"a.byte.size\", s -> \"2048b\", Property.Dynamic, Property.NodeScope);\n+ memorySizeValue = memorySizeValueSetting.get(Settings.EMPTY);\n+ assertEquals(memorySizeValue.bytes(), 2048);\n+\n+ memorySizeValueSetting = Setting.memorySizeSetting(\"a.byte.size\", \"50%\", Property.Dynamic, Property.NodeScope);\n+ assertFalse(memorySizeValueSetting.isGroupSetting());\n+ memorySizeValue = memorySizeValueSetting.get(Settings.EMPTY);\n+ assertEquals(memorySizeValue.bytes(), JvmInfo.jvmInfo().getMem().getHeapMax().bytes() * 0.5, 1.0);\n+\n+ memorySizeValueSetting = Setting.memorySizeSetting(\"a.byte.size\", s -> \"25%\", Property.Dynamic, Property.NodeScope);\n+ memorySizeValue = memorySizeValueSetting.get(Settings.EMPTY);\n+ assertEquals(memorySizeValue.bytes(), JvmInfo.jvmInfo().getMem().getHeapMax().bytes() * 0.25, 1.0);\n+\n+ AtomicReference<ByteSizeValue> value = new AtomicReference<>(null);\n+ ClusterSettings.SettingUpdater<ByteSizeValue> settingUpdater = memorySizeValueSetting.newUpdater(value::set, logger);\n+ try {\n+ settingUpdater.apply(Settings.builder().put(\"a.byte.size\", 12).build(), Settings.EMPTY);\n+ fail(\"no unit\");\n+ } catch (IllegalArgumentException ex) {\n+ assertEquals(\"failed to parse setting [a.byte.size] with value [12] as a size in bytes: unit is missing or unrecognized\",\n+ ex.getMessage());\n+ }\n+\n+ 
assertTrue(settingUpdater.apply(Settings.builder().put(\"a.byte.size\", \"12b\").build(), Settings.EMPTY));\n+ assertEquals(new ByteSizeValue(12), value.get());\n+\n+ assertTrue(settingUpdater.apply(Settings.builder().put(\"a.byte.size\", \"20%\").build(), Settings.EMPTY));\n+ assertEquals(new ByteSizeValue((int) (JvmInfo.jvmInfo().getMem().getHeapMax().bytes() * 0.2)), value.get());\n+ }\n+\n public void testSimpleUpdate() {\n Setting<Boolean> booleanSetting = Setting.boolSetting(\"foo.bar\", false, Property.Dynamic, Property.NodeScope);\n AtomicReference<Boolean> atomicBoolean = new AtomicReference<>(null);", "filename": "core/src/test/java/org/elasticsearch/common/settings/SettingTests.java", "status": "modified" } ] }
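A hypothetical usage of the new helper from component or plugin code follows; the setting key and default below are invented for illustration and are not part of the change above.

```java
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.ByteSizeValue;

final class MyComponentSettings {
    // Hypothetical key and default, for illustration only.
    static final Setting<ByteSizeValue> MY_CACHE_SIZE_SETTING =
        Setting.memorySizeSetting("my.component.cache.size", "5%", Property.NodeScope);

    static ByteSizeValue cacheSize(Settings settings) {
        // Accepts "5%" (resolved against the heap) as well as absolute values such as "512mb";
        // a plain byteSizeSetting would reject the percentage form.
        return MY_CACHE_SIZE_SETTING.get(settings);
    }
}
```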
{ "body": "Log4j has a bug where it does not handle a security exception that can\nbe thrown when it is rendering a stack trace. This commit intentionally\nintroduces jar hell with the ThrowableProxy class to work around this\nbug until a fix is a released.\n\nRelates #20304\n", "comments": [ { "body": "LGTM\n", "created_at": "2016-09-02T19:14:40Z" }, { "body": "left one comment LGTM otherwise\n", "created_at": "2016-09-02T19:25:44Z" } ], "number": 20306, "title": "Hack around Log4j bug rendering exceptions" }
{ "body": "We have intentionally introduced leniency for ThrowableProxy from Log4j\nto work around a bug there. Yet, a test for this introduced leniency was\nnot added. This commit introduces such a test.\n\nRelates #20306\n", "number": 20329, "review_comments": [], "title": "Add test for Log4j throwable proxy leniency" }
{ "commits": [ { "message": "Add test for Log4j throwable proxy leniency\n\nWe have intentionally introduced leniency for ThrowableProxy from Log4j\nto work around a bug there. Yet, a test for this introduced leniency was\nnot addded. This commit introduces such a test." }, { "message": "Merge branch 'master' into log4j-throwable-proxy-leniency\n\n* master:\n Remove Joda-Time jar hell exemption\n Add Gradle wrapper to gitignore" } ], "files": [ { "diff": "@@ -117,6 +117,12 @@ public void testLog4jLeniency() throws Exception {\n JarHell.checkJarHell(jars);\n }\n \n+ public void testLog4jThrowableProxyLeniency() throws Exception {\n+ Path dir = createTempDir();\n+ URL[] jars = {makeJar(dir, \"foo.jar\", null, \"org.apache.logging.log4j.core.impl.ThrowableProxy.class\"), makeJar(dir, \"bar.jar\", null, \"org.apache.logging.log4j.core.impl.ThrowableProxy.class\")};\n+ JarHell.checkJarHell(jars);\n+ }\n+\n public void testWithinSingleJar() throws Exception {\n // the java api for zip file does not allow creating duplicate entries (good!) so\n // this bogus jar had to be constructed with ant", "filename": "core/src/test/java/org/elasticsearch/bootstrap/JarHellTests.java", "status": "modified" } ] }
{ "body": "Start two nodes of Elasticsearch, intentionally binding to the same port:\n\n```\n$ bin/elasticsearch -E transport.tcp.port=9300 -E node.max_local_storage_nodes=2\n```\n\nWait for this instance to start, then from another terminal:\n\n```\n$ bin/elasticsearch -E transport.tcp.port=9300 -E node.max_local_storage_nodes=2\n```\n\nThe second node will fail to bind, as expected. However, instead of displaying an already bound exception, Log4j fails in the presence of the security manager and loses the original exception instead producing:\n\n```\n2016-09-02 13:46:33,968 main ERROR An exception occurred processing Appender rolling java.security.AccessControlException: access denied (\"java.lang.RuntimePermission\" \"accessClassInPackage.sun.nio.ch\")\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)\n at java.security.AccessController.checkPermission(AccessController.java:884)\n at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)\n at java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1564)\n at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:311)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:357)\n at java.lang.Class.forName0(Native Method)\n at java.lang.Class.forName(Class.java:264)\n at org.apache.logging.log4j.util.LoaderUtil.loadClass(LoaderUtil.java:122)\n at org.apache.logging.log4j.core.util.Loader.loadClass(Loader.java:228)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.loadClass(ThrowableProxy.java:496)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.toExtendedStackTrace(ThrowableProxy.java:617)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:163)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:138)\n at org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:117)\n at org.apache.logging.log4j.core.impl.MutableLogEvent.getThrownProxy(MutableLogEvent.java:314)\n at org.apache.logging.log4j.core.pattern.ExtendedThrowablePatternConverter.format(ExtendedThrowablePatternConverter.java:61)\n at org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:38)\n at org.apache.logging.log4j.core.layout.PatternLayout$PatternSerializer.toSerializable(PatternLayout.java:294)\n at org.apache.logging.log4j.core.layout.PatternLayout.toText(PatternLayout.java:195)\n at org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:180)\n at org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:57)\n at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:120)\n at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:113)\n at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:104)\n at org.apache.logging.log4j.core.appender.RollingFileAppender.append(RollingFileAppender.java:86)\n at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:155)\n at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:128)\n at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:119)\n at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)\n at 
org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:390)\n at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:375)\n at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:359)\n at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:349)\n at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63)\n at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:146)\n at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:1988)\n at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1960)\n at org.apache.logging.log4j.spi.AbstractLogger.error(AbstractLogger.java:733)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:281)\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:100)\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:95)\n at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:88)\n at org.elasticsearch.cli.Command.main(Command.java:54)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:74)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)\n```\n\nThis is due to Log4j attempting to load a class that it does not have the permissions to load and this exception goes uncaught.\n\nThis is not the only place that this occurs. Effectively, it can occur anywhere the native-like code is executing (think networking, filesystem access, etc.) and an exception is thrown (there are reports of four different examples of this already, the above is just the simplest reproduction).\n", "comments": [ { "body": "I will submit a patch to Log4j for this. Depending on their release plans, I will add a hack to get around this.\n", "created_at": "2016-09-02T17:50:13Z" }, { "body": "I submitted https://issues.apache.org/jira/browse/LOG4J2-1560 to Log4j.\n", "created_at": "2016-09-02T18:12:38Z" }, { "body": "@jasontedor do we have a workaround for this? Maybe we can remove the blocker label if that is the case?\n", "created_at": "2016-10-07T14:13:37Z" } ], "number": 20304, "title": "Log4j can lose critical exceptions" }
{ "body": "Log4j has a bug where it does not handle a security exception that can\nbe thrown when it is rendering a stack trace. This commit intentionally\nintroduces jar hell with the ThrowableProxy class to work around this\nbug until a fix is a released.\n\nRelates #20304\n", "number": 20306, "review_comments": [ { "body": "can we have a reference to the actual issue in here. It's just easier to reason about this with a direct link?\n", "created_at": "2016-09-02T19:25:36Z" }, { "body": "I pushed 88e91ae9310182cbc0a1384b517a6d25d9b5cd1b.\n", "created_at": "2016-09-02T22:38:23Z" } ], "title": "Hack around Log4j bug rendering exceptions" }
{ "commits": [ { "message": "Prepare for Log4j hack\n\nThis commit copies ThrowableProxy from Log4j and exempts ThrowableProxy\nand its inner classes from jar hell checks. This is to prepare for a\nhack to work around a bug in Log4j." }, { "message": "Hack around Log4j bug rendering exceptions\n\nLog4j has a bug where it does not handle a security exception that can\nbe thrown when it is rendering a stack trace. This commit intentionally\nintroduces jar hell with the ThrowableProxy class to work around this\nbug until a fix is a released." }, { "message": "Add comment for Log4j exception logging bug\n\nThis commit adds a comment to the jar hell exemption for\norg.apache.logging.log4j.core.impl.ThrowableProxy and its inner classes\ndue to an exception logging bug in Log4j." }, { "message": "Fix ThrowableProxy.java path and omit checkstyle\n\nThis commit moves ThrowableProxy.java to a path that corresponds to its\npackage name, and adds checkstyle suppressions since this class violates\nour checkstyle policy." } ], "files": [ { "diff": "@@ -10,6 +10,9 @@\n <suppress files=\"org[/\\\\]elasticsearch[/\\\\]painless[/\\\\]antlr[/\\\\]PainlessLexer\\.java\" checks=\".\" />\n <suppress files=\"org[/\\\\]elasticsearch[/\\\\]painless[/\\\\]antlr[/\\\\]PainlessParser(|BaseVisitor|Visitor)\\.java\" checks=\".\" />\n \n+ <!-- ThrowableProxy is a forked copy from Log4j to hack around a bug; this can be removed when the hack is removed -->\n+ <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]apache[/\\\\]logging[/\\\\]log4j[/\\\\]core[/\\\\]impl[/\\\\]ThrowableProxy.java\" checks=\"RegexpSinglelineJava\" />\n+\n <!-- Hopefully temporary suppression of LineLength on files that don't pass it. We should remove these when we the\n files start to pass. -->\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]apache[/\\\\]lucene[/\\\\]queries[/\\\\]BlendedTermQuery.java\" checks=\"LineLength\" />", "filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml", "status": "modified" }, { "diff": "@@ -0,0 +1,665 @@\n+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one or more\n+ * contributor license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright ownership.\n+ * The ASF licenses this file to You under the Apache license, Version 2.0\n+ * (the \"License\"); you may not use this file except in compliance with\n+ * the License. 
You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the license for the specific language governing permissions and\n+ * limitations under the license.\n+ */\n+\n+package org.apache.logging.log4j.core.impl;\n+\n+import java.io.Serializable;\n+import java.net.URL;\n+import java.security.CodeSource;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.HashMap;\n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.Map;\n+import java.util.Set;\n+import java.util.Stack;\n+\n+import org.apache.logging.log4j.core.util.Loader;\n+import org.apache.logging.log4j.status.StatusLogger;\n+import org.apache.logging.log4j.util.ReflectionUtil;\n+import org.apache.logging.log4j.util.Strings;\n+\n+/**\n+ * Wraps a Throwable to add packaging information about each stack trace element.\n+ *\n+ * <p>\n+ * A proxy is used to represent a throwable that may not exist in a different class loader or JVM. When an application\n+ * deserializes a ThrowableProxy, the throwable may not be set, but the throwable's information is preserved in other\n+ * fields of the proxy like the message and stack trace.\n+ * </p>\n+ *\n+ * <p>\n+ * TODO: Move this class to org.apache.logging.log4j.core because it is used from LogEvent.\n+ * </p>\n+ * <p>\n+ * TODO: Deserialize: Try to rebuild Throwable if the target exception is in this class loader?\n+ * </p>\n+ */\n+public class ThrowableProxy implements Serializable {\n+\n+ private static final String CAUSED_BY_LABEL = \"Caused by: \";\n+ private static final String SUPPRESSED_LABEL = \"Suppressed: \";\n+ private static final String WRAPPED_BY_LABEL = \"Wrapped by: \";\n+\n+ /**\n+ * Cached StackTracePackageElement and ClassLoader.\n+ * <p>\n+ * Consider this class private.\n+ * </p>\n+ */\n+ static class CacheEntry {\n+ private final ExtendedClassInfo element;\n+ private final ClassLoader loader;\n+\n+ public CacheEntry(final ExtendedClassInfo element, final ClassLoader loader) {\n+ this.element = element;\n+ this.loader = loader;\n+ }\n+ }\n+\n+ private static final ThrowableProxy[] EMPTY_THROWABLE_PROXY_ARRAY = new ThrowableProxy[0];\n+\n+ private static final char EOL = '\\n';\n+\n+ private static final long serialVersionUID = -2752771578252251910L;\n+\n+ private final ThrowableProxy causeProxy;\n+\n+ private int commonElementCount;\n+\n+ private final ExtendedStackTraceElement[] extendedStackTrace;\n+\n+ private final String localizedMessage;\n+\n+ private final String message;\n+\n+ private final String name;\n+\n+ private final ThrowableProxy[] suppressedProxies;\n+\n+ private final transient Throwable throwable;\n+\n+ /**\n+ * For JSON and XML IO via Jackson.\n+ */\n+ @SuppressWarnings(\"unused\")\n+ private ThrowableProxy() {\n+ this.throwable = null;\n+ this.name = null;\n+ this.extendedStackTrace = null;\n+ this.causeProxy = null;\n+ this.message = null;\n+ this.localizedMessage = null;\n+ this.suppressedProxies = EMPTY_THROWABLE_PROXY_ARRAY;\n+ }\n+\n+ /**\n+ * Constructs the wrapper for the Throwable that includes packaging data.\n+ *\n+ * @param throwable\n+ * The Throwable to wrap, must not be null.\n+ */\n+ public ThrowableProxy(final Throwable throwable) {\n+ this(throwable, null);\n+ }\n+\n+ /**\n+ * Constructs the wrapper for the Throwable that 
includes packaging data.\n+ *\n+ * @param throwable\n+ * The Throwable to wrap, must not be null.\n+ * @param visited\n+ * The set of visited suppressed exceptions.\n+ */\n+ private ThrowableProxy(final Throwable throwable, final Set<Throwable> visited) {\n+ this.throwable = throwable;\n+ this.name = throwable.getClass().getName();\n+ this.message = throwable.getMessage();\n+ this.localizedMessage = throwable.getLocalizedMessage();\n+ final Map<String, CacheEntry> map = new HashMap<>();\n+ final Stack<Class<?>> stack = ReflectionUtil.getCurrentStackTrace();\n+ this.extendedStackTrace = this.toExtendedStackTrace(stack, map, null, throwable.getStackTrace());\n+ final Throwable throwableCause = throwable.getCause();\n+ final Set<Throwable> causeVisited = new HashSet<>(1);\n+ this.causeProxy = throwableCause == null ? null : new ThrowableProxy(throwable, stack, map, throwableCause, visited, causeVisited);\n+ this.suppressedProxies = this.toSuppressedProxies(throwable, visited);\n+ }\n+\n+ /**\n+ * Constructs the wrapper for a Throwable that is referenced as the cause by another Throwable.\n+ *\n+ * @param parent\n+ * The Throwable referencing this Throwable.\n+ * @param stack\n+ * The Class stack.\n+ * @param map\n+ * The cache containing the packaging data.\n+ * @param cause\n+ * The Throwable to wrap.\n+ * @param suppressedVisited TODO\n+ * @param causeVisited TODO\n+ */\n+ private ThrowableProxy(final Throwable parent, final Stack<Class<?>> stack, final Map<String, CacheEntry> map,\n+ final Throwable cause, final Set<Throwable> suppressedVisited, final Set<Throwable> causeVisited) {\n+ causeVisited.add(cause);\n+ this.throwable = cause;\n+ this.name = cause.getClass().getName();\n+ this.message = this.throwable.getMessage();\n+ this.localizedMessage = this.throwable.getLocalizedMessage();\n+ this.extendedStackTrace = this.toExtendedStackTrace(stack, map, parent.getStackTrace(), cause.getStackTrace());\n+ final Throwable causeCause = cause.getCause();\n+ this.causeProxy = causeCause == null || causeVisited.contains(causeCause) ? 
null : new ThrowableProxy(parent,\n+ stack, map, causeCause, suppressedVisited, causeVisited);\n+ this.suppressedProxies = this.toSuppressedProxies(cause, suppressedVisited);\n+ }\n+\n+ @Override\n+ public boolean equals(final Object obj) {\n+ if (this == obj) {\n+ return true;\n+ }\n+ if (obj == null) {\n+ return false;\n+ }\n+ if (this.getClass() != obj.getClass()) {\n+ return false;\n+ }\n+ final ThrowableProxy other = (ThrowableProxy) obj;\n+ if (this.causeProxy == null) {\n+ if (other.causeProxy != null) {\n+ return false;\n+ }\n+ } else if (!this.causeProxy.equals(other.causeProxy)) {\n+ return false;\n+ }\n+ if (this.commonElementCount != other.commonElementCount) {\n+ return false;\n+ }\n+ if (this.name == null) {\n+ if (other.name != null) {\n+ return false;\n+ }\n+ } else if (!this.name.equals(other.name)) {\n+ return false;\n+ }\n+ if (!Arrays.equals(this.extendedStackTrace, other.extendedStackTrace)) {\n+ return false;\n+ }\n+ if (!Arrays.equals(this.suppressedProxies, other.suppressedProxies)) {\n+ return false;\n+ }\n+ return true;\n+ }\n+\n+ private void formatCause(final StringBuilder sb, final String prefix, final ThrowableProxy cause, final List<String> ignorePackages) {\n+ formatThrowableProxy(sb, prefix, CAUSED_BY_LABEL, cause, ignorePackages);\n+ }\n+\n+ private void formatThrowableProxy(final StringBuilder sb, final String prefix, final String causeLabel,\n+ final ThrowableProxy throwableProxy, final List<String> ignorePackages) {\n+ if (throwableProxy == null) {\n+ return;\n+ }\n+ sb.append(prefix).append(causeLabel).append(throwableProxy).append(EOL);\n+ this.formatElements(sb, prefix, throwableProxy.commonElementCount,\n+ throwableProxy.getStackTrace(), throwableProxy.extendedStackTrace, ignorePackages);\n+ this.formatSuppressed(sb, prefix + \"\\t\", throwableProxy.suppressedProxies, ignorePackages);\n+ this.formatCause(sb, prefix, throwableProxy.causeProxy, ignorePackages);\n+ }\n+\n+ private void formatSuppressed(final StringBuilder sb, final String prefix, final ThrowableProxy[] suppressedProxies,\n+ final List<String> ignorePackages) {\n+ if (suppressedProxies == null) {\n+ return;\n+ }\n+ for (final ThrowableProxy suppressedProxy : suppressedProxies) {\n+ final ThrowableProxy cause = suppressedProxy;\n+ formatThrowableProxy(sb, prefix, SUPPRESSED_LABEL, cause, ignorePackages);\n+ }\n+ }\n+\n+ private void formatElements(final StringBuilder sb, final String prefix, final int commonCount,\n+ final StackTraceElement[] causedTrace, final ExtendedStackTraceElement[] extStackTrace,\n+ final List<String> ignorePackages) {\n+ if (ignorePackages == null || ignorePackages.isEmpty()) {\n+ for (final ExtendedStackTraceElement element : extStackTrace) {\n+ this.formatEntry(element, sb, prefix);\n+ }\n+ } else {\n+ int count = 0;\n+ for (int i = 0; i < extStackTrace.length; ++i) {\n+ if (!this.ignoreElement(causedTrace[i], ignorePackages)) {\n+ if (count > 0) {\n+ appendSuppressedCount(sb, prefix, count);\n+ count = 0;\n+ }\n+ this.formatEntry(extStackTrace[i], sb, prefix);\n+ } else {\n+ ++count;\n+ }\n+ }\n+ if (count > 0) {\n+ appendSuppressedCount(sb, prefix, count);\n+ }\n+ }\n+ if (commonCount != 0) {\n+ sb.append(prefix).append(\"\\t... \").append(commonCount).append(\" more\").append(EOL);\n+ }\n+ }\n+\n+ private void appendSuppressedCount(final StringBuilder sb, final String prefix, final int count) {\n+ sb.append(prefix);\n+ if (count == 1) {\n+ sb.append(\"\\t....\").append(EOL);\n+ } else {\n+ sb.append(\"\\t... 
suppressed \").append(count).append(\" lines\").append(EOL);\n+ }\n+ }\n+\n+ private void formatEntry(final ExtendedStackTraceElement extStackTraceElement, final StringBuilder sb, final String prefix) {\n+ sb.append(prefix);\n+ sb.append(\"\\tat \");\n+ sb.append(extStackTraceElement);\n+ sb.append(EOL);\n+ }\n+\n+ /**\n+ * Formats the specified Throwable.\n+ *\n+ * @param sb\n+ * StringBuilder to contain the formatted Throwable.\n+ * @param cause\n+ * The Throwable to format.\n+ */\n+ public void formatWrapper(final StringBuilder sb, final ThrowableProxy cause) {\n+ this.formatWrapper(sb, cause, null);\n+ }\n+\n+ /**\n+ * Formats the specified Throwable.\n+ *\n+ * @param sb\n+ * StringBuilder to contain the formatted Throwable.\n+ * @param cause\n+ * The Throwable to format.\n+ * @param packages\n+ * The List of packages to be suppressed from the trace.\n+ */\n+ @SuppressWarnings(\"ThrowableResultOfMethodCallIgnored\")\n+ public void formatWrapper(final StringBuilder sb, final ThrowableProxy cause, final List<String> packages) {\n+ final Throwable caused = cause.getCauseProxy() != null ? cause.getCauseProxy().getThrowable() : null;\n+ if (caused != null) {\n+ this.formatWrapper(sb, cause.causeProxy);\n+ sb.append(WRAPPED_BY_LABEL);\n+ }\n+ sb.append(cause).append(EOL);\n+ this.formatElements(sb, \"\", cause.commonElementCount,\n+ cause.getThrowable().getStackTrace(), cause.extendedStackTrace, packages);\n+ }\n+\n+ public ThrowableProxy getCauseProxy() {\n+ return this.causeProxy;\n+ }\n+\n+ /**\n+ * Format the Throwable that is the cause of this Throwable.\n+ *\n+ * @return The formatted Throwable that caused this Throwable.\n+ */\n+ public String getCauseStackTraceAsString() {\n+ return this.getCauseStackTraceAsString(null);\n+ }\n+\n+ /**\n+ * Format the Throwable that is the cause of this Throwable.\n+ *\n+ * @param packages\n+ * The List of packages to be suppressed from the trace.\n+ * @return The formatted Throwable that caused this Throwable.\n+ */\n+ public String getCauseStackTraceAsString(final List<String> packages) {\n+ final StringBuilder sb = new StringBuilder();\n+ if (this.causeProxy != null) {\n+ this.formatWrapper(sb, this.causeProxy);\n+ sb.append(WRAPPED_BY_LABEL);\n+ }\n+ sb.append(this.toString());\n+ sb.append(EOL);\n+ this.formatElements(sb, \"\", 0, this.throwable.getStackTrace(), this.extendedStackTrace, packages);\n+ return sb.toString();\n+ }\n+\n+ /**\n+ * Return the number of elements that are being omitted because they are common with the parent Throwable's stack\n+ * trace.\n+ *\n+ * @return The number of elements omitted from the stack trace.\n+ */\n+ public int getCommonElementCount() {\n+ return this.commonElementCount;\n+ }\n+\n+ /**\n+ * Gets the stack trace including packaging information.\n+ *\n+ * @return The stack trace including packaging information.\n+ */\n+ public ExtendedStackTraceElement[] getExtendedStackTrace() {\n+ return this.extendedStackTrace;\n+ }\n+\n+ /**\n+ * Format the stack trace including packaging information.\n+ *\n+ * @return The formatted stack trace including packaging information.\n+ */\n+ public String getExtendedStackTraceAsString() {\n+ return this.getExtendedStackTraceAsString(null);\n+ }\n+\n+ /**\n+ * Format the stack trace including packaging information.\n+ *\n+ * @param ignorePackages\n+ * List of packages to be ignored in the trace.\n+ * @return The formatted stack trace including packaging information.\n+ */\n+ public String getExtendedStackTraceAsString(final List<String> ignorePackages) {\n+ final StringBuilder 
sb = new StringBuilder(this.name);\n+ final String msg = this.message;\n+ if (msg != null) {\n+ sb.append(\": \").append(msg);\n+ }\n+ sb.append(EOL);\n+ final StackTraceElement[] causedTrace = this.throwable != null ? this.throwable.getStackTrace() : null;\n+ this.formatElements(sb, \"\", 0, causedTrace, this.extendedStackTrace, ignorePackages);\n+ this.formatSuppressed(sb, \"\\t\", this.suppressedProxies, ignorePackages);\n+ this.formatCause(sb, \"\", this.causeProxy, ignorePackages);\n+ return sb.toString();\n+ }\n+\n+ public String getLocalizedMessage() {\n+ return this.localizedMessage;\n+ }\n+\n+ public String getMessage() {\n+ return this.message;\n+ }\n+\n+ /**\n+ * Return the FQCN of the Throwable.\n+ *\n+ * @return The FQCN of the Throwable.\n+ */\n+ public String getName() {\n+ return this.name;\n+ }\n+\n+ public StackTraceElement[] getStackTrace() {\n+ return this.throwable == null ? null : this.throwable.getStackTrace();\n+ }\n+\n+ /**\n+ * Gets proxies for suppressed exceptions.\n+ *\n+ * @return proxies for suppressed exceptions.\n+ */\n+ public ThrowableProxy[] getSuppressedProxies() {\n+ return this.suppressedProxies;\n+ }\n+\n+ /**\n+ * Format the suppressed Throwables.\n+ *\n+ * @return The formatted suppressed Throwables.\n+ */\n+ public String getSuppressedStackTrace() {\n+ final ThrowableProxy[] suppressed = this.getSuppressedProxies();\n+ if (suppressed == null || suppressed.length == 0) {\n+ return Strings.EMPTY;\n+ }\n+ final StringBuilder sb = new StringBuilder(\"Suppressed Stack Trace Elements:\").append(EOL);\n+ for (final ThrowableProxy proxy : suppressed) {\n+ sb.append(proxy.getExtendedStackTraceAsString());\n+ }\n+ return sb.toString();\n+ }\n+\n+ /**\n+ * The throwable or null if this object is deserialized from XML or JSON.\n+ *\n+ * @return The throwable or null if this object is deserialized from XML or JSON.\n+ */\n+ public Throwable getThrowable() {\n+ return this.throwable;\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ final int prime = 31;\n+ int result = 1;\n+ result = prime * result + (this.causeProxy == null ? 0 : this.causeProxy.hashCode());\n+ result = prime * result + this.commonElementCount;\n+ result = prime * result + (this.extendedStackTrace == null ? 0 : Arrays.hashCode(this.extendedStackTrace));\n+ result = prime * result + (this.suppressedProxies == null ? 0 : Arrays.hashCode(this.suppressedProxies));\n+ result = prime * result + (this.name == null ? 
0 : this.name.hashCode());\n+ return result;\n+ }\n+\n+ private boolean ignoreElement(final StackTraceElement element, final List<String> ignorePackages) {\n+ final String className = element.getClassName();\n+ for (final String pkg : ignorePackages) {\n+ if (className.startsWith(pkg)) {\n+ return true;\n+ }\n+ }\n+ return false;\n+ }\n+\n+ /**\n+ * Loads classes not located via Reflection.getCallerClass.\n+ *\n+ * @param lastLoader\n+ * The ClassLoader that loaded the Class that called this Class.\n+ * @param className\n+ * The name of the Class.\n+ * @return The Class object for the Class or null if it could not be located.\n+ */\n+ private Class<?> loadClass(final ClassLoader lastLoader, final String className) {\n+ // XXX: this is overly complicated\n+ Class<?> clazz;\n+ if (lastLoader != null) {\n+ try {\n+ clazz = Loader.initializeClass(className, lastLoader);\n+ if (clazz != null) {\n+ return clazz;\n+ }\n+ } catch (final Throwable ignore) {\n+ // Ignore exception.\n+ }\n+ }\n+ try {\n+ clazz = Loader.loadClass(className);\n+ } catch (final ClassNotFoundException ignored) {\n+ return initializeClass(className);\n+ } catch (final NoClassDefFoundError ignored) {\n+ return initializeClass(className);\n+ } catch (final SecurityException ignored) {\n+ return initializeClass(className);\n+ }\n+ return clazz;\n+ }\n+\n+ private Class<?> initializeClass(final String className) {\n+ try {\n+ return Loader.initializeClass(className, this.getClass().getClassLoader());\n+ } catch (final ClassNotFoundException ignore) {\n+ return null;\n+ } catch (final NoClassDefFoundError ignore) {\n+ return null;\n+ } catch (final SecurityException ignore) {\n+ return null;\n+ }\n+ }\n+\n+ /**\n+ * Construct the CacheEntry from the Class's information.\n+ *\n+ * @param stackTraceElement\n+ * The stack trace element\n+ * @param callerClass\n+ * The Class.\n+ * @param exact\n+ * True if the class was obtained via Reflection.getCallerClass.\n+ *\n+ * @return The CacheEntry.\n+ */\n+ private CacheEntry toCacheEntry(final StackTraceElement stackTraceElement, final Class<?> callerClass,\n+ final boolean exact) {\n+ String location = \"?\";\n+ String version = \"?\";\n+ ClassLoader lastLoader = null;\n+ if (callerClass != null) {\n+ try {\n+ final CodeSource source = callerClass.getProtectionDomain().getCodeSource();\n+ if (source != null) {\n+ final URL locationURL = source.getLocation();\n+ if (locationURL != null) {\n+ final String str = locationURL.toString().replace('\\\\', '/');\n+ int index = str.lastIndexOf(\"/\");\n+ if (index >= 0 && index == str.length() - 1) {\n+ index = str.lastIndexOf(\"/\", index - 1);\n+ location = str.substring(index + 1);\n+ } else {\n+ location = str.substring(index + 1);\n+ }\n+ }\n+ }\n+ } catch (final Exception ex) {\n+ // Ignore the exception.\n+ }\n+ final Package pkg = callerClass.getPackage();\n+ if (pkg != null) {\n+ final String ver = pkg.getImplementationVersion();\n+ if (ver != null) {\n+ version = ver;\n+ }\n+ }\n+ lastLoader = callerClass.getClassLoader();\n+ }\n+ return new CacheEntry(new ExtendedClassInfo(exact, location, version), lastLoader);\n+ }\n+\n+ /**\n+ * Resolve all the stack entries in this stack trace that are not common with the parent.\n+ *\n+ * @param stack\n+ * The callers Class stack.\n+ * @param map\n+ * The cache of CacheEntry objects.\n+ * @param rootTrace\n+ * The first stack trace resolve or null.\n+ * @param stackTrace\n+ * The stack trace being resolved.\n+ * @return The StackTracePackageElement array.\n+ */\n+ ExtendedStackTraceElement[] 
toExtendedStackTrace(final Stack<Class<?>> stack, final Map<String, CacheEntry> map,\n+ final StackTraceElement[] rootTrace, final StackTraceElement[] stackTrace) {\n+ int stackLength;\n+ if (rootTrace != null) {\n+ int rootIndex = rootTrace.length - 1;\n+ int stackIndex = stackTrace.length - 1;\n+ while (rootIndex >= 0 && stackIndex >= 0 && rootTrace[rootIndex].equals(stackTrace[stackIndex])) {\n+ --rootIndex;\n+ --stackIndex;\n+ }\n+ this.commonElementCount = stackTrace.length - 1 - stackIndex;\n+ stackLength = stackIndex + 1;\n+ } else {\n+ this.commonElementCount = 0;\n+ stackLength = stackTrace.length;\n+ }\n+ final ExtendedStackTraceElement[] extStackTrace = new ExtendedStackTraceElement[stackLength];\n+ Class<?> clazz = stack.isEmpty() ? null : stack.peek();\n+ ClassLoader lastLoader = null;\n+ for (int i = stackLength - 1; i >= 0; --i) {\n+ final StackTraceElement stackTraceElement = stackTrace[i];\n+ final String className = stackTraceElement.getClassName();\n+ // The stack returned from getCurrentStack may be missing entries for java.lang.reflect.Method.invoke()\n+ // and its implementation. The Throwable might also contain stack entries that are no longer\n+ // present as those methods have returned.\n+ ExtendedClassInfo extClassInfo;\n+ if (clazz != null && className.equals(clazz.getName())) {\n+ final CacheEntry entry = this.toCacheEntry(stackTraceElement, clazz, true);\n+ extClassInfo = entry.element;\n+ lastLoader = entry.loader;\n+ stack.pop();\n+ clazz = stack.isEmpty() ? null : stack.peek();\n+ } else {\n+ final CacheEntry cacheEntry = map.get(className);\n+ if (cacheEntry != null) {\n+ final CacheEntry entry = cacheEntry;\n+ extClassInfo = entry.element;\n+ if (entry.loader != null) {\n+ lastLoader = entry.loader;\n+ }\n+ } else {\n+ final CacheEntry entry = this.toCacheEntry(stackTraceElement,\n+ this.loadClass(lastLoader, className), false);\n+ extClassInfo = entry.element;\n+ map.put(stackTraceElement.toString(), entry);\n+ if (entry.loader != null) {\n+ lastLoader = entry.loader;\n+ }\n+ }\n+ }\n+ extStackTrace[i] = new ExtendedStackTraceElement(stackTraceElement, extClassInfo);\n+ }\n+ return extStackTrace;\n+ }\n+\n+ @Override\n+ public String toString() {\n+ final String msg = this.message;\n+ return msg != null ? 
this.name + \": \" + msg : this.name;\n+ }\n+\n+ private ThrowableProxy[] toSuppressedProxies(final Throwable thrown, Set<Throwable> suppressedVisited) {\n+ try {\n+ final Throwable[] suppressed = thrown.getSuppressed();\n+ if (suppressed == null) {\n+ return EMPTY_THROWABLE_PROXY_ARRAY;\n+ }\n+ final List<ThrowableProxy> proxies = new ArrayList<>(suppressed.length);\n+ if (suppressedVisited == null) {\n+ suppressedVisited = new HashSet<>(proxies.size());\n+ }\n+ for (int i = 0; i < suppressed.length; i++) {\n+ final Throwable candidate = suppressed[i];\n+ if (!suppressedVisited.contains(candidate)) {\n+ suppressedVisited.add(candidate);\n+ proxies.add(new ThrowableProxy(candidate, suppressedVisited));\n+ }\n+ }\n+ return proxies.toArray(new ThrowableProxy[proxies.size()]);\n+ } catch (final Exception e) {\n+ StatusLogger.getLogger().error(e);\n+ }\n+ return null;\n+ }\n+}", "filename": "core/src/main/java/org/apache/logging/log4j/core/impl/ThrowableProxy.java", "status": "added" }, { "diff": "@@ -86,7 +86,7 @@ public static void checkJarHell() throws Exception {\n }\n checkJarHell(parseClassPath());\n }\n- \n+\n /**\n * Parses the classpath into an array of URLs\n * @return array of URLs\n@@ -277,6 +277,14 @@ static void checkClass(Map<String,Path> clazzes, String clazz, Path jarpath) {\n if (clazz.equals(\"org.joda.time.base.BaseDateTime\")) {\n return; // apparently this is intentional... clean this up\n }\n+ if (clazz.startsWith(\"org.apache.logging.log4j.core.impl.ThrowableProxy\")) {\n+ /*\n+ * deliberate to hack around a bug in Log4j\n+ * cf. https://github.com/elastic/elasticsearch/issues/20304\n+ * cf. https://issues.apache.org/jira/browse/LOG4J2-1560\n+ */\n+ return;\n+ }\n throw new IllegalStateException(\"jar hell!\" + System.lineSeparator() +\n \"class: \" + clazz + System.lineSeparator() +\n \"jar1: \" + previous + System.lineSeparator() +", "filename": "core/src/main/java/org/elasticsearch/bootstrap/JarHell.java", "status": "modified" } ] }
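The `JarHell.checkClass` change in this record whitelists the patched Log4j `ThrowableProxy` classes so that the deliberate duplicate on the classpath no longer trips the jar-hell check. A minimal, self-contained sketch of that idea follows; it is not the real `JarHell` implementation, and the package/class names other than the whitelisted Log4j prefix are illustrative only.

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class JarHellSketch {
    private final Map<String, Path> classes = new HashMap<>();

    void checkClass(String clazz, Path jarPath) {
        // Remember which jar provided the class; a different jar for the same
        // class name is "jar hell" unless the class is explicitly tolerated.
        Path previous = classes.put(clazz, jarPath);
        if (previous == null || previous.equals(jarPath)) {
            return; // first sighting, or the same jar seen again
        }
        if (clazz.startsWith("org.apache.logging.log4j.core.impl.ThrowableProxy")) {
            return; // deliberately tolerated duplicate, cf. LOG4J2-1560
        }
        throw new IllegalStateException("jar hell! class: " + clazz
                + " jar1: " + previous + " jar2: " + jarPath);
    }

    public static void main(String[] args) {
        JarHellSketch check = new JarHellSketch();
        check.checkClass("org.example.Foo", Paths.get("a.jar"));
        check.checkClass("org.apache.logging.log4j.core.impl.ThrowableProxy", Paths.get("a.jar"));
        check.checkClass("org.apache.logging.log4j.core.impl.ThrowableProxy", Paths.get("b.jar")); // tolerated
        try {
            check.checkClass("org.example.Foo", Paths.get("b.jar")); // duplicate class -> error
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```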
{ "body": "In mapping, the type of field `name` is `string` . I put a document:\n\n```\n{\n \"name\":123456789123456789123456789123456789123456789123456789\n}\n```\n\nIt's OK. \n\nBut when I search and highlight:\n\n```\n{\n \"query\":{\n \"match_all\":{}\n },\n \"highlight\":{\n \"fields:{\n \"name\":{}\n }\n }\n}\n```\n\nI get a excepton: `ElasticsearchIllegalStateException[No matching token for number_type [BIG_INTEGER]]` .\n\nThe call chain is `AbstractXContentParser#map()` -> `AbstractXContentParser#readMap()` -> `AbstractXContentParser#readValue()` -> `JsonXContentParser#numberType()` .\n", "comments": [ { "body": "I am experiencing this issue, as well.\n", "created_at": "2016-01-20T20:53:44Z" }, { "body": "+1 \ncurl -XPOST 'localhost:9200/test_number/doc/1' -d '{\n \"name\" : \"test document\",\n \"test_int\" : 129813493927432849126469276498264619863246932642169846316469\n}'\n\n{\"error\":{\"root_cause\":[{\"type\":\"mapper_parsing_exception\",\"reason\":\"failed to parse\"}],\"type\":\"mapper_parsing_exception\",\"reason\":\"failed to parse\",\"caused_by\":{\"type\":\"illegal_state_exception\",\"reason\":\"No matching token for number_type [BIG_INTEGER]\"}},\"status\":400}\n", "created_at": "2016-01-26T13:59:24Z" } ], "number": 11508, "title": "ElasticsearchIllegalStateException when field's value is big integer." }
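For context on the error in this issue, the `number_type [BIG_INTEGER]` comes from Jackson: for an integer literal wider than a `long`, the parser classifies the number as `BIG_INTEGER`, which the old `readValue()` switch did not handle. The following is a small standalone sketch (assuming only Jackson core on the classpath; not Elasticsearch code) showing the token and number type Jackson reports for such a value.

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;

public class NumberTypeSketch {
    public static void main(String[] args) throws Exception {
        String json = "{\"name\":123456789123456789123456789123456789}";
        try (JsonParser parser = new JsonFactory().createParser(json)) {
            parser.nextToken();                   // START_OBJECT
            parser.nextToken();                   // FIELD_NAME "name"
            JsonToken token = parser.nextToken(); // VALUE_NUMBER_INT
            // For values wider than a long, Jackson reports BIG_INTEGER
            // rather than INT or LONG.
            System.out.println(token);                  // VALUE_NUMBER_INT
            System.out.println(parser.getNumberType()); // BIG_INTEGER
        }
    }
}
```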
{ "body": "Currently it does not because our parsers do not support big integers/decimals\n(on purpose). However, we do not have to ask our parser for the number type; we can\njust ask the Jackson parser for a number representation of the value with the\nright type.\n\nNote that I did not add similar tests for big decimals because Jackson seems to\nnever return big decimals, even for decimal values that are out of the range of\nvalues that can be represented by doubles.\n\nCloses #11508\n", "number": 20278, "review_comments": [], "title": "Source filtering should keep working when the source contains numbers greater than `Long.MAX_VALUE`." }
{ "commits": [ { "message": "Source filtering should keep working when the source contains numbers greater than `Long.MAX_VALUE`. #20278\n\nCurrently it does not because our parsers do not support big integers/decimals\n(on purpose) but we do not have to ask our parser for the number type, we can\njust ask the jackson parser for a number representation of the value with the\nright type.\n\nNote that I did not add similar tests for big decimals because Jackson seems to\nnever return big decimals, even for decimal values that are out of the range of\nvalues that can be represented by doubles.\n\nCloses #11508" } ], "files": [ { "diff": "@@ -300,16 +300,7 @@ static Object readValue(XContentParser parser, MapFactory mapFactory, XContentPa\n } else if (token == XContentParser.Token.VALUE_STRING) {\n return parser.text();\n } else if (token == XContentParser.Token.VALUE_NUMBER) {\n- XContentParser.NumberType numberType = parser.numberType();\n- if (numberType == XContentParser.NumberType.INT) {\n- return parser.intValue();\n- } else if (numberType == XContentParser.NumberType.LONG) {\n- return parser.longValue();\n- } else if (numberType == XContentParser.NumberType.FLOAT) {\n- return parser.floatValue();\n- } else if (numberType == XContentParser.NumberType.DOUBLE) {\n- return parser.doubleValue();\n- }\n+ return parser.numberValue();\n } else if (token == XContentParser.Token.VALUE_BOOLEAN) {\n return parser.booleanValue();\n } else if (token == XContentParser.Token.START_OBJECT) {", "filename": "core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java", "status": "modified" }, { "diff": "@@ -19,13 +19,19 @@\n \n package org.elasticsearch.common.xcontent;\n \n+import com.fasterxml.jackson.core.JsonGenerator;\n+\n import org.elasticsearch.common.bytes.BytesArray;\n+import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.xcontent.XContentParser.Token;\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.ByteArrayInputStream;\n import java.io.ByteArrayOutputStream;\n import java.io.IOException;\n+import java.math.BigDecimal;\n+import java.math.BigInteger;\n+import java.util.Map;\n \n public abstract class BaseXContentTestCase extends ESTestCase {\n \n@@ -156,4 +162,24 @@ void doTestRawValue(XContent source) throws Exception {\n assertNull(parser.nextToken());\n \n }\n+\n+ protected void doTestBigInteger(JsonGenerator generator, ByteArrayOutputStream os) throws Exception {\n+ // Big integers cannot be handled explicitly, but if some values happen to be big ints,\n+ // we can still call parser.map() and get the bigint value so that eg. 
source filtering\n+ // keeps working\n+ BigInteger bigInteger = BigInteger.valueOf(Long.MAX_VALUE).add(BigInteger.ONE);\n+ generator.writeStartObject();\n+ generator.writeFieldName(\"foo\");\n+ generator.writeString(\"bar\");\n+ generator.writeFieldName(\"bigint\");\n+ generator.writeNumber(bigInteger);\n+ generator.writeEndObject();\n+ generator.flush();\n+ byte[] serialized = os.toByteArray();\n+\n+ XContentParser parser = xcontentType().xContent().createParser(serialized);\n+ Map<String, Object> map = parser.map();\n+ assertEquals(\"bar\", map.get(\"foo\"));\n+ assertEquals(bigInteger, map.get(\"bigint\"));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/common/xcontent/BaseXContentTestCase.java", "status": "modified" }, { "diff": "@@ -19,14 +19,24 @@\n \n package org.elasticsearch.common.xcontent.cbor;\n \n+import com.fasterxml.jackson.core.JsonGenerator;\n+import com.fasterxml.jackson.dataformat.cbor.CBORFactory;\n+\n import org.elasticsearch.common.xcontent.BaseXContentTestCase;\n import org.elasticsearch.common.xcontent.XContentType;\n \n+import java.io.ByteArrayOutputStream;\n+\n public class CborXContentTests extends BaseXContentTestCase {\n \n @Override\n public XContentType xcontentType() {\n return XContentType.CBOR;\n }\n \n+ public void testBigInteger() throws Exception {\n+ ByteArrayOutputStream os = new ByteArrayOutputStream();\n+ JsonGenerator generator = new CBORFactory().createGenerator(os);\n+ doTestBigInteger(generator, os);\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/common/xcontent/cbor/CborXContentTests.java", "status": "modified" }, { "diff": "@@ -19,14 +19,24 @@\n \n package org.elasticsearch.common.xcontent.json;\n \n+import com.fasterxml.jackson.core.JsonFactory;\n+import com.fasterxml.jackson.core.JsonGenerator;\n+\n import org.elasticsearch.common.xcontent.BaseXContentTestCase;\n import org.elasticsearch.common.xcontent.XContentType;\n \n+import java.io.ByteArrayOutputStream;\n+\n public class JsonXContentTests extends BaseXContentTestCase {\n \n @Override\n public XContentType xcontentType() {\n return XContentType.JSON;\n }\n \n+ public void testBigInteger() throws Exception {\n+ ByteArrayOutputStream os = new ByteArrayOutputStream();\n+ JsonGenerator generator = new JsonFactory().createGenerator(os);\n+ doTestBigInteger(generator, os);\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/common/xcontent/json/JsonXContentTests.java", "status": "modified" }, { "diff": "@@ -19,14 +19,24 @@\n \n package org.elasticsearch.common.xcontent.smile;\n \n+import com.fasterxml.jackson.core.JsonGenerator;\n+import com.fasterxml.jackson.dataformat.smile.SmileFactory;\n+\n import org.elasticsearch.common.xcontent.BaseXContentTestCase;\n import org.elasticsearch.common.xcontent.XContentType;\n \n+import java.io.ByteArrayOutputStream;\n+\n public class SmileXContentTests extends BaseXContentTestCase {\n \n @Override\n public XContentType xcontentType() {\n return XContentType.SMILE;\n }\n \n+ public void testBigInteger() throws Exception {\n+ ByteArrayOutputStream os = new ByteArrayOutputStream();\n+ JsonGenerator generator = new SmileFactory().createGenerator(os);\n+ doTestBigInteger(generator, os);\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/common/xcontent/smile/SmileXContentTests.java", "status": "modified" }, { "diff": "@@ -19,14 +19,24 @@\n \n package org.elasticsearch.common.xcontent.yaml;\n \n+import com.fasterxml.jackson.core.JsonGenerator;\n+import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;\n+\n import 
org.elasticsearch.common.xcontent.BaseXContentTestCase;\n import org.elasticsearch.common.xcontent.XContentType;\n \n+import java.io.ByteArrayOutputStream;\n+\n public class YamlXContentTests extends BaseXContentTestCase {\n \n @Override\n public XContentType xcontentType() {\n return XContentType.YAML;\n }\n \n+ public void testBigInteger() throws Exception {\n+ ByteArrayOutputStream os = new ByteArrayOutputStream();\n+ JsonGenerator generator = new YAMLFactory().createGenerator(os);\n+ doTestBigInteger(generator, os);\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/common/xcontent/yaml/YamlXContentTests.java", "status": "modified" }, { "diff": "@@ -1,11 +1,22 @@\n ---\n setup:\n+ - do:\n+ indices.create:\n+ index: test\n+ body:\n+ mappings:\n+ test:\n+ properties:\n+ bigint:\n+ type: keyword\n+\n+\n - do:\n index:\n index: test_1\n type: test\n id: 1\n- body: { \"include\": { \"field1\": \"v1\", \"field2\": \"v2\" }, \"count\": 1 }\n+ body: { \"include\": { \"field1\": \"v1\", \"field2\": \"v2\" }, \"count\": 1, \"bigint\": 72057594037927936 }\n - do:\n indices.refresh: {}\n \n@@ -90,6 +101,17 @@ setup:\n - match: { hits.hits.0._source.include.field1: v1 }\n - is_false: hits.hits.0._source.include.field2\n \n+---\n+\"_source include on bigint\":\n+ - do:\n+ search:\n+ body:\n+ _source:\n+ includes: bigint\n+ query: { match_all: {} }\n+ - match: { hits.hits.0._source.bigint: 72057594037927936 }\n+ - is_false: hits.hits.0._source.include.field2\n+\n ---\n \"fields in body\":\n - do:", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search/10_source_filtering.yaml", "status": "modified" } ] }
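The fixed `readValue()` path in the diff above returns `parser.numberValue()`, which for the JSON implementation ultimately delegates to Jackson's own number representation. Below is a hedged sketch in plain Jackson (outside the Elasticsearch abstractions) of what that representation is for a value beyond `Long.MAX_VALUE`: a `BigInteger`, so a `map()`-style consumer such as source filtering keeps the value without overflow.

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;

import java.math.BigInteger;

public class BigIntegerValueSketch {
    public static void main(String[] args) throws Exception {
        String json = "{\"bigint\": 72057594037927936000}";
        try (JsonParser parser = new JsonFactory().createParser(json)) {
            parser.nextToken(); // START_OBJECT
            parser.nextToken(); // FIELD_NAME "bigint"
            parser.nextToken(); // VALUE_NUMBER_INT
            // getNumberValue() hands back the narrowest Number that can hold
            // the literal; past Long.MAX_VALUE that is a BigInteger.
            Number value = parser.getNumberValue();
            System.out.println(value.getClass().getSimpleName()); // BigInteger
            System.out.println(value.equals(new BigInteger("72057594037927936000"))); // true
        }
    }
}
```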
{ "body": "Elasticsearch version: 2.3.5\nGetting unexpected errors from some queries with filters. These queries worked with Elasticsearch version 1.7.5, but fail on version 2.3.5. Below are steps to reproduce.\n1. Create an index\n\n```\nPUT testindex\n{\n \"mappings\" : {\n \"type1\" : {\n \"properties\" : {\n \"field1\" : { \"type\" : \"string\", \"index\" : \"not_analyzed\" }\n }\n }\n }\n}\n```\n1. Issue the following query:\n\n```\nPOST\n{\n \"query\" : {\n \"match_all\" : {}\n },\n \"post_filter\" : {\n \"not\" : {\n \"and\" : [\n {\n \"query\" : {\n \"term\" : {\n \"field1\" : \"1234\"\n }\n }\n }\n ]\n }\n }\n}\n```\n\nAnd the result is:\n\n```\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"query_parsing_exception\",\n \"reason\" : \"[and] query does not support [field1]\",\n \"index\" : \"testindex\",\n \"line\" : 11,\n \"col\" : 29\n } ],\n \"type\" : \"search_phase_execution_exception\",\n \"reason\" : \"all shards failed\",\n \"phase\" : \"query_fetch\",\n \"grouped\" : true,\n \"failed_shards\" : [ {\n \"shard\" : 0,\n \"index\" : \"testindex\",\n \"node\" : \"0JJWSN2WRwaX1dvaW3yPfA\",\n \"reason\" : {\n \"type\" : \"query_parsing_exception\",\n \"reason\" : \"[and] query does not support [field1]\",\n \"index\" : \"testindex\",\n \"line\" : 11,\n \"col\" : 29\n }\n } ]\n },\n \"status\" : 400\n}\n```\n\nExpected result, as it also works in Elasticsearch version 1.7.5:\n\n```\n{\n \"took\" : 1,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 0,\n \"max_score\" : null,\n \"hits\" : [ ]\n }\n}\n```\n", "comments": [ { "body": "Hi @apidruchny \n\nI can reproduce this failure - something funny in the parsing of the `not` query. That said, `and`, `or`, and `not` are deprecated and have been removed in 5.0. You should rewrite this query to use `bool` instead:\n\n```\nPOST _search\n{\n \"query\": {\n \"match_all\": {}\n },\n \"post_filter\": {\n \"bool\": {\n \"must_not\": [\n {\n \"term\": {\n \"field1\": \"1234\"\n }\n }\n ]\n }\n }\n}\n```\n\nBy the way, use `post_filter` ONLY if you want to include documents in your aggregation that you then want to exclude from the returned `hits`. Using it as a generic filtering mechanism is very inefficient\n", "created_at": "2016-08-25T14:18:06Z" }, { "body": "Yes, I know that boolean filters are deprecated, and post_filter is inefficient. This is just a distilled example. In reality our query is a much more complicated filtered query (no post_filter) and we are trying to migrate from version 1.7.5 to 2.3.5. Could this issue please be re-opened? Even being deprecated, the \"not\" filter should still work, to allow legacy applications to migrate to the current version of Elasticsearch. And this is exactly what we are trying to do.\n", "created_at": "2016-08-25T14:34:40Z" }, { "body": "Sure I can reopen it, but we're unlikely to fix deprecated functionality unless somebody sends a PR. Why migrate to `not` when you're just going to have to repeat the migration for the next version anyway?\n", "created_at": "2016-08-25T14:36:57Z" }, { "body": "Thanks. We are not migrating to `not`, we are migrating to Elasticsearch 2.3.5 and the `not` filter is already used in a legacy application. 
Sure, it will be necessary to replace the `not` filter eventually, but we would prefer to not have to do it now when we are still on version 1.7.5.\n", "created_at": "2016-08-25T15:16:00Z" }, { "body": "There is a very easy workaround (adding \"filter\" element under \"not\"). I am going to close this issue. Thanks for looking at this issue. Following works:\n\n```\nPOST _search\n{\n \"query\" : {\n \"match_all\" : {}\n },\n \"post_filter\" : {\n \"not\" : {\n \"filter\" : {\n \"and\" : [\n {\n \"query\" : {\n \"term\" : {\n \"field1\" : \"1234\"\n }\n }\n }\n ]\n }\n }\n }\n}\n```\n", "created_at": "2016-08-30T18:31:53Z" } ], "number": 20161, "title": "Some boolean filters result in errors like \"[and] query does not support [field1]\"" }
{ "body": "This is required as queries are used as keys in the filter cache. Currently\r\nall AllTermQuery instances are considered equals.\r\n\r\nCloses #20161\r\nCloses #13662\r\n", "number": 20196, "review_comments": [ { "body": "Maybe rewrite this using `assertNotEquals`? It makes it symmetric with the previous line so that it's clearer, and it gives better failure messages from JUnit when the assertion fails.\n", "created_at": "2016-08-28T14:55:22Z" }, { "body": "Same suggestion here for `assertNotEquals`.\n", "created_at": "2016-08-28T14:55:32Z" } ], "title": "AllTermQuery must implement equals/hashCode." }
{ "commits": [ { "message": "AllTermQuery must implement equals/hashCode. #20196\n\nThis is required as queries are used as keys in the filter cache. Currently\nall AllTermQuery instances are considered equals." } ], "files": [ { "diff": "@@ -227,4 +227,17 @@ public String toString(String field) {\n return new TermQuery(term).toString(field) + ToStringUtils.boost(getBoost());\n }\n \n+ @Override\n+ public boolean equals(Object obj) {\n+ if (super.equals(obj) == false) {\n+ return false;\n+ }\n+ AllTermQuery that = (AllTermQuery) obj;\n+ return term.equals(that.term);\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return 31 * super.hashCode() + term.hashCode();\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java", "status": "modified" }, { "diff": "@@ -374,4 +374,13 @@ public void testNoTokensWithKeywordAnalyzer() throws Exception {\n assertThat(docs.totalHits, equalTo(1));\n assertThat(docs.scoreDocs[0].doc, equalTo(0));\n }\n+\n+ public void testEquals() {\n+ Term bar = new Term(\"foo\", \"bar\");\n+ Term baz = new Term(\"foo\", \"baz\");\n+ assertEquals(new AllTermQuery(bar), new AllTermQuery(bar));\n+ assertNotEquals(new AllTermQuery(bar), new AllTermQuery(baz));\n+ assertEquals(new AllTermQuery(bar).hashCode(), new AllTermQuery(bar).hashCode());\n+ assertNotEquals(new AllTermQuery(bar).hashCode(), new AllTermQuery(baz).hashCode());\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/common/lucene/all/SimpleAllTests.java", "status": "modified" } ] }
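As the PR body notes, the query object itself is the filter-cache key, so value-based `equals`/`hashCode` is what lets two logically identical queries share a cache entry. The sketch below illustrates that effect with a hypothetical query class; it is plain Java with no Lucene dependency, the class and field names are invented, and the real `AllTermQuery` additionally delegates to its superclass checks.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical stand-in for a term query used as a cache key.
final class TermLikeQuery {
    private final String field;
    private final String term;

    TermLikeQuery(String field, String term) {
        this.field = field;
        this.term = term;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null || getClass() != obj.getClass()) return false;
        TermLikeQuery that = (TermLikeQuery) obj;
        return field.equals(that.field) && term.equals(that.term);
    }

    @Override
    public int hashCode() {
        return Objects.hash(field, term);
    }
}

public class QueryCacheSketch {
    public static void main(String[] args) {
        Map<TermLikeQuery, String> cache = new HashMap<>();
        cache.put(new TermLikeQuery("_all", "foo"), "cached result for foo");
        // With value-based equals/hashCode a fresh but identical query hits
        // the cached entry; without them every lookup would fall back to
        // object identity and either miss or collide incorrectly.
        System.out.println(cache.get(new TermLikeQuery("_all", "foo"))); // cached result for foo
        System.out.println(cache.get(new TermLikeQuery("_all", "bar"))); // null
    }
}
```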
{ "body": "I have an array with categories in which i have genres like for example Comedy. I search for documents that contain that and other genres using filtered query in which i have empty query and terms filter. \nNext i use match query on for example title field + terms filter for genres and choose Comedy. On third query i do exact search like first one so only look for Comedy in genres (only using terms filter and empty query) and i get results of last match search (so second query results) which are obviously wrong.\n", "comments": [ { "body": "@devlo thanks so much for opening this, can you reproduce this in a simple self contained example?\n", "created_at": "2015-09-18T20:06:13Z" }, { "body": "As i thought it's a cache problem, i've added _cache: false to terms query (i am using node.js driver) and it's returning good results. I will try to make an example.\nProbably has something to do with #5363\n", "created_at": "2015-09-18T21:25:16Z" }, { "body": "Hi @devlo \n\nThe `_cache` parameter is no longer supported in 2.0 (it is silently ignored) so I doubt it has anything to do with this. A recreation would be very helpful indeed.\n", "created_at": "2015-09-19T13:12:10Z" }, { "body": "This is what I've tried doing to recreate this, but it works as expected:\n\n```\nPOST t/t\n{\n \"title\": \"One foo bar\",\n \"cats\": [\"one\",\"two\",\"three\"]\n}\n\nPOST t/t\n{\n \"title\": \"Two foo bar\",\n \"cats\": [\"one\",\"two\"]\n}\n\nPOST t/t\n{\n \"title\": \"Three foo bar\",\n \"cats\": [\"two\"]\n}\n\nGET _search\n{\n \"query\": {\n \"filtered\": {\n \"query\": {\n \"query_string\": {\n \"query\": \"*\"\n }\n }, \n \"filter\": {\n \"terms\": {\n \"cats\": [\n \"one\",\n \"three\"\n ]\n }\n }\n }\n }\n}\n\nGET _search\n{\n \"query\": {\n \"filtered\": {\n \"query\": {\"match\": {\n \"title\": \"One\"\n }}, \n \"filter\": {\n \"terms\": {\n \"cats\": [\n \"one\",\n \"three\"\n ]\n }\n }\n }\n }\n}\n```\n", "created_at": "2015-09-19T13:17:49Z" }, { "body": "Hmmm i can't reproduce it now, i've restarted elasticsearch after that, used _cache: false and i thought it was the issue. But i've removed _cache: false and tried now and everything works as intented. Indeed very wierd behaviour. It's wierd because i had the same issue on local test environment and on remote test server. I've restarted elastic on both. I will observe this as i am working on 2.0 beta anyway.\n", "created_at": "2015-09-19T21:06:04Z" }, { "body": "Looks like a false alarm. Closing. Feel free to reopen if you can reproduce.\n", "created_at": "2015-10-06T14:19:39Z" }, { "body": "Hi,\n\nI believe I currently have this same issue (or at least related) and have a reproducible case (at least on my machine and our production cluster :D). I can't post the data publicly, but poke me on Twitter @queryable (or let me know where I can send it to) and I can provide it for you (it's small enough at 9 MB). \n\nI've opened a Stackoverflow question on it here:\n\nhttp://stackoverflow.com/questions/39155676/erratic-search-results-from-elastic-when-sorting-on-a-field\n", "created_at": "2016-08-26T13:31:29Z" }, { "body": "@JulianRooze could you email it to me at clinton at elastic dot co?\n", "created_at": "2016-08-26T13:47:19Z" }, { "body": "@clintongormley I've mailed it to you, thanks! \n", "created_at": "2016-08-26T13:58:58Z" }, { "body": "@clintongormley Hi Clinton, I was curious about this issue and if you guys have made any progress on it. Last I heard you managed to reproduce the faulty behavior. 
I'm asking both out of curiosity about what is behind this weird behavior and to find out whether I can strip the workaround from our code yet :)\n\nCheers! \n", "created_at": "2016-10-20T18:06:35Z" }, { "body": "Sorry @JulianRooze - this was fixed in #20196\n", "created_at": "2016-11-02T17:46:02Z" }, { "body": "@clintongormley Great, thanks for letting me know! \n", "created_at": "2016-11-02T19:58:22Z" } ], "number": 13662, "title": "Elastic 2.0 beta returning wrong results from cache?" }
{ "body": "This is required as queries are used as keys in the filter cache. Currently\r\nall AllTermQuery instances are considered equals.\r\n\r\nCloses #20161\r\nCloses #13662\r\n", "number": 20196, "review_comments": [ { "body": "Maybe rewrite this using `assertNotEquals`? It makes it symmetric with the previous line so that it's clearer, and it gives better failure messages from JUnit when the assertion fails.\n", "created_at": "2016-08-28T14:55:22Z" }, { "body": "Same suggestion here for `assertNotEquals`.\n", "created_at": "2016-08-28T14:55:32Z" } ], "title": "AllTermQuery must implement equals/hashCode." }
{ "commits": [ { "message": "AllTermQuery must implement equals/hashCode. #20196\n\nThis is required as queries are used as keys in the filter cache. Currently\nall AllTermQuery instances are considered equals." } ], "files": [ { "diff": "@@ -227,4 +227,17 @@ public String toString(String field) {\n return new TermQuery(term).toString(field) + ToStringUtils.boost(getBoost());\n }\n \n+ @Override\n+ public boolean equals(Object obj) {\n+ if (super.equals(obj) == false) {\n+ return false;\n+ }\n+ AllTermQuery that = (AllTermQuery) obj;\n+ return term.equals(that.term);\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return 31 * super.hashCode() + term.hashCode();\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java", "status": "modified" }, { "diff": "@@ -374,4 +374,13 @@ public void testNoTokensWithKeywordAnalyzer() throws Exception {\n assertThat(docs.totalHits, equalTo(1));\n assertThat(docs.scoreDocs[0].doc, equalTo(0));\n }\n+\n+ public void testEquals() {\n+ Term bar = new Term(\"foo\", \"bar\");\n+ Term baz = new Term(\"foo\", \"baz\");\n+ assertEquals(new AllTermQuery(bar), new AllTermQuery(bar));\n+ assertNotEquals(new AllTermQuery(bar), new AllTermQuery(baz));\n+ assertEquals(new AllTermQuery(bar).hashCode(), new AllTermQuery(bar).hashCode());\n+ assertNotEquals(new AllTermQuery(bar).hashCode(), new AllTermQuery(baz).hashCode());\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/common/lucene/all/SimpleAllTests.java", "status": "modified" } ] }
{ "body": "Create three indexes: foo, bar, and baz.\nSubmit a search request using the URL: http://example.com/ba*,-bar,-baz/_search. This performs a search over all indices.\n\nThe logic of the MetaData.convertFromWildcards() method is as follows:\nFirst token is ba\\* - expand that into bar and baz and add them to the result set.\nSecond token is a negation of bar so it removes that.\nThird token is a negation of baz so it removes that to leave an empty set.\n\nThe problem is that an empty index set is also generated by http://example.com/_search.\n\nThere is later logic in PlainOperationRouting.computeTargetedShards() that treats the empty set as a request to search every index, including the index foo and the two indices that have been explicitly removed.\n\nThe proposed solution is:\n1. RestSearchAction.parseSearchRequest() to detect when request.param(\"index\") is null and to explicitly populate the indices array with \"_all\". Both /_all/_search and /_search then follow the exact same logical path - this may avoid future bugs.\n2. PlainOperationRouting.computeTargetedShards() should treat an empty concreteIndices array as a null search.\n", "comments": [ { "body": "There's a related issue.\n\nI'm looking at implementing an authorization model by appending a list of indexes or aliases that the user shouldn't see to the user request. So, if the user requests an index that they shouldn't then the logic strips it out again. Alternate solutions are welcome.\n\nWithin that context, a request for http://example.com/foo,-foo/_search makes sense. Unfortunately, this doesn't work.\n\nWhen MetaData.convertFromWildcards() encounters a -foo, it will either:\na) remove it from the list of fields iff a previous term has caused an expansion\nb) add all other fields iff it is the first term\nc) add it as is, including the minus sign\n\nSo, foo,-foo returns an exception saying that it can't find an index called -foo.\n\nInstead, I would propose that a non-first minus term will populate the results set with all previous entries encountered. This logic is also used when processing a wildcard expression to resolve the expression \"foo,ba*\".\n", "created_at": "2013-10-09T14:20:15Z" }, { "body": "The first issue has been fixed, the issue from the second comment is still valid:\n\n```\nGET /foo,-foo/_search\n```\n", "created_at": "2014-08-08T17:49:41Z" }, { "body": "And, this searches all indices including bar and baz:\n\n```\nGET *,-bar,-baz/_search\n```\n\nWhile this search all indices except bar and baz:\n\n```\nGET -bar,-baz/_search\n```\n", "created_at": "2015-09-21T18:52:05Z" }, { "body": "I was able to reproduce this on latest master with the following:\n\n```\nPUT foo/foo-type/foo-doc\n{}\nPUT bar/bar-type/bar-doc\n{}\nPUT baz/baz-type/baz-doc\n{}\nGET foo,-foo/_search\n\nresponse:\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"index_not_found_exception\",\n \"reason\": \"no such index\",\n \"resource.type\": \"index_expression\",\n \"resource.id\": \"-foo\",\n \"index_uuid\": \"_na_\",\n \"index\": \"-foo\"\n }\n ],\n \"type\": \"index_not_found_exception\",\n \"reason\": \"no such index\",\n \"resource.type\": \"index_expression\",\n \"resource.id\": \"-foo\",\n \"index_uuid\": \"_na_\",\n \"index\": \"-foo\"\n },\n \"status\": 404\n}\n```\n\nTook a quick look at the code... 
l'll make a PR shortly with the fix.\nUnable to reproduce bug with `GET *,-bar,-baz/_search` and `GET -bar,-baz/_search`.\n\n@clintongormley While I was investigating this I noticed the following request resulted in a similar error:\n\n```\nGET foo,+foo/_search\n\nresponse:\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"index_not_found_exception\",\n \"reason\": \"no such index\",\n \"resource.type\": \"index_or_alias\",\n \"resource.id\": \" foo\",\n \"index_uuid\": \"_na_\",\n \"index\": \" foo\"\n }\n ],\n \"type\": \"index_not_found_exception\",\n \"reason\": \"no such index\",\n \"resource.type\": \"index_or_alias\",\n \"resource.id\": \" foo\",\n \"index_uuid\": \"_na_\",\n \"index\": \" foo\"\n },\n \"status\": 404\n}\n```\n\nHowever this error is slightly different than with `GET foo,-foo/_search`: it has `\"resource.type\": \"index_or_alias\"` instead of `\"resource.type\": \"index_expression\"`. Also notice `\"index\": \" foo\"` has a space before `foo`. I guessed this might be something to do with URL encoding and tried the following:\n\n```\nGET foo,%2Bfoo/_search\n\nresponse:\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"index_not_found_exception\",\n \"reason\": \"no such index\",\n \"resource.type\": \"index_expression\",\n \"resource.id\": \"+foo\",\n \"index_uuid\": \"_na_\",\n \"index\": \"+foo\"\n }\n ],\n \"type\": \"index_not_found_exception\",\n \"reason\": \"no such index\",\n \"resource.type\": \"index_expression\",\n \"resource.id\": \"+foo\",\n \"index_uuid\": \"_na_\",\n \"index\": \"+foo\"\n },\n \"status\": 404\n}\n```\n\nThis is the same error as with `GET foo,-foo/_search` so that confirmed my suspicion. Is this a bug with ES or kibana/sense? Got the same behavior when I tried this with curl and an ES 2.3 cluster.\n", "created_at": "2016-08-26T22:43:25Z" }, { "body": "Would the indices be resolved differently if the `+` was removed? If not what is the purpose of it?\n`GET foo,+foo/_search` vs `GET foo,foo/_search`\n`GET foo,+ba*/_search` vs `GET foo,ba*/_search`\n`GET +foo/_search` vs `GET foo/_search`\n`GET ba*,+f*,-foo/_search` vs `GET ba*,f*,-foo/_search`\n\nI tried removing all the `+`s from `WildcardExpressionResolverTests` and the tests still pass so I guess its not needed?\n", "created_at": "2016-08-26T22:51:39Z" }, { "body": "There still seems to be a problem here. When I run the following query it won't work. But if I switch the position of the indices it works. \r\nnon working: `GET /-foo,*/_search`\r\nworking: `GET /*,-foo/_search`\r\n\r\nIt seems the - sign is not interpreted correctly as in following example.\r\nNot working: `GET /-foo,+bar/_search`\r\nworking: `GET /-*foo,+bar/_search`\r\n\r\nThis is for version 5.0.2", "created_at": "2016-12-07T19:13:39Z" }, { "body": "Hi @gerits yes that is correct. It's a feature, not a bug, see #20898. The negation has an effect only if there is a preceding wildcard expression. The implicit negation that we had before led to unexpected results, especially when e.g. deleting indices.", "created_at": "2016-12-07T19:50:14Z" } ], "number": 3839, "title": "http://example.com/ba*,-bar,-baz/_search will search over index foo" }
{ "body": "Fix IndexNotFoundException when a multi-index search request has a concrete index followed by the addition or removal of a concrete index.\n\nThe code now properly adds/removes the index instead of throwing an exception.\n\nCloses #3839\n", "number": 20188, "review_comments": [], "title": "Fix IndexNotFoundException in multi-index search request." }
{ "commits": [ { "message": "Fix IndexNotFoundException if an multi index search request had a concrete index followed by an add/remove concrete index.\nThe code now properly adds/removes the index instead of throwing an exception.\n\nCloses #3839" } ], "files": [ { "diff": "@@ -607,23 +607,21 @@ private Set<String> innerResolve(Context context, List<String> expressions, Indi\n add = false;\n expression = expression.substring(1);\n }\n+ if (result == null) {\n+ // add all the previous ones...\n+ result = new HashSet<>(expressions.subList(0, i));\n+ }\n if (!Regex.isSimpleMatchPattern(expression)) {\n if (!unavailableIgnoredOrExists(options, metaData, expression)) {\n throw infe(expression);\n }\n- if (result != null) {\n- if (add) {\n- result.add(expression);\n- } else {\n- result.remove(expression);\n- }\n+ if (add) {\n+ result.add(expression);\n+ } else {\n+ result.remove(expression);\n }\n continue;\n }\n- if (result == null) {\n- // add all the previous ones...\n- result = new HashSet<>(expressions.subList(0, i));\n- }\n \n final IndexMetaData.State excludeState = excludeState(options);\n final Map<String, AliasOrIndex> matches = matches(metaData, expression);", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java", "status": "modified" }, { "diff": "@@ -49,6 +49,10 @@ public void testConvertWildcardsJustIndicesTests() {\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"testX*\", \"kuku\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\", \"kuku\")));\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"*\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\", \"testYYY\", \"kuku\")));\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"*\", \"-kuku\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\", \"testYYY\")));\n+ assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"testXXX\", \"+testYYY\"))), equalTo(newHashSet(\"testXXX\", \"testYYY\")));\n+ assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"testXXX\", \"-testXXX\"))).size(), equalTo(0));\n+ assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"testXXX\", \"+testY*\"))), equalTo(newHashSet(\"testXXX\", \"testYYY\")));\n+ assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"testXXX\", \"-testX*\"))).size(), equalTo(0));\n }\n \n public void testConvertWildcardsTests() {", "filename": "core/src/test/java/org/elasticsearch/cluster/metadata/WildcardExpressionResolverTests.java", "status": "modified" } ] }
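The fix in this diff seeds the running result set with everything seen so far before applying a `+`/`-` expression, so `testXXX,-testXXX` resolves to an empty set instead of a lookup of an index literally named `-testXXX`. The following is a simplified, dependency-free sketch of that resolution idea; wildcard expansion and the `IndicesOptions` checks are omitted, and this is not the real `IndexNameExpressionResolver`.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class IndexExpressionSketch {
    static Set<String> resolve(List<String> expressions) {
        Set<String> result = null;
        for (int i = 0; i < expressions.size(); i++) {
            String expression = expressions.get(i);
            boolean add = true;
            if (expression.startsWith("+") || expression.startsWith("-")) {
                add = expression.charAt(0) == '+';
                expression = expression.substring(1);
                if (result == null) {
                    // Seed with everything before the first +/- expression, so a
                    // leading concrete index is kept and can later be removed.
                    result = new LinkedHashSet<>(expressions.subList(0, i));
                }
            }
            if (result == null) {
                continue; // plain names only so far, nothing to merge yet
            }
            if (add) {
                result.add(expression);
            } else {
                result.remove(expression);
            }
        }
        return result == null ? new LinkedHashSet<>(expressions) : result;
    }

    public static void main(String[] args) {
        System.out.println(resolve(Arrays.asList("testXXX", "-testXXX"))); // []
        System.out.println(resolve(Arrays.asList("testXXX", "+testYYY"))); // [testXXX, testYYY]
        System.out.println(resolve(Arrays.asList("testXXX", "testYYY")));  // [testXXX, testYYY]
    }
}
```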
{ "body": "Starting elasticsearch as follows:\n\n```\nbin/elasticsearch -Escript.inline=false\n```\n\nshould disable all inline scripting, but it doesn't:\n\n```\nGET /_cluster/settings?include_defaults&filter_path=**.script.engine.*.inline\n```\n\nreturns:\n\n```\n{\n \"defaults\": {\n \"script\": {\n \"engine\": {\n \"painless\": {\n \"inline\": \"true\"\n },\n \"expression\": {\n \"inline\": \"true\"\n },\n \"groovy\": {\n \"inline\": \"false\"\n },\n \"mustache\": {\n \"inline\": \"true\"\n }\n }\n }\n }\n}\n```\n\nFirst reported in https://github.com/elastic/kibana/issues/6529#issuecomment-224675359\n", "comments": [ { "body": "update: the setting seems to work, it's just the default setting that is reported differently. This is curious as most node settings seem to change the default value, which seems to be different here\n", "created_at": "2016-08-25T12:30:30Z" } ], "number": 20159, "title": "Setting `script.inline: false` doesn't disable all inline scripting" }
{ "body": "Fixes an issue where the values for the `script.engine.<lang>.inline`\nsettings would be _set_ properly, but would not accurately be reflected\nin the `include_defaults` output. Adds a test to ensure the default raw\nsetting is now correct.\n\nResolves #20159\n", "number": 20183, "review_comments": [ { "body": "`// fine-grained e.g. script.engine.groovy.inline`?\n", "created_at": "2016-08-26T18:56:15Z" } ], "title": "Fix propagating the default value for script settings" }
{ "commits": [ { "message": "Fix propagating the default value for script settings\n\nFixes an issue where the value for the `script.engine.<lang>.inline`\nsettings would be _set_ properly, but would not accurately be reflected\nin the `include_defaults` output. Adds a test to ensure the default raw\nsetting is now correct.\n\nResolves #20159" } ], "files": [ { "diff": "@@ -97,9 +97,25 @@ private static List<Setting<Boolean>> languageSettings(Map<ScriptService.ScriptT\n }\n final boolean defaultIfNothingSet = defaultLangAndType;\n \n+ Function<Settings, String> defaultLangAndTypeFn = settings -> {\n+ final Setting<Boolean> globalTypeSetting = scriptTypeSettingMap.get(scriptType);\n+ final Setting<Boolean> langAndTypeSetting = Setting.boolSetting(ScriptModes.getGlobalKey(language, scriptType),\n+ defaultIfNothingSet, Property.NodeScope);\n+\n+ if (langAndTypeSetting.exists(settings)) {\n+ // fine-grained e.g. script.engine.groovy.inline\n+ return langAndTypeSetting.get(settings).toString();\n+ } else if (globalTypeSetting.exists(settings)) {\n+ // global type - script.inline\n+ return globalTypeSetting.get(settings).toString();\n+ } else {\n+ return Boolean.toString(defaultIfNothingSet);\n+ }\n+ };\n+\n // Setting for something like \"script.engine.groovy.inline\"\n final Setting<Boolean> langAndTypeSetting = Setting.boolSetting(ScriptModes.getGlobalKey(language, scriptType),\n- defaultLangAndType, Property.NodeScope);\n+ defaultLangAndTypeFn, Property.NodeScope);\n scriptModeSettings.add(langAndTypeSetting);\n \n for (ScriptContext scriptContext : scriptContextRegistry.scriptContexts()) {", "filename": "core/src/main/java/org/elasticsearch/script/ScriptSettings.java", "status": "modified" }, { "diff": "@@ -20,11 +20,13 @@\n package org.elasticsearch.script;\n \n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.search.lookup.SearchLookup;\n import org.elasticsearch.test.ESTestCase;\n \n import java.util.Collections;\n+import java.util.Iterator;\n import java.util.Map;\n \n import static org.hamcrest.Matchers.containsString;\n@@ -64,6 +66,22 @@ public void testInvalidDefaultLanguage() {\n }\n }\n \n+ public void testSettingsAreProperlyPropogated() {\n+ ScriptEngineRegistry scriptEngineRegistry =\n+ new ScriptEngineRegistry(Collections.singletonList(new CustomScriptEngineService()));\n+ ScriptContextRegistry scriptContextRegistry = new ScriptContextRegistry(Collections.emptyList());\n+ ScriptSettings scriptSettings = new ScriptSettings(scriptEngineRegistry, scriptContextRegistry);\n+ boolean enabled = randomBoolean();\n+ Settings s = Settings.builder().put(\"script.inline\", enabled).build();\n+ for (Iterator<Setting<Boolean>> iter = scriptSettings.getScriptLanguageSettings().iterator(); iter.hasNext();) {\n+ Setting<Boolean> setting = iter.next();\n+ if (setting.getKey().endsWith(\".inline\")) {\n+ assertThat(\"inline settings should have propagated\", setting.get(s), equalTo(enabled));\n+ assertThat(setting.getDefaultRaw(s), equalTo(Boolean.toString(enabled)));\n+ }\n+ }\n+ }\n+\n private static class CustomScriptEngineService implements ScriptEngineService {\n \n public static final String NAME = \"custom\";", "filename": "core/src/test/java/org/elasticsearch/script/ScriptSettingsTests.java", "status": "modified" } ] }
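The core of the fix above is that the reported default of each fine-grained `script.engine.<lang>.<type>` setting is now computed as a function of the other settings: the fine-grained key wins if present, otherwise the global `script.<type>` value, otherwise the hard-coded default. Below is a small standalone sketch of that layered-default lookup, using plain maps instead of Elasticsearch `Settings`; the helper and key names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class LayeredSettingSketch {
    static Function<Map<String, String>, String> defaultFor(String fineKey, String globalKey, boolean hardDefault) {
        return settings -> {
            if (settings.containsKey(fineKey)) {
                return settings.get(fineKey);   // e.g. script.engine.groovy.inline
            } else if (settings.containsKey(globalKey)) {
                return settings.get(globalKey); // e.g. script.inline
            }
            return Boolean.toString(hardDefault);
        };
    }

    public static void main(String[] args) {
        Map<String, String> settings = new HashMap<>();
        settings.put("script.inline", "false");
        Function<Map<String, String>, String> def =
                defaultFor("script.engine.painless.inline", "script.inline", true);
        // The reported default now mirrors the explicitly set global value.
        System.out.println(def.apply(settings));      // false
        System.out.println(def.apply(new HashMap<>())); // true
    }
}
```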
{ "body": "The `_search` API has replaced `fields` with `stored_fields` in the request, but the response still uses `fields`. The GET API still uses `fields` for both the request and the response.\n\nRelates #18943\n", "comments": [ { "body": "> but the response still uses fields\n\nThe fields in the response are the concatenation of `stored_fields`, `docvalue_fields` and `script_fields` so I think that the naming is correct.\n\n> The GET API still uses fields for both the request and the response.\n\nWe should change the request but not the response. It's also a concatenation of the other types of fields that can be requested.\n\nI've also noticed that the documentation is not correct:\n\n```\nFor backwards compatibility, if the fields parameter specifies fields which are not stored (`store` mapping set to\n`false`), it will load the `_source` and extract it from it. This functionality has been replaced by the\n<<search-request-source-filtering,source filtering>> parameter.\n```\n\nWe don't \"load the `_source` and extract it from it.\" anymore. I'll update the docs.\n", "created_at": "2016-08-25T09:41:41Z" }, { "body": "Changing the `GET` API to take `stored_fields` rather than `fields` sounds good to me!\n", "created_at": "2016-08-25T09:50:20Z" }, { "body": "Historically, `fields` was a way of getting back just the fields you were interested in. I'm thinking that we should do the same as we've done for search: remove `fields` and support `_source`, `stored_fields`, etc...\n", "created_at": "2016-08-25T11:06:49Z" }, { "body": "> I'm thinking that we should do the same as we've done for search: remove fields and support _source, stored_fields, etc...\n\n_source and source filtering are already supported. Though `docvalue_fields` is missing and I can rename `fields` to `stored_fields`.\n", "created_at": "2016-08-25T12:59:05Z" } ], "number": 20155, "title": "Naming inconsistency: field/stored_field" }
{ "body": "This change replaces the fields parameter with stored_fields when it makes sense.\nThis is dictated by the renaming we made in #18943 for the search API.\n\nThe following list of endpoint has been changed to use `stored_fields` instead of `fields`:\n- get\n- mget\n- explain\n\nThe documentation and the rest API spec has been updated to cope with the changes for the following APIs:\n- delete_by_query\n- get\n- mget\n- explain\n\nThe `fields` parameter has been deprecated for the following APIs:\n- update\n- bulk\n\nThese APIs now support `_source` as a parameter to filter the _source of the updated document to be returned. \n\nSome APIs still have the `fields` parameter for various reasons:\n- cat.fielddata: the fields paramaters relates to the fielddata fields that should be printed.\n- indices.clear_cache: used to indicate which fielddata fields should be cleared.\n- indices.get_field_mapping: used to filter fields in the mapping.\n- indices.stats: get stats on fields (stored or not stored).\n- termvectors: fields are retrieved from the stored fields if possible and extracted from the _source otherwise.\n- mtermvectors:\n- nodes.stats: the fields parameter is used to concatenate completion_fields and fielddata_fields so it's not related to stored_fields at all.\n\nFixes #20155\n", "number": 20166, "review_comments": [ { "body": "^ tag:green inconsistent with the example description which talks about tag:blue\n", "created_at": "2016-09-13T09:39:56Z" }, { "body": "Inconsistent with description in get.json - should be:\n\n```\nA comma-separated list of *stored* fields to return in the response\n```\n", "created_at": "2016-09-13T09:48:16Z" }, { "body": "description should say \"list of _stored_ fields\"\n", "created_at": "2016-09-13T09:48:56Z" }, { "body": "I changed the description to mention the green tag, I had to change the logic since we use the same doc for all CONSOLE snippet and it will break the next snippet if the document is removed.\n", "created_at": "2016-09-13T12:25:46Z" }, { "body": "I think this change is incorrect. `query_string` still takes `fields`.\n", "created_at": "2016-09-13T13:33:16Z" }, { "body": "I'd convert this to `// CONSOLE` while I was here, maybe even add an example of what is returned. It'd be nice to have the example while reading the docs and it'd really help to make sure that they are up to date.\n", "created_at": "2016-09-13T13:34:39Z" }, { "body": "Maybe just remove the whole paragraph because we already have the `=== Source filtering` section.\n", "created_at": "2016-09-13T13:36:32Z" }, { "body": "And the `=== Stored Fields` section.\n", "created_at": "2016-09-13T13:36:57Z" }, { "body": "I have no idea what this sentence means.\n", "created_at": "2016-09-13T13:38:03Z" }, { "body": "I'd likely rewrite this whole section to make it more clear what stored fields are. Maybe it makes more sense to just make a page about them and reference it here like I did with `refresh.asciidoc`. Maybe this should come in a separate PR? I dunno.\n", "created_at": "2016-09-13T13:39:08Z" }, { "body": "s/true/false/ I think. The paragraph talks about setting it to `false` and how the default is `true`.\n", "created_at": "2016-09-13T13:42:25Z" }, { "body": "I see the trouble. 
It might make more sense to move this `// TESTRESPONSE` snippet above the example of disabling `detect_noop`.\n", "created_at": "2016-09-13T13:44:15Z" }, { "body": "That way you can say \"by default updates that don't change anything detect that they don't change anything and return `\"result\": \"noop\"` like this:\n<TESTRESPONSE snippet here>\n\nYou can disable this behavior by setting `\"detect_noop\": false` like this:\n<CONSOLE snippet here>\n", "created_at": "2016-09-13T13:45:36Z" }, { "body": "Leftover?\n", "created_at": "2016-09-13T13:46:10Z" }, { "body": "\"List of fields to exclude from the returned _source field\"\n", "created_at": "2016-09-13T13:47:12Z" }, { "body": "Same change I think? Also, it'd be nice to say if exclude overrides include or the other way around.\n", "created_at": "2016-09-13T13:47:44Z" }, { "body": "Is there a way to mark this as deprecated?\n", "created_at": "2016-09-13T13:48:15Z" }, { "body": "Oh! It is the default list because it can be overridden on each sub-request, right? Maybe mention that....\n", "created_at": "2016-09-13T13:49:06Z" }, { "body": "I think maybe rename the file and the test case? You've changed it so now it is about `_source` instead of `fields`. Maybe add another test case for the filtering?\n", "created_at": "2016-09-13T13:50:32Z" }, { "body": "I'd rename the test case and the file to `stored_fields`?\n", "created_at": "2016-09-13T13:51:46Z" }, { "body": "I don't recall seeing in the docs that `stored_fields` is how you fetch these metadata fields. I probably just missed it, but can you be sure we mention it and have an example?\n", "created_at": "2016-09-13T13:52:50Z" }, { "body": "Rename test case and file?\n", "created_at": "2016-09-13T13:53:46Z" }, { "body": "Someone was very careful with the indentation of this!\n", "created_at": "2016-09-13T13:54:38Z" }, { "body": "Rename test case and file to source_filtering or something.\n", "created_at": "2016-09-13T13:55:22Z" }, { "body": "Can you indent this line once more so it doesn't look like part of the body of the `if` statement?\n", "created_at": "2016-09-13T13:58:00Z" }, { "body": "Thanks for doing this.\n", "created_at": "2016-09-13T13:59:32Z" }, { "body": "I think it'd be more accurate to call it \"unsupported\" rather the \"deprecated\" because it doesn't work any more.\n", "created_at": "2016-09-13T14:00:40Z" }, { "body": "I wonder if we can just move the source parsing into the rest handler or into a static method on the request - do we really need it as part of the java API?\n", "created_at": "2016-09-13T14:10:06Z" }, { "body": "These `Fields` objects have fallen out of favor. In this case I'd go with something like `SourceFieldMapper.NAME` instead.\n", "created_at": "2016-09-13T14:11:36Z" }, { "body": "`ParseFieldMatcher` is probably going to be removed. It doesn't really do its job any more so we're probably just going to pitch it. So don't feel bad about not passing it in and just using `ParseFieldMatcher.STRICT` at the call site.\n", "created_at": "2016-09-13T14:13:27Z" } ], "title": "Fixed naming inconsistency for fields/stored_fields in the APIs" }
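For the Java API, the diff in the next record replaces the old `fields(...)` setters with `storedFields(...)` on `GetRequest` (and `setStoredFields(...)` on the builders). The sketch below is a hedged usage example of the renamed setter: it only builds the request object, it needs the Elasticsearch core jar on the classpath, and it assumes the `GetRequest(String, String, String)` constructor available in that codebase; executing the request would additionally require a connected client.

```java
import java.util.Arrays;

import org.elasticsearch.action.get.GetRequest;

public class StoredFieldsRequestSketch {
    public static void main(String[] args) {
        // Replaces the removed fields(...) setter: ask for two stored fields
        // of document "1" in index "test".
        GetRequest request = new GetRequest("test", "type1", "1")
                .storedFields("field1", "field2");
        System.out.println(Arrays.toString(request.storedFields())); // [field1, field2]
    }
}
```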
{ "commits": [ { "message": "Fixed naming inconsistency for fields/stored_fields in the APIs\n\nThis change replaces the fields parameter with stored_fields when it makes sense.\nThis is dictated by the renaming we made in #18943 for the search API.\n\nThe following list of endpoint has been changed to use `stored_fields` instead of `fields`:\n* get\n* mget\n* explain\n\nThe documentation and the rest API spec has been updated to cope with the changes for the following APIs:\n* delete_by_query\n* get\n* mget\n* explain\n\nSome APIs still have the `fields` parameter for various reasons:\n\n* update: the fields are extracted from the _source directly.\n* bulk: the fields parameter is used but fields are extracted from the source directly so it is allowed to have non-stored fields.\n* cat.fielddata: the fields paramaters relates to the fielddata fields that should be printed.\n* indices.clear_cache: used to indicate which fielddata fields should be cleared.\n* indices.get_field_mapping: used to filter fields in the mapping.\n* indices.stats: get stats on fields (stored or not stored).\n* termvectors: fields are retrieved from the stored fields if possible and extracted from the _source otherwise.\n* mtermvectors:\n* nodes.stats: the fields parameter is used to concatenate completion_fields and fielddata_fields so it's not related to stored_fields at all.\n\nFixes #20155" }, { "message": "Deprecate `fields` in the update/bulk API and add support for _source filtering.\n\nfixes" }, { "message": "Throw exception when fields parameter is used on get, mget and explain request" }, { "message": "Fix docs and rest api spec" }, { "message": "Fix typos, docs and tests" }, { "message": "Add missing CONSOLE and fix snippets" }, { "message": "Filter _source only if includes or excludes contain fields to filter. 
Fixed docs" } ], "files": [ { "diff": "@@ -72,7 +72,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n }\n bulkRequest.timeout(request.paramAsTime(\"timeout\", BulkShardRequest.DEFAULT_TIMEOUT));\n bulkRequest.setRefreshPolicy(request.param(\"refresh\"));\n- bulkRequest.add(request.content(), defaultIndex, defaultType, defaultRouting, defaultFields, defaultPipeline, null, true);\n+ bulkRequest.add(request.content(), defaultIndex, defaultType, defaultRouting, defaultFields, null, defaultPipeline, null, true);\n \n // short circuit the call to the transport layer\n BulkRestBuilderListener listener = new BulkRestBuilderListener(channel, request);", "filename": "client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/RestNoopBulkAction.java", "status": "modified" }, { "diff": "@@ -293,7 +293,7 @@ public BulkProcessor add(BytesReference data, @Nullable String defaultIndex, @Nu\n }\n \n public synchronized BulkProcessor add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, @Nullable String defaultPipeline, @Nullable Object payload) throws Exception {\n- bulkRequest.add(data, defaultIndex, defaultType, null, null, defaultPipeline, payload, true);\n+ bulkRequest.add(data, defaultIndex, defaultType, null, null, null, defaultPipeline, payload, true);\n executeIfNeeded();\n return this;\n }", "filename": "core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java", "status": "modified" }, { "diff": "@@ -35,12 +35,15 @@\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.lucene.uid.Versions;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.XContent;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.VersionType;\n+import org.elasticsearch.search.fetch.subphase.FetchSourceContext;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -57,6 +60,8 @@\n * @see org.elasticsearch.client.Client#bulk(BulkRequest)\n */\n public class BulkRequest extends ActionRequest<BulkRequest> implements CompositeIndicesRequest, WriteRequest<BulkRequest> {\n+ private static final DeprecationLogger DEPRECATION_LOGGER =\n+ new DeprecationLogger(Loggers.getLogger(BulkRequest.class));\n \n private static final int REQUEST_OVERHEAD = 50;\n \n@@ -257,17 +262,17 @@ public BulkRequest add(byte[] data, int from, int length, @Nullable String defau\n * Adds a framed data in binary format\n */\n public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType) throws Exception {\n- return add(data, defaultIndex, defaultType, null, null, null, null, true);\n+ return add(data, defaultIndex, defaultType, null, null, null, null, null, true);\n }\n \n /**\n * Adds a framed data in binary format\n */\n public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, boolean allowExplicitIndex) throws Exception {\n- return add(data, defaultIndex, defaultType, null, null, null, null, allowExplicitIndex);\n+ return add(data, defaultIndex, defaultType, null, null, null, null, null, allowExplicitIndex);\n }\n \n- public BulkRequest 
add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, @Nullable String defaultRouting, @Nullable String[] defaultFields, @Nullable String defaultPipeline, @Nullable Object payload, boolean allowExplicitIndex) throws Exception {\n+ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, @Nullable String defaultRouting, @Nullable String[] defaultFields, @Nullable FetchSourceContext defaultFetchSourceContext, @Nullable String defaultPipeline, @Nullable Object payload, boolean allowExplicitIndex) throws Exception {\n XContent xContent = XContentFactory.xContent(data);\n int line = 0;\n int from = 0;\n@@ -301,6 +306,7 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null\n String id = null;\n String routing = defaultRouting;\n String parent = null;\n+ FetchSourceContext fetchSourceContext = defaultFetchSourceContext;\n String[] fields = defaultFields;\n String timestamp = null;\n TimeValue ttl = null;\n@@ -353,16 +359,21 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null\n pipeline = parser.text();\n } else if (\"fields\".equals(currentFieldName)) {\n throw new IllegalArgumentException(\"Action/metadata line [\" + line + \"] contains a simple value for parameter [fields] while a list is expected\");\n+ } else if (\"_source\".equals(currentFieldName)) {\n+ fetchSourceContext = FetchSourceContext.parse(parser);\n } else {\n throw new IllegalArgumentException(\"Action/metadata line [\" + line + \"] contains an unknown parameter [\" + currentFieldName + \"]\");\n }\n } else if (token == XContentParser.Token.START_ARRAY) {\n if (\"fields\".equals(currentFieldName)) {\n+ DEPRECATION_LOGGER.deprecated(\"Deprecated field [fields] used, expected [_source] instead\");\n List<Object> values = parser.list();\n fields = values.toArray(new String[values.size()]);\n } else {\n throw new IllegalArgumentException(\"Malformed action/metadata line [\" + line + \"], expected a simple value for field [\" + currentFieldName + \"] but found [\" + token + \"]\");\n }\n+ } else if (token == XContentParser.Token.START_OBJECT && \"_source\".equals(currentFieldName)) {\n+ fetchSourceContext = FetchSourceContext.parse(parser);\n } else if (token != XContentParser.Token.VALUE_NULL) {\n throw new IllegalArgumentException(\"Malformed action/metadata line [\" + line + \"], expected a simple value for field [\" + currentFieldName + \"] but found [\" + token + \"]\");\n }\n@@ -402,7 +413,10 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null\n .version(version).versionType(versionType)\n .routing(routing)\n .parent(parent)\n- .source(data.slice(from, nextMarker - from));\n+ .fromXContent(data.slice(from, nextMarker - from));\n+ if (fetchSourceContext != null) {\n+ updateRequest.fetchSource(fetchSourceContext);\n+ }\n if (fields != null) {\n updateRequest.fields(fields);\n }", "filename": "core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java", "status": "modified" }, { "diff": "@@ -251,7 +251,8 @@ private Tuple<Translog.Location, BulkItemRequest> update(IndexMetaData metaData,\n // add the response\n IndexResponse indexResponse = result.getResponse();\n UpdateResponse updateResponse = new UpdateResponse(indexResponse.getShardInfo(), indexResponse.getShardId(), indexResponse.getType(), indexResponse.getId(), indexResponse.getVersion(), indexResponse.getResult());\n- if (updateRequest.fields() != null && updateRequest.fields().length > 0) {\n+ if 
((updateRequest.fetchSource() != null && updateRequest.fetchSource().fetchSource()) ||\n+ (updateRequest.fields() != null && updateRequest.fields().length > 0)) {\n Tuple<XContentType, Map<String, Object>> sourceAndContent = XContentHelper.convertToMap(indexSourceAsBytes, true);\n updateResponse.setGetResult(updateHelper.extractGetResult(updateRequest, request.index(), indexResponse.getVersion(), sourceAndContent.v2(), sourceAndContent.v1(), indexSourceAsBytes));\n }", "filename": "core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java", "status": "modified" }, { "diff": "@@ -40,7 +40,7 @@ public class ExplainRequest extends SingleShardRequest<ExplainRequest> {\n private String routing;\n private String preference;\n private QueryBuilder query;\n- private String[] fields;\n+ private String[] storedFields;\n private FetchSourceContext fetchSourceContext;\n \n private String[] filteringAlias = Strings.EMPTY_ARRAY;\n@@ -122,12 +122,12 @@ public FetchSourceContext fetchSourceContext() {\n }\n \n \n- public String[] fields() {\n- return fields;\n+ public String[] storedFields() {\n+ return storedFields;\n }\n \n- public ExplainRequest fields(String[] fields) {\n- this.fields = fields;\n+ public ExplainRequest storedFields(String[] fields) {\n+ this.storedFields = fields;\n return this;\n }\n \n@@ -167,8 +167,8 @@ public void readFrom(StreamInput in) throws IOException {\n preference = in.readOptionalString();\n query = in.readNamedWriteable(QueryBuilder.class);\n filteringAlias = in.readStringArray();\n- fields = in.readOptionalStringArray();\n- fetchSourceContext = in.readOptionalStreamable(FetchSourceContext::new);\n+ storedFields = in.readOptionalStringArray();\n+ fetchSourceContext = in.readOptionalWriteable(FetchSourceContext::new);\n nowInMillis = in.readVLong();\n }\n \n@@ -181,8 +181,8 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeOptionalString(preference);\n out.writeNamedWriteable(query);\n out.writeStringArray(filteringAlias);\n- out.writeOptionalStringArray(fields);\n- out.writeOptionalStreamable(fetchSourceContext);\n+ out.writeOptionalStringArray(storedFields);\n+ out.writeOptionalWriteable(fetchSourceContext);\n out.writeVLong(nowInMillis);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/explain/ExplainRequest.java", "status": "modified" }, { "diff": "@@ -88,10 +88,10 @@ public ExplainRequestBuilder setQuery(QueryBuilder query) {\n }\n \n /**\n- * Explicitly specify the fields that will be returned for the explained document. By default, nothing is returned.\n+ * Explicitly specify the stored fields that will be returned for the explained document. By default, nothing is returned.\n */\n- public ExplainRequestBuilder setFields(String... fields) {\n- request.fields(fields);\n+ public ExplainRequestBuilder setStoredFields(String... 
fields) {\n+ request.storedFields(fields);\n return this;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/explain/ExplainRequestBuilder.java", "status": "modified" }, { "diff": "@@ -106,12 +106,11 @@ protected ExplainResponse shardOperation(ExplainRequest request, ShardId shardId\n Rescorer rescorer = ctx.rescorer();\n explanation = rescorer.explain(topLevelDocId, context, ctx, explanation);\n }\n- if (request.fields() != null || (request.fetchSourceContext() != null && request.fetchSourceContext().fetchSource())) {\n+ if (request.storedFields() != null || (request.fetchSourceContext() != null && request.fetchSourceContext().fetchSource())) {\n // Advantage is that we're not opening a second searcher to retrieve the _source. Also\n // because we are working in the same searcher in engineGetResult we can be sure that a\n // doc isn't deleted between the initial get and this call.\n- GetResult getResult = context.indexShard().getService().get(result, request.id(), request.type(), request.fields(),\n- request.fetchSourceContext());\n+ GetResult getResult = context.indexShard().getService().get(result, request.id(), request.type(), request.storedFields(), request.fetchSourceContext());\n return new ExplainResponse(shardId.getIndexName(), request.type(), request.id(), true, explanation, getResult);\n } else {\n return new ExplainResponse(shardId.getIndexName(), request.type(), request.id(), true, explanation);", "filename": "core/src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java", "status": "modified" }, { "diff": "@@ -51,7 +51,7 @@ public class GetRequest extends SingleShardRequest<GetRequest> implements Realti\n private String parent;\n private String preference;\n \n- private String[] fields;\n+ private String[] storedFields;\n \n private FetchSourceContext fetchSourceContext;\n \n@@ -186,20 +186,20 @@ public FetchSourceContext fetchSourceContext() {\n }\n \n /**\n- * Explicitly specify the fields that will be returned. By default, the <tt>_source</tt>\n+ * Explicitly specify the stored fields that will be returned. By default, the <tt>_source</tt>\n * field will be returned.\n */\n- public GetRequest fields(String... fields) {\n- this.fields = fields;\n+ public GetRequest storedFields(String... fields) {\n+ this.storedFields = fields;\n return this;\n }\n \n /**\n- * Explicitly specify the fields that will be returned. By default, the <tt>_source</tt>\n+ * Explicitly specify the stored fields that will be returned. 
By default, the <tt>_source</tt>\n * field will be returned.\n */\n- public String[] fields() {\n- return this.fields;\n+ public String[] storedFields() {\n+ return this.storedFields;\n }\n \n /**\n@@ -260,18 +260,12 @@ public void readFrom(StreamInput in) throws IOException {\n parent = in.readOptionalString();\n preference = in.readOptionalString();\n refresh = in.readBoolean();\n- int size = in.readInt();\n- if (size >= 0) {\n- fields = new String[size];\n- for (int i = 0; i < size; i++) {\n- fields[i] = in.readString();\n- }\n- }\n+ storedFields = in.readOptionalStringArray();\n realtime = in.readBoolean();\n \n this.versionType = VersionType.fromValue(in.readByte());\n this.version = in.readLong();\n- fetchSourceContext = in.readOptionalStreamable(FetchSourceContext::new);\n+ fetchSourceContext = in.readOptionalWriteable(FetchSourceContext::new);\n }\n \n @Override\n@@ -284,18 +278,11 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeOptionalString(preference);\n \n out.writeBoolean(refresh);\n- if (fields == null) {\n- out.writeInt(-1);\n- } else {\n- out.writeInt(fields.length);\n- for (String field : fields) {\n- out.writeString(field);\n- }\n- }\n+ out.writeOptionalStringArray(storedFields);\n out.writeBoolean(realtime);\n out.writeByte(versionType.getValue());\n out.writeLong(version);\n- out.writeOptionalStreamable(fetchSourceContext);\n+ out.writeOptionalWriteable(fetchSourceContext);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/get/GetRequest.java", "status": "modified" }, { "diff": "@@ -88,8 +88,8 @@ public GetRequestBuilder setPreference(String preference) {\n * Explicitly specify the fields that will be returned. By default, the <tt>_source</tt>\n * field will be returned.\n */\n- public GetRequestBuilder setFields(String... fields) {\n- request.fields(fields);\n+ public GetRequestBuilder setStoredFields(String... 
fields) {\n+ request.storedFields(fields);\n return this;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/get/GetRequestBuilder.java", "status": "modified" }, { "diff": "@@ -134,14 +134,26 @@ public Map<String, Object> getSource() {\n return getResult.getSource();\n }\n \n+ /**\n+ * @deprecated Use {@link GetResponse#getSource()} instead\n+ */\n+ @Deprecated\n public Map<String, GetField> getFields() {\n return getResult.getFields();\n }\n \n+ /**\n+ * @deprecated Use {@link GetResponse#getSource()} instead\n+ */\n+ @Deprecated\n public GetField getField(String name) {\n return getResult.field(name);\n }\n \n+ /**\n+ * @deprecated Use {@link GetResponse#getSource()} instead\n+ */\n+ @Deprecated\n @Override\n public Iterator<GetField> iterator() {\n return getResult.iterator();", "filename": "core/src/main/java/org/elasticsearch/action/get/GetResponse.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.action.ValidateActions;\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n@@ -58,7 +59,7 @@ public static class Item implements Streamable, IndicesRequest {\n private String id;\n private String routing;\n private String parent;\n- private String[] fields;\n+ private String[] storedFields;\n private long version = Versions.MATCH_ANY;\n private VersionType versionType = VersionType.INTERNAL;\n private FetchSourceContext fetchSourceContext;\n@@ -136,13 +137,13 @@ public String parent() {\n return parent;\n }\n \n- public Item fields(String... fields) {\n- this.fields = fields;\n+ public Item storedFields(String... fields) {\n+ this.storedFields = fields;\n return this;\n }\n \n- public String[] fields() {\n- return this.fields;\n+ public String[] storedFields() {\n+ return this.storedFields;\n }\n \n public long version() {\n@@ -188,17 +189,11 @@ public void readFrom(StreamInput in) throws IOException {\n id = in.readString();\n routing = in.readOptionalString();\n parent = in.readOptionalString();\n- int size = in.readVInt();\n- if (size > 0) {\n- fields = new String[size];\n- for (int i = 0; i < size; i++) {\n- fields[i] = in.readString();\n- }\n- }\n+ storedFields = in.readOptionalStringArray();\n version = in.readLong();\n versionType = VersionType.fromValue(in.readByte());\n \n- fetchSourceContext = in.readOptionalStreamable(FetchSourceContext::new);\n+ fetchSourceContext = in.readOptionalWriteable(FetchSourceContext::new);\n }\n \n @Override\n@@ -208,19 +203,11 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeString(id);\n out.writeOptionalString(routing);\n out.writeOptionalString(parent);\n- if (fields == null) {\n- out.writeVInt(0);\n- } else {\n- out.writeVInt(fields.length);\n- for (String field : fields) {\n- out.writeString(field);\n- }\n- }\n-\n+ out.writeOptionalStringArray(storedFields);\n out.writeLong(version);\n out.writeByte(versionType.getValue());\n \n- out.writeOptionalStreamable(fetchSourceContext);\n+ out.writeOptionalWriteable(fetchSourceContext);\n }\n \n @Override\n@@ -233,7 +220,7 @@ public boolean equals(Object o) {\n if (version != item.version) return false;\n if (fetchSourceContext != null ? 
!fetchSourceContext.equals(item.fetchSourceContext) : item.fetchSourceContext != null)\n return false;\n- if (!Arrays.equals(fields, item.fields)) return false;\n+ if (!Arrays.equals(storedFields, item.storedFields)) return false;\n if (!id.equals(item.id)) return false;\n if (!index.equals(item.index)) return false;\n if (routing != null ? !routing.equals(item.routing) : item.routing != null) return false;\n@@ -251,7 +238,7 @@ public int hashCode() {\n result = 31 * result + id.hashCode();\n result = 31 * result + (routing != null ? routing.hashCode() : 0);\n result = 31 * result + (parent != null ? parent.hashCode() : 0);\n- result = 31 * result + (fields != null ? Arrays.hashCode(fields) : 0);\n+ result = 31 * result + (storedFields != null ? Arrays.hashCode(storedFields) : 0);\n result = 31 * result + Long.hashCode(version);\n result = 31 * result + versionType.hashCode();\n result = 31 * result + (fetchSourceContext != null ? fetchSourceContext.hashCode() : 0);\n@@ -379,7 +366,7 @@ public static void parseDocuments(XContentParser parser, List<Item> items, @Null\n String id = null;\n String routing = defaultRouting;\n String parent = null;\n- List<String> fields = null;\n+ List<String> storedFields = null;\n long version = Versions.MATCH_ANY;\n VersionType versionType = VersionType.INTERNAL;\n \n@@ -403,8 +390,11 @@ public static void parseDocuments(XContentParser parser, List<Item> items, @Null\n } else if (\"_parent\".equals(currentFieldName) || \"parent\".equals(currentFieldName)) {\n parent = parser.text();\n } else if (\"fields\".equals(currentFieldName)) {\n- fields = new ArrayList<>();\n- fields.add(parser.text());\n+ throw new ParsingException(parser.getTokenLocation(),\n+ \"Unsupported field [fields] used, expected [stored_fields] instead\");\n+ } else if (\"stored_fields\".equals(currentFieldName)) {\n+ storedFields = new ArrayList<>();\n+ storedFields.add(parser.text());\n } else if (\"_version\".equals(currentFieldName) || \"version\".equals(currentFieldName)) {\n version = parser.longValue();\n } else if (\"_version_type\".equals(currentFieldName) || \"_versionType\".equals(currentFieldName) || \"version_type\".equals(currentFieldName) || \"versionType\".equals(currentFieldName)) {\n@@ -420,9 +410,12 @@ public static void parseDocuments(XContentParser parser, List<Item> items, @Null\n }\n } else if (token == XContentParser.Token.START_ARRAY) {\n if (\"fields\".equals(currentFieldName)) {\n- fields = new ArrayList<>();\n+ throw new ParsingException(parser.getTokenLocation(),\n+ \"Unsupported field [fields] used, expected [stored_fields] instead\");\n+ } else if (\"stored_fields\".equals(currentFieldName)) {\n+ storedFields = new ArrayList<>();\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n- fields.add(parser.text());\n+ storedFields.add(parser.text());\n }\n } else if (\"_source\".equals(currentFieldName)) {\n ArrayList<String> includes = new ArrayList<>();\n@@ -464,12 +457,12 @@ public static void parseDocuments(XContentParser parser, List<Item> items, @Null\n }\n }\n String[] aFields;\n- if (fields != null) {\n- aFields = fields.toArray(new String[fields.size()]);\n+ if (storedFields != null) {\n+ aFields = storedFields.toArray(new String[storedFields.size()]);\n } else {\n aFields = defaultFields;\n }\n- items.add(new Item(index, type, id).routing(routing).fields(aFields).parent(parent).version(version).versionType(versionType)\n+ items.add(new Item(index, type, 
id).routing(routing).storedFields(aFields).parent(parent).version(version).versionType(versionType)\n .fetchSourceContext(fetchSourceContext == null ? defaultFetchSource : fetchSourceContext));\n }\n }\n@@ -484,7 +477,7 @@ public static void parseIds(XContentParser parser, List<Item> items, @Nullable S\n if (!token.isValue()) {\n throw new IllegalArgumentException(\"ids array element should only contain ids\");\n }\n- items.add(new Item(defaultIndex, defaultType, parser.text()).fields(defaultFields).fetchSourceContext(defaultFetchSource).routing(defaultRouting));\n+ items.add(new Item(defaultIndex, defaultType, parser.text()).storedFields(defaultFields).fetchSourceContext(defaultFetchSource).routing(defaultRouting));\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/get/MultiGetRequest.java", "status": "modified" }, { "diff": "@@ -92,7 +92,7 @@ protected GetResponse shardOperation(GetRequest request, ShardId shardId) {\n indexShard.refresh(\"refresh_flag_get\");\n }\n \n- GetResult result = indexShard.getService().get(request.type(), request.id(), request.fields(),\n+ GetResult result = indexShard.getService().get(request.type(), request.id(), request.storedFields(),\n request.realtime(), request.version(), request.versionType(), request.fetchSourceContext());\n return new GetResponse(result);\n }", "filename": "core/src/main/java/org/elasticsearch/action/get/TransportGetAction.java", "status": "modified" }, { "diff": "@@ -88,7 +88,7 @@ protected MultiGetShardResponse shardOperation(MultiGetShardRequest request, Sha\n for (int i = 0; i < request.locations.size(); i++) {\n MultiGetRequest.Item item = request.items.get(i);\n try {\n- GetResult getResult = indexShard.getService().get(item.type(), item.id(), item.fields(), request.realtime(), item.version(),\n+ GetResult getResult = indexShard.getService().get(item.type(), item.id(), item.storedFields(), request.realtime(), item.version(),\n item.versionType(), item.fetchSourceContext());\n response.add(request.locations.get(i), new GetResponse(getResult));\n } catch (Exception e) {", "filename": "core/src/main/java/org/elasticsearch/action/get/TransportShardMultiGetAction.java", "status": "modified" }, { "diff": "@@ -180,7 +180,7 @@ public TermVectorsRequest(MultiGetRequest.Item item) {\n super(item.index());\n this.id = item.id();\n this.type = item.type();\n- this.selectedFields(item.fields());\n+ this.selectedFields(item.storedFields());\n this.routing(item.routing());\n this.parent(item.parent());\n }", "filename": "core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequest.java", "status": "modified" }, { "diff": "@@ -186,7 +186,8 @@ protected void shardOperation(final UpdateRequest request, final ActionListener<\n @Override\n public void onResponse(IndexResponse response) {\n UpdateResponse update = new UpdateResponse(response.getShardInfo(), response.getShardId(), response.getType(), response.getId(), response.getVersion(), response.getResult());\n- if (request.fields() != null && request.fields().length > 0) {\n+ if ((request.fetchSource() != null && request.fetchSource().fetchSource()) ||\n+ (request.fields() != null && request.fields().length > 0)) {\n Tuple<XContentType, Map<String, Object>> sourceAndContent = XContentHelper.convertToMap(upsertSourceBytes, true);\n update.setGetResult(updateHelper.extractGetResult(request, request.concreteIndex(), response.getVersion(), sourceAndContent.v2(), sourceAndContent.v1(), upsertSourceBytes));\n } else {", "filename": 
"core/src/main/java/org/elasticsearch/action/update/TransportUpdateAction.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.update;\n \n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.DocWriteResponse;\n import org.elasticsearch.action.delete.DeleteRequest;\n import org.elasticsearch.action.index.IndexRequest;\n@@ -28,9 +29,11 @@\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.index.VersionType;\n@@ -51,6 +54,7 @@\n import org.elasticsearch.search.fetch.subphase.FetchSourceContext;\n import org.elasticsearch.search.lookup.SourceLookup;\n \n+import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Collections;\n import java.util.HashMap;\n@@ -267,17 +271,19 @@ private TimeValue getTTLFromScriptContext(Map<String, Object> ctx) {\n }\n \n /**\n- * Extracts the fields from the updated document to be returned in a update response\n+ * Applies {@link UpdateRequest#fetchSource()} to the _source of the updated document to be returned in a update response.\n+ * For BWC this function also extracts the {@link UpdateRequest#fields()} from the updated document to be returned in a update response\n */\n public GetResult extractGetResult(final UpdateRequest request, String concreteIndex, long version, final Map<String, Object> source, XContentType sourceContentType, @Nullable final BytesReference sourceAsBytes) {\n- if (request.fields() == null || request.fields().length == 0) {\n+ if ((request.fields() == null || request.fields().length == 0) &&\n+ (request.fetchSource() == null || request.fetchSource().fetchSource() == false)) {\n return null;\n }\n+ SourceLookup sourceLookup = new SourceLookup();\n+ sourceLookup.setSource(source);\n boolean sourceRequested = false;\n Map<String, GetField> fields = null;\n if (request.fields() != null && request.fields().length > 0) {\n- SourceLookup sourceLookup = new SourceLookup();\n- sourceLookup.setSource(source);\n for (String field : request.fields()) {\n if (field.equals(\"_source\")) {\n sourceRequested = true;\n@@ -298,8 +304,26 @@ public GetResult extractGetResult(final UpdateRequest request, String concreteIn\n }\n }\n \n+ BytesReference sourceFilteredAsBytes = sourceAsBytes;\n+ if (request.fetchSource() != null && request.fetchSource().fetchSource()) {\n+ sourceRequested = true;\n+ if (request.fetchSource().includes().length > 0 || request.fetchSource().excludes().length > 0) {\n+ Object value = sourceLookup.filter(request.fetchSource().includes(), request.fetchSource().excludes());\n+ try {\n+ final int initialCapacity = Math.min(1024, sourceAsBytes.length());\n+ BytesStreamOutput streamOutput = new BytesStreamOutput(initialCapacity);\n+ try (XContentBuilder builder = new XContentBuilder(sourceContentType.xContent(), streamOutput)) {\n+ builder.value(value);\n+ sourceFilteredAsBytes = builder.bytes();\n+ }\n+ } catch (IOException e) {\n+ throw new ElasticsearchException(\"Error filtering source\", e);\n+ }\n+ }\n+ }\n+\n // 
TODO when using delete/none, we can still return the source as bytes by generating it (using the sourceContentType)\n- return new GetResult(concreteIndex, request.type(), request.id(), version, true, sourceRequested ? sourceAsBytes : null, fields);\n+ return new GetResult(concreteIndex, request.type(), request.id(), version, true, sourceRequested ? sourceFilteredAsBytes : null, fields);\n }\n \n public static class Result {", "filename": "core/src/main/java/org/elasticsearch/action/update/UpdateHelper.java", "status": "modified" }, { "diff": "@@ -32,6 +32,8 @@\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.lucene.uid.Versions;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -42,6 +44,7 @@\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.script.ScriptService.ScriptType;\n+import org.elasticsearch.search.fetch.subphase.FetchSourceContext;\n \n import java.io.IOException;\n import java.util.Collections;\n@@ -55,6 +58,8 @@\n */\n public class UpdateRequest extends InstanceShardOperationRequest<UpdateRequest>\n implements DocumentRequest<UpdateRequest>, WriteRequest<UpdateRequest> {\n+ private static final DeprecationLogger DEPRECATION_LOGGER =\n+ new DeprecationLogger(Loggers.getLogger(UpdateRequest.class));\n \n private String type;\n private String id;\n@@ -68,6 +73,7 @@ public class UpdateRequest extends InstanceShardOperationRequest<UpdateRequest>\n Script script;\n \n private String[] fields;\n+ private FetchSourceContext fetchSourceContext;\n \n private long version = Versions.MATCH_ANY;\n private VersionType versionType = VersionType.INTERNAL;\n@@ -373,17 +379,80 @@ public UpdateRequest script(String script, @Nullable String scriptLang, ScriptSe\n \n /**\n * Explicitly specify the fields that will be returned. By default, nothing is returned.\n+ * @deprecated Use {@link UpdateRequest#fetchSource(String[], String[])} instead\n */\n+ @Deprecated\n public UpdateRequest fields(String... 
fields) {\n this.fields = fields;\n return this;\n }\n \n+ /**\n+ * Indicate that _source should be returned with every hit, with an\n+ * \"include\" and/or \"exclude\" set which can include simple wildcard\n+ * elements.\n+ *\n+ * @param include\n+ * An optional include (optionally wildcarded) pattern to filter\n+ * the returned _source\n+ * @param exclude\n+ * An optional exclude (optionally wildcarded) pattern to filter\n+ * the returned _source\n+ */\n+ public UpdateRequest fetchSource(@Nullable String include, @Nullable String exclude) {\n+ this.fetchSourceContext = new FetchSourceContext(include, exclude);\n+ return this;\n+ }\n+\n+ /**\n+ * Indicate that _source should be returned, with an\n+ * \"include\" and/or \"exclude\" set which can include simple wildcard\n+ * elements.\n+ *\n+ * @param includes\n+ * An optional list of include (optionally wildcarded) pattern to\n+ * filter the returned _source\n+ * @param excludes\n+ * An optional list of exclude (optionally wildcarded) pattern to\n+ * filter the returned _source\n+ */\n+ public UpdateRequest fetchSource(@Nullable String[] includes, @Nullable String[] excludes) {\n+ this.fetchSourceContext = new FetchSourceContext(includes, excludes);\n+ return this;\n+ }\n+\n+ /**\n+ * Indicates whether the response should contain the updated _source.\n+ */\n+ public UpdateRequest fetchSource(boolean fetchSource) {\n+ this.fetchSourceContext = new FetchSourceContext(fetchSource);\n+ return this;\n+ }\n+\n+ /**\n+ * Explicitely set the fetch source context for this request\n+ */\n+ public UpdateRequest fetchSource(FetchSourceContext context) {\n+ this.fetchSourceContext = context;\n+ return this;\n+ }\n+\n+\n /**\n * Get the fields to be returned.\n+ * @deprecated Use {@link UpdateRequest#fetchSource()} instead\n */\n+ @Deprecated\n public String[] fields() {\n- return this.fields;\n+ return fields;\n+ }\n+\n+ /**\n+ * Gets the {@link FetchSourceContext} which defines how the _source should\n+ * be fetched.\n+ */\n+ public FetchSourceContext fetchSource() {\n+ return fetchSourceContext;\n }\n \n /**\n@@ -618,16 +687,16 @@ private IndexRequest safeUpsertRequest() {\n return upsertRequest;\n }\n \n- public UpdateRequest source(XContentBuilder source) throws Exception {\n- return source(source.bytes());\n+ public UpdateRequest fromXContent(XContentBuilder source) throws Exception {\n+ return fromXContent(source.bytes());\n }\n \n- public UpdateRequest source(byte[] source) throws Exception {\n- return source(source, 0, source.length);\n+ public UpdateRequest fromXContent(byte[] source) throws Exception {\n+ return fromXContent(source, 0, source.length);\n }\n \n- public UpdateRequest source(byte[] source, int offset, int length) throws Exception {\n- return source(new BytesArray(source, offset, length));\n+ public UpdateRequest fromXContent(byte[] source, int offset, int length) throws Exception {\n+ return fromXContent(new BytesArray(source, offset, length));\n }\n \n /**\n@@ -646,7 +715,7 @@ public boolean detectNoop() {\n return detectNoop;\n }\n \n- public UpdateRequest source(BytesReference source) throws Exception {\n+ public UpdateRequest fromXContent(BytesReference source) throws Exception {\n Script script = null;\n try (XContentParser parser = XContentFactory.xContent(source).createParser(source)) {\n XContentParser.Token token = parser.nextToken();\n@@ -685,6 +754,8 @@ public UpdateRequest source(BytesReference source) throws Exception {\n if (fields != null) {\n fields(fields.toArray(new String[fields.size()]));\n }\n+ } else if 
(\"_source\".equals(currentFieldName)) {\n+ fetchSourceContext = FetchSourceContext.parse(parser);\n }\n }\n if (script != null) {\n@@ -729,13 +800,8 @@ public void readFrom(StreamInput in) throws IOException {\n doc = new IndexRequest();\n doc.readFrom(in);\n }\n- int size = in.readInt();\n- if (size >= 0) {\n- fields = new String[size];\n- for (int i = 0; i < size; i++) {\n- fields[i] = in.readString();\n- }\n- }\n+ fields = in.readOptionalStringArray();\n+ fetchSourceContext = in.readOptionalWriteable(FetchSourceContext::new);\n if (in.readBoolean()) {\n upsertRequest = new IndexRequest();\n upsertRequest.readFrom(in);\n@@ -772,14 +838,8 @@ public void writeTo(StreamOutput out) throws IOException {\n doc.id(id);\n doc.writeTo(out);\n }\n- if (fields == null) {\n- out.writeInt(-1);\n- } else {\n- out.writeInt(fields.length);\n- for (String field : fields) {\n- out.writeString(field);\n- }\n- }\n+ out.writeOptionalStringArray(fields);\n+ out.writeOptionalWriteable(fetchSourceContext);\n if (upsertRequest == null) {\n out.writeBoolean(false);\n } else {", "filename": "core/src/main/java/org/elasticsearch/action/update/UpdateRequest.java", "status": "modified" }, { "diff": "@@ -25,17 +25,22 @@\n import org.elasticsearch.action.support.replication.ReplicationRequest;\n import org.elasticsearch.action.support.single.instance.InstanceShardOperationRequestBuilder;\n import org.elasticsearch.client.ElasticsearchClient;\n-import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.index.VersionType;\n+import org.elasticsearch.rest.action.document.RestUpdateAction;\n import org.elasticsearch.script.Script;\n \n import java.util.Map;\n \n public class UpdateRequestBuilder extends InstanceShardOperationRequestBuilder<UpdateRequest, UpdateResponse, UpdateRequestBuilder>\n implements WriteRequestBuilder<UpdateRequestBuilder> {\n+ private static final DeprecationLogger DEPRECATION_LOGGER =\n+ new DeprecationLogger(Loggers.getLogger(RestUpdateAction.class));\n \n public UpdateRequestBuilder(ElasticsearchClient client, UpdateAction action) {\n super(client, action, new UpdateRequest());\n@@ -90,12 +95,57 @@ public UpdateRequestBuilder setScript(Script script) {\n \n /**\n * Explicitly specify the fields that will be returned. By default, nothing is returned.\n+ * @deprecated Use {@link UpdateRequestBuilder#setFetchSource(String[], String[])} instead\n */\n+ @Deprecated\n public UpdateRequestBuilder setFields(String... 
fields) {\n+ DEPRECATION_LOGGER.deprecated(\"Deprecated field [fields] used, expected [_source] instead\");\n request.fields(fields);\n return this;\n }\n \n+ /**\n+ * Indicate that _source should be returned with every hit, with an\n+ * \"include\" and/or \"exclude\" set which can include simple wildcard\n+ * elements.\n+ *\n+ * @param include\n+ * An optional include (optionally wildcarded) pattern to filter\n+ * the returned _source\n+ * @param exclude\n+ * An optional exclude (optionally wildcarded) pattern to filter\n+ * the returned _source\n+ */\n+ public UpdateRequestBuilder setFetchSource(@Nullable String include, @Nullable String exclude) {\n+ request.fetchSource(include, exclude);\n+ return this;\n+ }\n+\n+ /**\n+ * Indicate that _source should be returned, with an\n+ * \"include\" and/or \"exclude\" set which can include simple wildcard\n+ * elements.\n+ *\n+ * @param includes\n+ * An optional list of include (optionally wildcarded) pattern to\n+ * filter the returned _source\n+ * @param excludes\n+ * An optional list of exclude (optionally wildcarded) pattern to\n+ * filter the returned _source\n+ */\n+ public UpdateRequestBuilder setFetchSource(@Nullable String[] includes, @Nullable String[] excludes) {\n+ request.fetchSource(includes, excludes);\n+ return this;\n+ }\n+\n+ /**\n+ * Indicates whether the response should contain the updated _source.\n+ */\n+ public UpdateRequestBuilder setFetchSource(boolean fetchSource) {\n+ request.fetchSource(fetchSource);\n+ return this;\n+ }\n+\n /**\n * Sets the number of retries of a version conflict occurs because the document was updated between\n * getting it and updating it. Defaults to 0.\n@@ -279,26 +329,6 @@ public UpdateRequestBuilder setUpsert(Object... source) {\n return this;\n }\n \n- public UpdateRequestBuilder setSource(XContentBuilder source) throws Exception {\n- request.source(source);\n- return this;\n- }\n-\n- public UpdateRequestBuilder setSource(byte[] source) throws Exception {\n- request.source(source);\n- return this;\n- }\n-\n- public UpdateRequestBuilder setSource(byte[] source, int offset, int length) throws Exception {\n- request.source(source, offset, length);\n- return this;\n- }\n-\n- public UpdateRequestBuilder setSource(BytesReference source) throws Exception {\n- request.source(source);\n- return this;\n- }\n-\n /**\n * Sets whether the specified doc parameter should be used as upsert document.\n */", "filename": "core/src/main/java/org/elasticsearch/action/update/UpdateRequestBuilder.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentHelper;\n+import org.elasticsearch.index.mapper.SourceFieldMapper;\n import org.elasticsearch.search.lookup.SourceLookup;\n \n import java.io.IOException;\n@@ -229,7 +230,7 @@ public XContentBuilder toXContentEmbedded(XContentBuilder builder, Params params\n builder.field(Fields.FOUND, exists);\n \n if (source != null) {\n- XContentHelper.writeRawField(\"_source\", source, builder, params);\n+ XContentHelper.writeRawField(SourceFieldMapper.NAME, source, builder, params);\n }\n \n if (!otherFields.isEmpty()) {", "filename": "core/src/main/java/org/elasticsearch/index/get/GetResult.java", "status": "modified" }, { "diff": "@@ -94,7 +94,7 @@ public final class InnerHitBuilder extends ToXContentToBytes implements Writeabl\n ObjectParser.ValueType.OBJECT_ARRAY);\n PARSER.declareField((p, i, c) -> {\n try 
{\n- i.setFetchSourceContext(FetchSourceContext.parse(c));\n+ i.setFetchSourceContext(FetchSourceContext.parse(c.parser()));\n } catch (IOException e) {\n throw new ParsingException(p.getTokenLocation(), \"Could not parse inner _source definition\", e);\n }\n@@ -219,7 +219,7 @@ public InnerHitBuilder(StreamInput in) throws IOException {\n scriptFields.add(new ScriptField(in));\n }\n }\n- fetchSourceContext = in.readOptionalStreamable(FetchSourceContext::new);\n+ fetchSourceContext = in.readOptionalWriteable(FetchSourceContext::new);\n if (in.readBoolean()) {\n int size = in.readVInt();\n sorts = new ArrayList<>(size);\n@@ -258,7 +258,7 @@ public void writeTo(StreamOutput out) throws IOException {\n scriptField.writeTo(out);\n }\n }\n- out.writeOptionalStreamable(fetchSourceContext);\n+ out.writeOptionalWriteable(fetchSourceContext);\n boolean hasSorts = sorts != null;\n out.writeBoolean(hasSorts);\n if (hasSorts) {", "filename": "core/src/main/java/org/elasticsearch/index/query/InnerHitBuilder.java", "status": "modified" }, { "diff": "@@ -258,8 +258,12 @@ private static Fields generateTermVectors(IndexShard indexShard, Map<String, Obj\n for (Map.Entry<String, Collection<Object>> entry : values.entrySet()) {\n String field = entry.getKey();\n Analyzer analyzer = getAnalyzerAtField(indexShard, field, perFieldAnalyzer);\n- for (Object text : entry.getValue()) {\n- index.addField(field, text.toString(), analyzer);\n+ if (entry.getValue() instanceof List) {\n+ for (Object text : entry.getValue()) {\n+ index.addField(field, text.toString(), analyzer);\n+ }\n+ } else {\n+ index.addField(field, entry.getValue().toString(), analyzer);\n }\n }\n /* and read vectors from it */", "filename": "core/src/main/java/org/elasticsearch/index/termvectors/TermVectorsService.java", "status": "modified" }, { "diff": "@@ -24,10 +24,12 @@\n import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.bulk.BulkShardRequest;\n import org.elasticsearch.action.support.ActiveShardCount;\n-import org.elasticsearch.client.node.NodeClient;\n import org.elasticsearch.client.Requests;\n+import org.elasticsearch.client.node.NodeClient;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.rest.BaseRestHandler;\n@@ -37,6 +39,7 @@\n import org.elasticsearch.rest.RestRequest;\n import org.elasticsearch.rest.RestResponse;\n import org.elasticsearch.rest.action.RestBuilderListener;\n+import org.elasticsearch.search.fetch.subphase.FetchSourceContext;\n \n import static org.elasticsearch.rest.RestRequest.Method.POST;\n import static org.elasticsearch.rest.RestRequest.Method.PUT;\n@@ -52,6 +55,8 @@\n * </pre>\n */\n public class RestBulkAction extends BaseRestHandler {\n+ private static final DeprecationLogger DEPRECATION_LOGGER =\n+ new DeprecationLogger(Loggers.getLogger(RestBulkAction.class));\n \n private final boolean allowExplicitIndex;\n \n@@ -75,18 +80,21 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n String defaultIndex = request.param(\"index\");\n String defaultType = request.param(\"type\");\n String defaultRouting = request.param(\"routing\");\n+ FetchSourceContext defaultFetchSourceContext = FetchSourceContext.parseFromRestRequest(request);\n String fieldsParam = 
request.param(\"fields\");\n- String defaultPipeline = request.param(\"pipeline\");\n+ if (fieldsParam != null) {\n+ DEPRECATION_LOGGER.deprecated(\"Deprecated field [fields] used, expected [_source] instead\");\n+ }\n String[] defaultFields = fieldsParam != null ? Strings.commaDelimitedListToStringArray(fieldsParam) : null;\n-\n+ String defaultPipeline = request.param(\"pipeline\");\n String waitForActiveShards = request.param(\"wait_for_active_shards\");\n if (waitForActiveShards != null) {\n bulkRequest.waitForActiveShards(ActiveShardCount.parseString(waitForActiveShards));\n }\n bulkRequest.timeout(request.paramAsTime(\"timeout\", BulkShardRequest.DEFAULT_TIMEOUT));\n bulkRequest.setRefreshPolicy(request.param(\"refresh\"));\n- bulkRequest.add(request.content(), defaultIndex, defaultType, defaultRouting, defaultFields, defaultPipeline,\n- null, allowExplicitIndex);\n+ bulkRequest.add(request.content(), defaultIndex, defaultType, defaultRouting, defaultFields,\n+ defaultFetchSourceContext, defaultPipeline, null, allowExplicitIndex);\n \n client.bulk(bulkRequest, new RestBuilderListener<BulkResponse>(channel) {\n @Override", "filename": "core/src/main/java/org/elasticsearch/rest/action/document/RestBulkAction.java", "status": "modified" }, { "diff": "@@ -58,12 +58,15 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n getRequest.parent(request.param(\"parent\"));\n getRequest.preference(request.param(\"preference\"));\n getRequest.realtime(request.paramAsBoolean(\"realtime\", getRequest.realtime()));\n-\n- String sField = request.param(\"fields\");\n+ if (request.param(\"fields\") != null) {\n+ throw new IllegalArgumentException(\"The parameter [fields] is no longer supported, \" +\n+ \"please use [stored_fields] to retrieve stored fields or or [_source] to load the field from _source\");\n+ }\n+ String sField = request.param(\"stored_fields\");\n if (sField != null) {\n String[] sFields = Strings.splitStringByCommaToArray(sField);\n if (sFields != null) {\n- getRequest.fields(sFields);\n+ getRequest.storedFields(sFields);\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/rest/action/document/RestGetAction.java", "status": "modified" }, { "diff": "@@ -91,7 +91,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n getRequest.preference(request.param(\"preference\"));\n getRequest.realtime(request.paramAsBoolean(\"realtime\", getRequest.realtime()));\n // don't get any fields back...\n- getRequest.fields(Strings.EMPTY_ARRAY);\n+ getRequest.storedFields(Strings.EMPTY_ARRAY);\n // TODO we can also just return the document size as Content-Length\n \n client.get(getRequest, new RestResponseListener<GetResponse>(channel) {", "filename": "core/src/main/java/org/elasticsearch/rest/action/document/RestHeadAction.java", "status": "modified" }, { "diff": "@@ -59,9 +59,12 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n multiGetRequest.refresh(request.paramAsBoolean(\"refresh\", multiGetRequest.refresh()));\n multiGetRequest.preference(request.param(\"preference\"));\n multiGetRequest.realtime(request.paramAsBoolean(\"realtime\", multiGetRequest.realtime()));\n-\n+ if (request.param(\"fields\") != null) {\n+ throw new IllegalArgumentException(\"The parameter [fields] is no longer supported, \" +\n+ \"please use [stored_fields] to retrieve stored fields or _source filtering if the field is not stored\");\n+ }\n String[] sFields = null;\n- String sField = request.param(\"fields\");\n+ 
String sField = request.param(\"stored_fields\");\n if (sField != null) {\n sFields = Strings.splitStringByCommaToArray(sField);\n }", "filename": "core/src/main/java/org/elasticsearch/rest/action/document/RestMultiGetAction.java", "status": "modified" }, { "diff": "@@ -25,6 +25,8 @@\n import org.elasticsearch.client.node.NodeClient;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.VersionType;\n import org.elasticsearch.rest.BaseRestHandler;\n@@ -33,12 +35,15 @@\n import org.elasticsearch.rest.RestRequest;\n import org.elasticsearch.rest.action.RestActions;\n import org.elasticsearch.rest.action.RestStatusToXContentListener;\n+import org.elasticsearch.search.fetch.subphase.FetchSourceContext;\n \n import static org.elasticsearch.rest.RestRequest.Method.POST;\n \n /**\n */\n public class RestUpdateAction extends BaseRestHandler {\n+ private static final DeprecationLogger DEPRECATION_LOGGER =\n+ new DeprecationLogger(Loggers.getLogger(RestUpdateAction.class));\n \n @Inject\n public RestUpdateAction(Settings settings, RestController controller) {\n@@ -58,21 +63,27 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n updateRequest.waitForActiveShards(ActiveShardCount.parseString(waitForActiveShards));\n }\n updateRequest.docAsUpsert(request.paramAsBoolean(\"doc_as_upsert\", updateRequest.docAsUpsert()));\n+ FetchSourceContext fetchSourceContext = FetchSourceContext.parseFromRestRequest(request);\n String sField = request.param(\"fields\");\n+ if (sField != null && fetchSourceContext != null) {\n+ throw new IllegalArgumentException(\"[fields] and [_source] cannot be used in the same request\");\n+ }\n if (sField != null) {\n+ DEPRECATION_LOGGER.deprecated(\"Deprecated field [fields] used, expected [_source] instead\");\n String[] sFields = Strings.splitStringByCommaToArray(sField);\n- if (sFields != null) {\n- updateRequest.fields(sFields);\n- }\n+ updateRequest.fields(sFields);\n+ } else if (fetchSourceContext != null) {\n+ updateRequest.fetchSource(fetchSourceContext);\n }\n+\n updateRequest.retryOnConflict(request.paramAsInt(\"retry_on_conflict\", updateRequest.retryOnConflict()));\n updateRequest.version(RestActions.parseVersion(request));\n updateRequest.versionType(VersionType.fromString(request.param(\"version_type\"), updateRequest.versionType()));\n \n \n // see if we have it in the body\n if (request.hasContent()) {\n- updateRequest.source(request.content());\n+ updateRequest.fromXContent(request.content());\n IndexRequest upsertRequest = updateRequest.upsertRequest();\n if (upsertRequest != null) {\n upsertRequest.routing(request.param(\"routing\"));", "filename": "core/src/main/java/org/elasticsearch/rest/action/document/RestUpdateAction.java", "status": "modified" }, { "diff": "@@ -78,11 +78,15 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n explainRequest.query(query);\n }\n \n- String sField = request.param(\"fields\");\n+ if (request.param(\"fields\") != null) {\n+ throw new IllegalArgumentException(\"The parameter [fields] is no longer supported, \" +\n+ \"please use [stored_fields] to retrieve stored fields\");\n+ }\n+ String sField = request.param(\"stored_fields\");\n if (sField != null) {\n String[] sFields = Strings.splitStringByCommaToArray(sField);\n if (sFields != 
null) {\n- explainRequest.fields(sFields);\n+ explainRequest.storedFields(sFields);\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/rest/action/search/RestExplainAction.java", "status": "modified" }, { "diff": "@@ -79,7 +79,7 @@ public TopHitsAggregationBuilder(String name) {\n public TopHitsAggregationBuilder(StreamInput in) throws IOException {\n super(in, TYPE);\n explain = in.readBoolean();\n- fetchSourceContext = in.readOptionalStreamable(FetchSourceContext::new);\n+ fetchSourceContext = in.readOptionalWriteable(FetchSourceContext::new);\n if (in.readBoolean()) {\n int size = in.readVInt();\n fieldDataFields = new ArrayList<>(size);\n@@ -112,7 +112,7 @@ public TopHitsAggregationBuilder(StreamInput in) throws IOException {\n @Override\n protected void doWriteTo(StreamOutput out) throws IOException {\n out.writeBoolean(explain);\n- out.writeOptionalStreamable(fetchSourceContext);\n+ out.writeOptionalWriteable(fetchSourceContext);\n boolean hasFieldDataFields = fieldDataFields != null;\n out.writeBoolean(hasFieldDataFields);\n if (hasFieldDataFields) {\n@@ -596,7 +596,7 @@ public static TopHitsAggregationBuilder parse(String aggregationName, QueryParse\n } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.TRACK_SCORES_FIELD)) {\n factory.trackScores(parser.booleanValue());\n } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder._SOURCE_FIELD)) {\n- factory.fetchSource(FetchSourceContext.parse(context));\n+ factory.fetchSource(FetchSourceContext.parse(context.parser()));\n } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.STORED_FIELDS_FIELD)) {\n factory.storedFieldsContext =\n StoredFieldsContext.fromXContent(SearchSourceBuilder.STORED_FIELDS_FIELD.getPreferredName(), context);\n@@ -608,7 +608,7 @@ public static TopHitsAggregationBuilder parse(String aggregationName, QueryParse\n }\n } else if (token == XContentParser.Token.START_OBJECT) {\n if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder._SOURCE_FIELD)) {\n- factory.fetchSource(FetchSourceContext.parse(context));\n+ factory.fetchSource(FetchSourceContext.parse(context.parser()));\n } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder.SCRIPT_FIELDS_FIELD)) {\n List<ScriptField> scriptFields = new ArrayList<>();\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n@@ -680,7 +680,7 @@ public static TopHitsAggregationBuilder parse(String aggregationName, QueryParse\n List<SortBuilder<?>> sorts = SortBuilder.fromXContent(context);\n factory.sorts(sorts);\n } else if (context.getParseFieldMatcher().match(currentFieldName, SearchSourceBuilder._SOURCE_FIELD)) {\n- factory.fetchSource(FetchSourceContext.parse(context));\n+ factory.fetchSource(FetchSourceContext.parse(context.parser()));\n } else {\n throw new ParsingException(parser.getTokenLocation(), \"Unknown key for a \" + token + \" in [\" + currentFieldName + \"].\",\n parser.getTokenLocation());", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -187,7 +187,7 @@ public SearchSourceBuilder() {\n public SearchSourceBuilder(StreamInput in) throws IOException {\n aggregations = in.readOptionalWriteable(AggregatorFactories.Builder::new);\n explain = in.readOptionalBoolean();\n- fetchSourceContext = in.readOptionalStreamable(FetchSourceContext::new);\n+ fetchSourceContext = 
in.readOptionalWriteable(FetchSourceContext::new);\n docValueFields = (List<String>) in.readGenericValue();\n storedFieldsContext = in.readOptionalWriteable(StoredFieldsContext::new);\n from = in.readVInt();\n@@ -234,7 +234,7 @@ public SearchSourceBuilder(StreamInput in) throws IOException {\n public void writeTo(StreamOutput out) throws IOException {\n out.writeOptionalWriteable(aggregations);\n out.writeOptionalBoolean(explain);\n- out.writeOptionalStreamable(fetchSourceContext);\n+ out.writeOptionalWriteable(fetchSourceContext);\n out.writeGenericValue(docValueFields);\n out.writeOptionalWriteable(storedFieldsContext);\n out.writeVInt(from);\n@@ -961,7 +961,7 @@ public void parseXContent(QueryParseContext context, AggregatorParsers aggParser\n } else if (context.getParseFieldMatcher().match(currentFieldName, TRACK_SCORES_FIELD)) {\n trackScores = parser.booleanValue();\n } else if (context.getParseFieldMatcher().match(currentFieldName, _SOURCE_FIELD)) {\n- fetchSourceContext = FetchSourceContext.parse(context);\n+ fetchSourceContext = FetchSourceContext.parse(context.parser());\n } else if (context.getParseFieldMatcher().match(currentFieldName, STORED_FIELDS_FIELD)) {\n storedFieldsContext =\n StoredFieldsContext.fromXContent(SearchSourceBuilder.STORED_FIELDS_FIELD.getPreferredName(), context);\n@@ -983,7 +983,7 @@ public void parseXContent(QueryParseContext context, AggregatorParsers aggParser\n } else if (context.getParseFieldMatcher().match(currentFieldName, POST_FILTER_FIELD)) {\n postQueryBuilder = context.parseInnerQueryBuilder().orElse(null);\n } else if (context.getParseFieldMatcher().match(currentFieldName, _SOURCE_FIELD)) {\n- fetchSourceContext = FetchSourceContext.parse(context);\n+ fetchSourceContext = FetchSourceContext.parse(context.parser());\n } else if (context.getParseFieldMatcher().match(currentFieldName, SCRIPT_FIELDS_FIELD)) {\n scriptFields = new ArrayList<>();\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n@@ -1068,7 +1068,7 @@ public void parseXContent(QueryParseContext context, AggregatorParsers aggParser\n }\n }\n } else if (context.getParseFieldMatcher().match(currentFieldName, _SOURCE_FIELD)) {\n- fetchSourceContext = FetchSourceContext.parse(context);\n+ fetchSourceContext = FetchSourceContext.parse(context.parser());\n } else if (context.getParseFieldMatcher().match(currentFieldName, SEARCH_AFTER)) {\n searchAfterBuilder = SearchAfterBuilder.fromXContent(parser, context.getParseFieldMatcher());\n } else if (context.getParseFieldMatcher().match(currentFieldName, FIELDS_FIELD)) {", "filename": "core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java", "status": "modified" }, { "diff": "@@ -21,15 +21,15 @@\n \n import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.ParseField;\n+import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n-import org.elasticsearch.common.io.stream.Streamable;\n+import org.elasticsearch.common.io.stream.Writeable;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.rest.RestRequest;\n \n import java.io.IOException;\n@@ -40,7 +40,7 @@\n /**\n * Context used 
to fetch the {@code _source}.\n */\n-public class FetchSourceContext implements Streamable, ToXContent {\n+public class FetchSourceContext implements Writeable, ToXContent {\n \n public static final ParseField INCLUDES_FIELD = new ParseField(\"includes\", \"include\");\n public static final ParseField EXCLUDES_FIELD = new ParseField(\"excludes\", \"exclude\");\n@@ -51,9 +51,9 @@ public class FetchSourceContext implements Streamable, ToXContent {\n private String[] includes;\n private String[] excludes;\n \n- public static FetchSourceContext parse(QueryParseContext context) throws IOException {\n+ public static FetchSourceContext parse(XContentParser parser) throws IOException {\n FetchSourceContext fetchSourceContext = new FetchSourceContext();\n- fetchSourceContext.fromXContent(context);\n+ fetchSourceContext.fromXContent(parser, ParseFieldMatcher.STRICT);\n return fetchSourceContext;\n }\n \n@@ -88,6 +88,19 @@ public FetchSourceContext(boolean fetchSource, String[] includes, String[] exclu\n this.excludes = excludes == null ? Strings.EMPTY_ARRAY : excludes;\n }\n \n+ public FetchSourceContext(StreamInput in) throws IOException {\n+ fetchSource = in.readBoolean();\n+ includes = in.readStringArray();\n+ excludes = in.readStringArray();\n+ }\n+\n+ @Override\n+ public void writeTo(StreamOutput out) throws IOException {\n+ out.writeBoolean(fetchSource);\n+ out.writeStringArray(includes);\n+ out.writeStringArray(excludes);\n+ }\n+\n public boolean fetchSource() {\n return this.fetchSource;\n }\n@@ -148,8 +161,7 @@ public static FetchSourceContext parseFromRestRequest(RestRequest request) {\n return null;\n }\n \n- public void fromXContent(QueryParseContext context) throws IOException {\n- XContentParser parser = context.parser();\n+ public void fromXContent(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException {\n XContentParser.Token token = parser.currentToken();\n boolean fetchSource = true;\n String[] includes = Strings.EMPTY_ARRAY;\n@@ -170,7 +182,7 @@ public void fromXContent(QueryParseContext context) throws IOException {\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n } else if (token == XContentParser.Token.START_ARRAY) {\n- if (context.getParseFieldMatcher().match(currentFieldName, INCLUDES_FIELD)) {\n+ if (parseFieldMatcher.match(currentFieldName, INCLUDES_FIELD)) {\n List<String> includesList = new ArrayList<>();\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n if (token == XContentParser.Token.VALUE_STRING) {\n@@ -181,7 +193,7 @@ public void fromXContent(QueryParseContext context) throws IOException {\n }\n }\n includes = includesList.toArray(new String[includesList.size()]);\n- } else if (context.getParseFieldMatcher().match(currentFieldName, EXCLUDES_FIELD)) {\n+ } else if (parseFieldMatcher.match(currentFieldName, EXCLUDES_FIELD)) {\n List<String> excludesList = new ArrayList<>();\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n if (token == XContentParser.Token.VALUE_STRING) {\n@@ -197,10 +209,13 @@ public void fromXContent(QueryParseContext context) throws IOException {\n + \" in [\" + currentFieldName + \"].\", parser.getTokenLocation());\n }\n } else if (token == XContentParser.Token.VALUE_STRING) {\n- if (context.getParseFieldMatcher().match(currentFieldName, INCLUDES_FIELD)) {\n+ if (parseFieldMatcher.match(currentFieldName, INCLUDES_FIELD)) {\n includes = new String[] {parser.text()};\n- } else if 
(context.getParseFieldMatcher().match(currentFieldName, EXCLUDES_FIELD)) {\n+ } else if (parseFieldMatcher.match(currentFieldName, EXCLUDES_FIELD)) {\n excludes = new String[] {parser.text()};\n+ } else {\n+ throw new ParsingException(parser.getTokenLocation(), \"Unknown key for a \" + token\n+ + \" in [\" + currentFieldName + \"].\", parser.getTokenLocation());\n }\n } else {\n throw new ParsingException(parser.getTokenLocation(), \"Unknown key for a \" + token + \" in [\" + currentFieldName + \"].\",\n@@ -229,22 +244,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n return builder;\n }\n \n- @Override\n- public void readFrom(StreamInput in) throws IOException {\n- fetchSource = in.readBoolean();\n- includes = in.readStringArray();\n- excludes = in.readStringArray();\n- in.readBoolean(); // Used to be transformSource but that was dropped in 2.1\n- }\n-\n- @Override\n- public void writeTo(StreamOutput out) throws IOException {\n- out.writeBoolean(fetchSource);\n- out.writeStringArray(includes);\n- out.writeStringArray(excludes);\n- out.writeBoolean(false); // Used to be transformSource but that was dropped in 2.1\n- }\n-\n @Override\n public boolean equals(Object o) {\n if (this == o) return true;", "filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceContext.java", "status": "modified" } ] }
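The diff set above renames the old `fields` parameter to `stored_fields` on get/explain/mget and adds `_source` filtering (`fetchSource`) to update requests, while keeping the deprecated `fields(...)` setters for backward compatibility. A minimal Java sketch of the resulting request API, assuming hypothetical index/type/id and field names — only `storedFields(...)` and `fetchSource(...)` are taken from the diff, everything else is illustrative:

```java
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.update.UpdateRequest;

import java.util.Collections;

// Sketch only: "my_index", "my_type", the id and the field names are placeholders.
public class StoredFieldsFetchSourceSketch {
    public static void main(String[] args) {
        // Stored fields must now be requested via storedFields(...), renamed from fields(...).
        GetRequest get = new GetRequest("my_index", "my_type", "1")
                .storedFields("tags", "title");

        // Updates can ask for the (optionally filtered) _source in the response
        // instead of using the deprecated fields(...) mechanism.
        UpdateRequest update = new UpdateRequest("my_index", "my_type", "1")
                .doc(Collections.singletonMap("counter", 1))
                .fetchSource(new String[] { "counter" }, null);

        System.out.println(get + " / " + update);
    }
}
```

Because the old `fields(...)` setters are only deprecated rather than removed, existing callers keep compiling while new code can move to `_source` filtering.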
{ "body": "`field_names_field` and `routing_field` examples in docs are broken when you add actual refresh calls such that the searches they execute do return something. I found this after working on making realtime get calling refresh which causes the serach to return something. With this patch https://github.com/s1monw/elasticsearch/commit/8faeb1a1e56f45b5c0b00a082270fe880005fd19 2 tests fail since fielddata is not supported on `_routing` and `_field_names` \n\ntry run `gradle :docs:integTest -Dtests.seed=BD8CD8B4F7399000 -Dtests.class=org.elasticsearch.smoketest.DocsClientYamlTestSuiteIT -Dtests.method=\"test {yaml=fields/routing-field/35}\" -Dtests.security.manager=true -Dtests.locale=es-VE -Dtests.timezone=Europe/Minsk\n` or `gradle :docs:integTest -Dtests.seed=BD8CD8B4F7399000 -Dtests.class=org.elasticsearch.smoketest.DocsClientYamlTestSuiteIT -Dtests.method=\"test {yaml=fields/field-names-field/12}\" -Dtests.security.manager=true -Dtests.locale=es-VE -Dtests.timezone=Europe/Minsk`\n", "comments": [ { "body": "this is what I got:\n\n```\n 1> [2016-08-23 12:49:20,947][INFO ][org.elasticsearch.smoketest] Stash dump on failure [{\n 1> \"stash\" : {\n 1> \"body\" : {\n 1> \"took\" : 3,\n 1> \"timed_out\" : false,\n 1> \"_shards\" : {\n 1> \"total\" : 5,\n 1> \"successful\" : 4,\n 1> \"failed\" : 1,\n 1> \"failures\" : [\n 1> {\n 1> \"shard\" : 2,\n 1> \"index\" : \"my_index\",\n 1> \"node\" : \"czqxuPBpQyWwTHQqefWIng\",\n 1> \"reason\" : {\n 1> \"type\" : \"script_exception\",\n 1> \"reason\" : \"runtime error\",\n 1> \"caused_by\" : {\n 1> \"type\" : \"illegal_argument_exception\",\n 1> \"reason\" : \"Fielddata is not supported on field [_routing] of type [_routing]\"\n 1> },\n 1> \"script_stack\" : [\n 1> \"org.elasticsearch.index.mapper.MappedFieldType.fielddataBuilder(MappedFieldType.java:100)\",\n 1> \"org.elasticsearch.index.fielddata.IndexFieldDataService.getForField(IndexFieldDataService.java:111)\",\n 1> \"org.elasticsearch.search.lookup.LeafDocLookup$1.run(LeafDocLookup.java:87)\",\n 1> \"org.elasticsearch.search.lookup.LeafDocLookup$1.run(LeafDocLookup.java:84)\",\n 1> \"java.security.AccessController.doPrivileged(Native Method)\",\n 1> \"org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:84)\",\n 1> \"doc['_routing']\",\n 1> \" ^---- HERE\"\n 1> ],\n 1> \"script\" : \"doc['_routing']\",\n 1> \"lang\" : \"painless\"\n 1> }\n 1> }\n 1> ]\n 1> },\n 1> \"hits\" : {\n 1> \"total\" : 1,\n 1> \"max_score\" : 0.2876821,\n 1> \"hits\" : [ ]\n 1> }\n 1> }\n 1> }\n 1> }]\n```\n\nand \n\n```\n 1> [2016-08-23 12:49:08,806][INFO ][org.elasticsearch.smoketest] Stash dump on failure [{\n 1> \"stash\" : {\n 1> \"body\" : {\n 1> \"took\" : 164,\n 1> \"timed_out\" : false,\n 1> \"_shards\" : {\n 1> \"total\" : 5,\n 1> \"successful\" : 4,\n 1> \"failed\" : 1,\n 1> \"failures\" : [\n 1> {\n 1> \"shard\" : 2,\n 1> \"index\" : \"my_index\",\n 1> \"node\" : \"czqxuPBpQyWwTHQqefWIng\",\n 1> \"reason\" : {\n 1> \"type\" : \"script_exception\",\n 1> \"reason\" : \"runtime error\",\n 1> \"caused_by\" : {\n 1> \"type\" : \"illegal_argument_exception\",\n 1> \"reason\" : \"Fielddata is not supported on field [_field_names] of type [_field_names]\"\n 1> },\n 1> \"script_stack\" : [\n 1> \"org.elasticsearch.index.mapper.MappedFieldType.fielddataBuilder(MappedFieldType.java:100)\",\n 1> \"org.elasticsearch.index.fielddata.IndexFieldDataService.getForField(IndexFieldDataService.java:111)\",\n 1> \"org.elasticsearch.search.lookup.LeafDocLookup$1.run(LeafDocLookup.java:87)\",\n 1> 
\"org.elasticsearch.search.lookup.LeafDocLookup$1.run(LeafDocLookup.java:84)\",\n 1> \"java.security.AccessController.doPrivileged(Native Method)\",\n 1> \"org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:84)\",\n 1> \"doc['_field_names']\",\n 1> \" ^---- HERE\"\n 1> ],\n 1> \"script\" : \"doc['_field_names']\",\n 1> \"lang\" : \"painless\"\n 1> }\n 1> }\n 1> ]\n 1> },\n 1> \"hits\" : {\n 1> \"total\" : 1,\n 1> \"max_score\" : 0.2876821,\n 1> \"hits\" : [ ]\n 1> }\n 1> }\n 1> }\n 1> }]\n```\n", "created_at": "2016-08-23T10:55:38Z" }, { "body": "my suggestion here is to remove the script examples from the two fields in question @clintongormley WDYT?\n", "created_at": "2016-08-23T11:00:23Z" }, { "body": "+1 I wrote those docs based on experimentation at the time with dummy data - this issue would probably have shown up in practice on a real index anyway\n", "created_at": "2016-08-23T11:00:53Z" } ], "number": 20118, "title": "mapping/fields doc examples are missing refreshes and some of them are broken " }
{ "body": "Fix field examples to make documents actually visible\n\nThis commit adds refresh calls to field examples an removes not working\n`_routing` and `_field_names` script access.\n\nCloses #20118\n\nfix docs\n", "number": 20120, "review_comments": [], "title": "Use `refresh=true` in mapping/fields examples" }
{ "commits": [ { "message": "Use `refresh=true` in mapping/fields examples\nFix field examples to make documents actually visible\n\nThis commit adds refresh calls to field examples an removes not working\n`_routing` and `_field_names` script access.\n\nCloses #20118\n\nfix docs" } ], "files": [ { "diff": "@@ -6,7 +6,7 @@ contains any value other than `null`. This field is used by the\n <<query-dsl-exists-query,`exists`>> query to find documents that\n either have or don't have any non-+null+ value for a particular field.\n \n-The value of the `_field_name` field is accessible in queries and scripts:\n+The value of the `_field_name` field is accessible in queries:\n \n [source,js]\n --------------------------\n@@ -16,7 +16,7 @@ PUT my_index/my_type/1\n \"title\": \"This is a document\"\n }\n \n-PUT my_index/my_type/1\n+PUT my_index/my_type/2?refresh=true\n {\n \"title\": \"This is another document\",\n \"body\": \"This document has a body\"\n@@ -28,19 +28,10 @@ GET my_index/_search\n \"terms\": {\n \"_field_names\": [ \"title\" ] <1>\n }\n- },\n- \"script_fields\": {\n- \"Field names\": {\n- \"script\": {\n- \"lang\": \"painless\",\n- \"inline\": \"doc['_field_names']\" <2>\n- }\n- }\n }\n }\n \n --------------------------\n // CONSOLE\n \n-<1> Querying on the `_field_names` field (also see the <<query-dsl-exists-query,`exists`>> query)\n-<2> Accessing the `_field_names` field in scripts\n+<1> Querying on the `_field_names` field (also see the <<query-dsl-exists-query,`exists`>> query)\n\\ No newline at end of file", "filename": "docs/reference/mapping/fields/field-names-field.asciidoc", "status": "modified" }, { "diff": "@@ -19,7 +19,7 @@ PUT my_index/my_type/1\n \"text\": \"Document with ID 1\"\n }\n \n-PUT my_index/my_type/2\n+PUT my_index/my_type/2&refresh=true\n {\n \"text\": \"Document with ID 2\"\n }", "filename": "docs/reference/mapping/fields/id-field.asciidoc", "status": "modified" }, { "diff": "@@ -21,7 +21,7 @@ PUT index_1/my_type/1\n \"text\": \"Document in index 1\"\n }\n \n-PUT index_2/my_type/2\n+PUT index_2/my_type/2?refresh=true\n {\n \"text\": \"Document in index 2\"\n }", "filename": "docs/reference/mapping/fields/index-field.asciidoc", "status": "modified" }, { "diff": "@@ -28,7 +28,7 @@ PUT my_index/my_child/2?parent=1 <3>\n \"text\": \"This is a child document\"\n }\n \n-PUT my_index/my_child/3?parent=1 <3>\n+PUT my_index/my_child/3?parent=1&refresh=true <3>\n {\n \"text\": \"This is another child document\"\n }", "filename": "docs/reference/mapping/fields/parent-field.asciidoc", "status": "modified" }, { "diff": "@@ -14,7 +14,7 @@ value per document. 
For instance:\n \n [source,js]\n ------------------------------\n-PUT my_index/my_type/1?routing=user1 <1>\n+PUT my_index/my_type/1?routing=user1&refresh=true <1>\n {\n \"title\": \"This is a document\"\n }\n@@ -29,7 +29,7 @@ GET my_index/my_type/1?routing=user1 <2>\n <<docs-get,getting>>, <<docs-delete,deleting>>, or <<docs-update,updating>>\n the document.\n \n-The value of the `_routing` field is accessible in queries and scripts:\n+The value of the `_routing` field is accessible in queries:\n \n [source,js]\n --------------------------\n@@ -39,22 +39,12 @@ GET my_index/_search\n \"terms\": {\n \"_routing\": [ \"user1\" ] <1>\n }\n- },\n- \"script_fields\": {\n- \"Routing value\": {\n- \"script\": {\n- \"lang\": \"painless\",\n- \"inline\": \"doc['_routing']\" <2>\n- }\n- }\n }\n }\n --------------------------\n // CONSOLE\n \n <1> Querying on the `_routing` field (also see the <<query-dsl-ids-query,`ids` query>>)\n-<2> Accessing the `_routing` field in scripts\n-\n \n ==== Searching with custom routing\n ", "filename": "docs/reference/mapping/fields/routing-field.asciidoc", "status": "modified" }, { "diff": "@@ -16,7 +16,7 @@ PUT my_index/type_1/1\n \"text\": \"Document with type 1\"\n }\n \n-PUT my_index/type_2/2\n+PUT my_index/type_2/2?refresh=true\n {\n \"text\": \"Document with type 2\"\n }", "filename": "docs/reference/mapping/fields/type-field.asciidoc", "status": "modified" }, { "diff": "@@ -16,7 +16,7 @@ PUT my_index/my_type/1\n \"text\": \"Document with ID 1\"\n }\n \n-PUT my_index/my_type/2\n+PUT my_index/my_type/2?refresh=true\n {\n \"text\": \"Document with ID 2\"\n }", "filename": "docs/reference/mapping/fields/uid-field.asciidoc", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 2.3.5\n**JVM version**:\n\n```\njava version \"1.8.0_65\"\nJava(TM) SE Runtime Environment (build 1.8.0_65-b17)\nJava HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)\n```\n\n**Description of the problem including expected versus actual behavior**:\nadding `date_detection: false` to an existing _type returns `\"acknowledged\": true`, but does not actually get added.\n\n**Steps to reproduce**:\n\n```\nPOST test2\nPUT test2/_mapping/test\n{\n \"properties\": {\n \"date\": {\n \"type\": \"string\"\n }\n }\n}\nPOST test2/test\n{ \"date\": \"2014-01-01\" }\n\nPUT test2/_mapping/test\n{\n \"date_detection\": false\n}\nGET test2/_mapping\n```\n", "comments": [ { "body": "This reproduces on master too, and interestingly it seems that this never worked. I opened a pull request to make it work (#20119), but I'm tempted to make it a 5.0-only change to be safe since this never worked before.\n", "created_at": "2016-08-23T11:11:42Z" }, { "body": "@jpountz I have no pressing need since I only stumbled across this, so 5.0-only is fine by me.\n", "created_at": "2016-08-23T14:56:33Z" } ], "number": 20111, "title": "adding `date_detection: false` to an existing _type returns `\"acknowledged\": true`, but does not get added." }
{ "body": "If they are specified by a mapping update, these properties are currently\nignored. This commit also fixes the handling of `dynamic_templates` so that it\nis possible to remove templates (and so that it works more similarly to all\nother mapping properties).\n\nCloses #20111\n", "number": 20119, "review_comments": [ { "body": "thank you! :)\n", "created_at": "2016-08-24T19:30:09Z" } ], "title": "The root object mapper should support updating `numeric_detection`, `date_detection` and `dynamic_date_formats`." }
{ "commits": [ { "message": "The root object mapper should support updating `numeric_detection`, `date_detection` and `dynamic_date_formats`. #20119\n\nIf they are specified by a mapping update, these properties are currently\nignored. This commit also fixes the handling of `dynamic_templates` so that it\nis possible to remove templates (and so that it works more similarly to all\nother mapping properties).\n\nCloses #20111" } ], "files": [ { "diff": "@@ -168,7 +168,7 @@ protected ObjectMapper createMapper(String name, String fullPath, boolean enable\n public static class TypeParser implements Mapper.TypeParser {\n @Override\n public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n- ObjectMapper.Builder builder = createBuilder(name);\n+ ObjectMapper.Builder builder = new Builder(name);\n parseNested(name, node, builder);\n for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {\n Map.Entry<String, Object> entry = iterator.next();\n@@ -300,9 +300,6 @@ protected static void parseProperties(ObjectMapper.Builder objBuilder, Map<Strin\n \n }\n \n- protected Builder createBuilder(String name) {\n- return new Builder(name);\n- }\n }\n \n private final String fullPath;", "filename": "core/src/main/java/org/elasticsearch/index/mapper/ObjectMapper.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.index.mapper;\n \n import org.elasticsearch.Version;\n+import org.elasticsearch.common.Explicit;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n@@ -30,14 +31,13 @@\n \n import java.io.IOException;\n import java.util.ArrayList;\n-import java.util.Arrays;\n-import java.util.HashSet;\n+import java.util.Collection;\n+import java.util.Collections;\n import java.util.Iterator;\n import java.util.List;\n import java.util.Map;\n-import java.util.Set;\n \n-import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue;\n+import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeBooleanValue;\n import static org.elasticsearch.index.mapper.TypeParsers.parseDateTimeFormatter;\n \n /**\n@@ -57,79 +57,43 @@ public static class Defaults {\n \n public static class Builder extends ObjectMapper.Builder<Builder, RootObjectMapper> {\n \n- protected final List<DynamicTemplate> dynamicTemplates = new ArrayList<>();\n-\n- // we use this to filter out seen date formats, because we might get duplicates during merging\n- protected Set<String> seenDateFormats = new HashSet<>();\n- protected List<FormatDateTimeFormatter> dynamicDateTimeFormatters = new ArrayList<>();\n-\n- protected boolean dateDetection = Defaults.DATE_DETECTION;\n- protected boolean numericDetection = Defaults.NUMERIC_DETECTION;\n+ protected Explicit<DynamicTemplate[]> dynamicTemplates = new Explicit<>(new DynamicTemplate[0], false);\n+ protected Explicit<FormatDateTimeFormatter[]> dynamicDateTimeFormatters = new Explicit<>(Defaults.DYNAMIC_DATE_TIME_FORMATTERS, false);\n+ protected Explicit<Boolean> dateDetection = new Explicit<>(Defaults.DATE_DETECTION, false);\n+ protected Explicit<Boolean> numericDetection = new Explicit<>(Defaults.NUMERIC_DETECTION, false);\n \n public Builder(String name) {\n super(name);\n this.builder = this;\n }\n \n- public Builder noDynamicDateTimeFormatter() {\n- this.dynamicDateTimeFormatters = null;\n- return builder;\n- 
}\n-\n- public Builder dynamicDateTimeFormatter(Iterable<FormatDateTimeFormatter> dateTimeFormatters) {\n- for (FormatDateTimeFormatter dateTimeFormatter : dateTimeFormatters) {\n- if (!seenDateFormats.contains(dateTimeFormatter.format())) {\n- seenDateFormats.add(dateTimeFormatter.format());\n- this.dynamicDateTimeFormatters.add(dateTimeFormatter);\n- }\n- }\n- return builder;\n- }\n-\n- public Builder add(DynamicTemplate dynamicTemplate) {\n- this.dynamicTemplates.add(dynamicTemplate);\n+ public Builder dynamicDateTimeFormatter(Collection<FormatDateTimeFormatter> dateTimeFormatters) {\n+ this.dynamicDateTimeFormatters = new Explicit<>(dateTimeFormatters.toArray(new FormatDateTimeFormatter[0]), true);\n return this;\n }\n \n- public Builder add(DynamicTemplate... dynamicTemplate) {\n- for (DynamicTemplate template : dynamicTemplate) {\n- this.dynamicTemplates.add(template);\n- }\n+ public Builder dynamicTemplates(Collection<DynamicTemplate> templates) {\n+ this.dynamicTemplates = new Explicit<>(templates.toArray(new DynamicTemplate[0]), true);\n return this;\n }\n \n-\n @Override\n protected ObjectMapper createMapper(String name, String fullPath, boolean enabled, Nested nested, Dynamic dynamic,\n Boolean includeInAll, Map<String, Mapper> mappers, @Nullable Settings settings) {\n assert !nested.isNested();\n- FormatDateTimeFormatter[] dates = null;\n- if (dynamicDateTimeFormatters == null) {\n- dates = new FormatDateTimeFormatter[0];\n- } else if (dynamicDateTimeFormatters.isEmpty()) {\n- // add the default one\n- dates = Defaults.DYNAMIC_DATE_TIME_FORMATTERS;\n- } else {\n- dates = dynamicDateTimeFormatters.toArray(new FormatDateTimeFormatter[dynamicDateTimeFormatters.size()]);\n- }\n return new RootObjectMapper(name, enabled, dynamic, includeInAll, mappers,\n- dates,\n- dynamicTemplates.toArray(new DynamicTemplate[dynamicTemplates.size()]),\n+ dynamicDateTimeFormatters,\n+ dynamicTemplates,\n dateDetection, numericDetection);\n }\n }\n \n public static class TypeParser extends ObjectMapper.TypeParser {\n \n- @Override\n- protected ObjectMapper.Builder createBuilder(String name) {\n- return new Builder(name);\n- }\n-\n @Override\n public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n \n- ObjectMapper.Builder builder = createBuilder(name);\n+ RootObjectMapper.Builder builder = new Builder(name);\n Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator();\n while (iterator.hasNext()) {\n Map.Entry<String, Object> entry = iterator.next();\n@@ -143,26 +107,22 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n return builder;\n }\n \n- protected boolean processField(ObjectMapper.Builder builder, String fieldName, Object fieldNode,\n+ protected boolean processField(RootObjectMapper.Builder builder, String fieldName, Object fieldNode,\n Version indexVersionCreated) {\n if (fieldName.equals(\"date_formats\") || fieldName.equals(\"dynamic_date_formats\")) {\n- List<FormatDateTimeFormatter> dateTimeFormatters = new ArrayList<>();\n if (fieldNode instanceof List) {\n- for (Object node1 : (List) fieldNode) {\n- if (node1.toString().startsWith(\"epoch_\")) {\n- throw new MapperParsingException(\"Epoch [\"+ node1.toString() +\"] is not supported as dynamic date format\");\n+ List<FormatDateTimeFormatter> formatters = new ArrayList<>();\n+ for (Object formatter : (List<?>) fieldNode) {\n+ if (formatter.toString().startsWith(\"epoch_\")) {\n+ throw new 
MapperParsingException(\"Epoch [\"+ formatter +\"] is not supported as dynamic date format\");\n }\n- dateTimeFormatters.add(parseDateTimeFormatter(node1));\n+ formatters.add(parseDateTimeFormatter(formatter));\n }\n+ builder.dynamicDateTimeFormatter(formatters);\n } else if (\"none\".equals(fieldNode.toString())) {\n- dateTimeFormatters = null;\n- } else {\n- dateTimeFormatters.add(parseDateTimeFormatter(fieldNode));\n- }\n- if (dateTimeFormatters == null) {\n- ((Builder) builder).noDynamicDateTimeFormatter();\n+ builder.dynamicDateTimeFormatter(Collections.emptyList());\n } else {\n- ((Builder) builder).dynamicDateTimeFormatter(dateTimeFormatters);\n+ builder.dynamicDateTimeFormatter(Collections.singleton(parseDateTimeFormatter(fieldNode)));\n }\n return true;\n } else if (fieldName.equals(\"dynamic_templates\")) {\n@@ -175,7 +135,8 @@ protected boolean processField(ObjectMapper.Builder builder, String fieldName, O\n // }\n // }\n // ]\n- List tmplNodes = (List) fieldNode;\n+ List<?> tmplNodes = (List<?>) fieldNode;\n+ List<DynamicTemplate> templates = new ArrayList<>();\n for (Object tmplNode : tmplNodes) {\n Map<String, Object> tmpl = (Map<String, Object>) tmplNode;\n if (tmpl.size() != 1) {\n@@ -186,30 +147,30 @@ protected boolean processField(ObjectMapper.Builder builder, String fieldName, O\n Map<String, Object> templateParams = (Map<String, Object>) entry.getValue();\n DynamicTemplate template = DynamicTemplate.parse(templateName, templateParams, indexVersionCreated);\n if (template != null) {\n- ((Builder) builder).add(template);\n+ templates.add(template);\n }\n }\n+ builder.dynamicTemplates(templates);\n return true;\n } else if (fieldName.equals(\"date_detection\")) {\n- ((Builder) builder).dateDetection = lenientNodeBooleanValue(fieldNode);\n+ ((Builder) builder).dateDetection = new Explicit<>(nodeBooleanValue(fieldNode), true);\n return true;\n } else if (fieldName.equals(\"numeric_detection\")) {\n- ((Builder) builder).numericDetection = lenientNodeBooleanValue(fieldNode);\n+ ((Builder) builder).numericDetection = new Explicit<>(nodeBooleanValue(fieldNode), true);\n return true;\n }\n return false;\n }\n }\n \n- private final FormatDateTimeFormatter[] dynamicDateTimeFormatters;\n-\n- private final boolean dateDetection;\n- private final boolean numericDetection;\n-\n- private volatile DynamicTemplate dynamicTemplates[];\n+ private Explicit<FormatDateTimeFormatter[]> dynamicDateTimeFormatters;\n+ private Explicit<Boolean> dateDetection;\n+ private Explicit<Boolean> numericDetection;\n+ private Explicit<DynamicTemplate[]> dynamicTemplates;\n \n RootObjectMapper(String name, boolean enabled, Dynamic dynamic, Boolean includeInAll, Map<String, Mapper> mappers,\n- FormatDateTimeFormatter[] dynamicDateTimeFormatters, DynamicTemplate dynamicTemplates[], boolean dateDetection, boolean numericDetection) {\n+ Explicit<FormatDateTimeFormatter[]> dynamicDateTimeFormatters, Explicit<DynamicTemplate[]> dynamicTemplates,\n+ Explicit<Boolean> dateDetection, Explicit<Boolean> numericDetection) {\n super(name, name, enabled, Nested.NO, dynamic, includeInAll, mappers);\n this.dynamicTemplates = dynamicTemplates;\n this.dynamicDateTimeFormatters = dynamicDateTimeFormatters;\n@@ -220,21 +181,26 @@ protected boolean processField(ObjectMapper.Builder builder, String fieldName, O\n @Override\n public ObjectMapper mappingUpdate(Mapper mapper) {\n RootObjectMapper update = (RootObjectMapper) super.mappingUpdate(mapper);\n- // dynamic templates are irrelevant for dynamic mappings updates\n- 
update.dynamicTemplates = new DynamicTemplate[0];\n+ // for dynamic updates, no need to carry root-specific options, we just\n+ // set everything to they implicit default value so that they are not\n+ // applied at merge time\n+ update.dynamicTemplates = new Explicit<>(new DynamicTemplate[0], false);\n+ update.dynamicDateTimeFormatters = new Explicit<FormatDateTimeFormatter[]>(Defaults.DYNAMIC_DATE_TIME_FORMATTERS, false);\n+ update.dateDetection = new Explicit<>(Defaults.DATE_DETECTION, false);\n+ update.numericDetection = new Explicit<>(Defaults.NUMERIC_DETECTION, false);\n return update;\n }\n \n public boolean dateDetection() {\n- return this.dateDetection;\n+ return this.dateDetection.value();\n }\n \n public boolean numericDetection() {\n- return this.numericDetection;\n+ return this.numericDetection.value();\n }\n \n public FormatDateTimeFormatter[] dynamicDateTimeFormatters() {\n- return dynamicDateTimeFormatters;\n+ return dynamicDateTimeFormatters.value();\n }\n \n public Mapper.Builder findTemplateBuilder(ParseContext context, String name, XContentFieldType matchType) {\n@@ -264,7 +230,7 @@ public Mapper.Builder findTemplateBuilder(ParseContext context, String name, Str\n \n public DynamicTemplate findTemplate(ContentPath path, String name, XContentFieldType matchType) {\n final String pathAsString = path.pathAsText(name);\n- for (DynamicTemplate dynamicTemplate : dynamicTemplates) {\n+ for (DynamicTemplate dynamicTemplate : dynamicTemplates.value()) {\n if (dynamicTemplate.match(pathAsString, name, matchType)) {\n return dynamicTemplate;\n }\n@@ -281,21 +247,18 @@ public RootObjectMapper merge(Mapper mergeWith, boolean updateAllTypes) {\n protected void doMerge(ObjectMapper mergeWith, boolean updateAllTypes) {\n super.doMerge(mergeWith, updateAllTypes);\n RootObjectMapper mergeWithObject = (RootObjectMapper) mergeWith;\n- // merge them\n- List<DynamicTemplate> mergedTemplates = new ArrayList<>(Arrays.asList(this.dynamicTemplates));\n- for (DynamicTemplate template : mergeWithObject.dynamicTemplates) {\n- boolean replaced = false;\n- for (int i = 0; i < mergedTemplates.size(); i++) {\n- if (mergedTemplates.get(i).name().equals(template.name())) {\n- mergedTemplates.set(i, template);\n- replaced = true;\n- }\n- }\n- if (!replaced) {\n- mergedTemplates.add(template);\n- }\n+ if (mergeWithObject.numericDetection.explicit()) {\n+ this.numericDetection = mergeWithObject.numericDetection;\n+ }\n+ if (mergeWithObject.dateDetection.explicit()) {\n+ this.dateDetection = mergeWithObject.dateDetection;\n+ }\n+ if (mergeWithObject.dynamicDateTimeFormatters.explicit()) {\n+ this.dynamicDateTimeFormatters = mergeWithObject.dynamicDateTimeFormatters;\n+ }\n+ if (mergeWithObject.dynamicTemplates.explicit()) {\n+ this.dynamicTemplates = mergeWithObject.dynamicTemplates;\n }\n- this.dynamicTemplates = mergedTemplates.toArray(new DynamicTemplate[mergedTemplates.size()]);\n }\n \n @Override\n@@ -305,31 +268,31 @@ public RootObjectMapper updateFieldType(Map<String, MappedFieldType> fullNameToF\n \n @Override\n protected void doXContent(XContentBuilder builder, ToXContent.Params params) throws IOException {\n- if (dynamicDateTimeFormatters != Defaults.DYNAMIC_DATE_TIME_FORMATTERS) {\n- if (dynamicDateTimeFormatters.length > 0) {\n- builder.startArray(\"dynamic_date_formats\");\n- for (FormatDateTimeFormatter dateTimeFormatter : dynamicDateTimeFormatters) {\n- builder.value(dateTimeFormatter.format());\n- }\n- builder.endArray();\n+ final boolean includeDefaults = 
params.paramAsBoolean(\"include_defaults\", false);\n+\n+ if (dynamicDateTimeFormatters.explicit() || includeDefaults) {\n+ builder.startArray(\"dynamic_date_formats\");\n+ for (FormatDateTimeFormatter dateTimeFormatter : dynamicDateTimeFormatters.value()) {\n+ builder.value(dateTimeFormatter.format());\n }\n+ builder.endArray();\n }\n \n- if (dynamicTemplates != null && dynamicTemplates.length > 0) {\n+ if (dynamicTemplates.explicit() || includeDefaults) {\n builder.startArray(\"dynamic_templates\");\n- for (DynamicTemplate dynamicTemplate : dynamicTemplates) {\n+ for (DynamicTemplate dynamicTemplate : dynamicTemplates.value()) {\n builder.startObject();\n builder.field(dynamicTemplate.name(), dynamicTemplate);\n builder.endObject();\n }\n builder.endArray();\n }\n \n- if (dateDetection != Defaults.DATE_DETECTION) {\n- builder.field(\"date_detection\", dateDetection);\n+ if (dateDetection.explicit() || includeDefaults) {\n+ builder.field(\"date_detection\", dateDetection.value());\n }\n- if (numericDetection != Defaults.NUMERIC_DETECTION) {\n- builder.field(\"numeric_detection\", numericDetection);\n+ if (numericDetection.explicit() || includeDefaults) {\n+ builder.field(\"numeric_detection\", numericDetection.value());\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/RootObjectMapper.java", "status": "modified" }, { "diff": "@@ -0,0 +1,161 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper;\n+\n+import org.elasticsearch.common.compress.CompressedXContent;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.index.mapper.MapperService.MergeReason;\n+import org.elasticsearch.test.ESSingleNodeTestCase;\n+\n+import java.util.Arrays;\n+\n+public class RootObjectMapperTests extends ESSingleNodeTestCase {\n+\n+ public void testNumericDetection() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .field(\"numeric_detection\", false)\n+ .endObject()\n+ .endObject().string();\n+ MapperService mapperService = createIndex(\"test\").mapperService();\n+ DocumentMapper mapper = mapperService.merge(\"type\", new CompressedXContent(mapping), MergeReason.MAPPING_UPDATE, false);\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+\n+ // update with a different explicit value\n+ String mapping2 = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .field(\"numeric_detection\", true)\n+ .endObject()\n+ .endObject().string();\n+ mapper = mapperService.merge(\"type\", new CompressedXContent(mapping2), MergeReason.MAPPING_UPDATE, false);\n+ assertEquals(mapping2, mapper.mappingSource().toString());\n+\n+ // update with an implicit value: no change\n+ String mapping3 = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .endObject()\n+ .endObject().string();\n+ mapper = mapperService.merge(\"type\", new CompressedXContent(mapping3), MergeReason.MAPPING_UPDATE, false);\n+ assertEquals(mapping2, mapper.mappingSource().toString());\n+ }\n+\n+ public void testDateDetection() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .field(\"date_detection\", true)\n+ .endObject()\n+ .endObject().string();\n+ MapperService mapperService = createIndex(\"test\").mapperService();\n+ DocumentMapper mapper = mapperService.merge(\"type\", new CompressedXContent(mapping), MergeReason.MAPPING_UPDATE, false);\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+\n+ // update with a different explicit value\n+ String mapping2 = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .field(\"date_detection\", false)\n+ .endObject()\n+ .endObject().string();\n+ mapper = mapperService.merge(\"type\", new CompressedXContent(mapping2), MergeReason.MAPPING_UPDATE, false);\n+ assertEquals(mapping2, mapper.mappingSource().toString());\n+\n+ // update with an implicit value: no change\n+ String mapping3 = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .endObject()\n+ .endObject().string();\n+ mapper = mapperService.merge(\"type\", new CompressedXContent(mapping3), MergeReason.MAPPING_UPDATE, false);\n+ assertEquals(mapping2, mapper.mappingSource().toString());\n+ }\n+\n+ public void testDateFormatters() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .field(\"dynamic_date_formats\", Arrays.asList(\"YYYY-MM-dd\"))\n+ .endObject()\n+ .endObject().string();\n+ MapperService mapperService = createIndex(\"test\").mapperService();\n+ DocumentMapper mapper = mapperService.merge(\"type\", new CompressedXContent(mapping), MergeReason.MAPPING_UPDATE, false);\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+\n+ // no 
update if formatters are not set explicitly\n+ String mapping2 = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .endObject()\n+ .endObject().string();\n+ mapper = mapperService.merge(\"type\", new CompressedXContent(mapping2), MergeReason.MAPPING_UPDATE, false);\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+\n+ String mapping3 = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .field(\"dynamic_date_formats\", Arrays.asList())\n+ .endObject()\n+ .endObject().string();\n+ mapper = mapperService.merge(\"type\", new CompressedXContent(mapping3), MergeReason.MAPPING_UPDATE, false);\n+ assertEquals(mapping3, mapper.mappingSource().toString());\n+ }\n+\n+ public void testDynamicTemplates() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .startArray(\"dynamic_templates\")\n+ .startObject()\n+ .startObject(\"my_template\")\n+ .field(\"match_mapping_type\", \"string\")\n+ .startObject(\"mapping\")\n+ .field(\"type\", \"keyword\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endArray()\n+ .endObject()\n+ .endObject().string();\n+ MapperService mapperService = createIndex(\"test\").mapperService();\n+ DocumentMapper mapper = mapperService.merge(\"type\", new CompressedXContent(mapping), MergeReason.MAPPING_UPDATE, false);\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+\n+ // no update if templates are not set explicitly\n+ String mapping2 = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .endObject()\n+ .endObject().string();\n+ mapper = mapperService.merge(\"type\", new CompressedXContent(mapping2), MergeReason.MAPPING_UPDATE, false);\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+\n+ String mapping3 = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .field(\"dynamic_templates\", Arrays.asList())\n+ .endObject()\n+ .endObject().string();\n+ mapper = mapperService.merge(\"type\", new CompressedXContent(mapping3), MergeReason.MAPPING_UPDATE, false);\n+ assertEquals(mapping3, mapper.mappingSource().toString());\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/index/mapper/RootObjectMapperTests.java", "status": "added" } ] }
{ "body": "When a SearchContext is closed it's reader / searcher reference is closed too.\nIf this happens while a search is accessing it's reader reference it can lead\nto an unexpected `AlreadyClosedException` or worst case, an already closed MMapDirectory\nis access causing a `SIGSEV` like in #20008 (even though the window for this is very small).\n\nSearchContext can be closed concurrently if:\n- an index is deleted / removed from the node\n- a search context is idle for too long and is cleaned by the reaper\n- an explicit freeContext message is received\n\nThis change adds reference counting to the SearchContext base class and it's used\ninside SearchService each time the context is accessed.\n\nCloses #20008\n", "comments": [ { "body": "The change looks good. Maybe add some docs to SearchContext to explain why it is refcounted? I'm also curious how often you managed to get the test to fail before you added the refcount?\n", "created_at": "2016-08-22T07:57:44Z" }, { "body": "> I'm also curious how often you managed to get the test to fail before you added the refcount?\n\nwhen I set the iteration to 10k the test never passed for me. I was also able to reproduce the `SIGSEV` once running this test for 10 min.\n", "created_at": "2016-08-22T09:56:09Z" }, { "body": "@jpountz I pushed new changes\n", "created_at": "2016-08-22T10:03:42Z" }, { "body": "LGTM\n", "created_at": "2016-08-22T12:35:35Z" }, { "body": "LGTM\n", "created_at": "2016-08-22T13:38:45Z" }, { "body": "I am working on the backporting now\n", "created_at": "2016-08-22T13:41:15Z" } ], "number": 20095, "title": "Add ref-counting to SearchContext to prevent accessing already closed readers" }
{ "body": "When a SearchContext is closed it's reader / searcher reference is closed too.\nIf this happens while a search is accessing it's reader reference it can lead\nto an unexpected `AlreadyClosedException` or worst case, an already closed MMapDirectory\nis access causing a `SIGSEV` like in #20008 (even though the window for this is very small).\n\nSearchContext can be closed concurrently if:\n- an index is deleted / removed from the node\n- a search context is idle for too long and is cleaned by the reaper\n- an explicit freeContext message is received\n\nThis change adds reference counting to the SearchContext base class and it's used\ninside SearchService each time the context is accessed.\n\nRelates to #20008\nBackport of #20095\n", "number": 20104, "review_comments": [ { "body": "looks like the backport put the comment below the class name rather than above it\n", "created_at": "2016-08-22T17:50:32Z" } ], "title": "Add ref-counting to SearchContext to prevent accessing already closed readers" }
{ "commits": [ { "message": "Add ref-counting to SearchContext to prevent accessing already closed readers\n\nWhen a SearchContext is closed it's reader / searcher reference is closed too.\nIf this happens while a search is accessing it's reader reference it can lead\nto an unexpected `AlreadyClosedException` or worst case, an already closed MMapDirectory\nis access causing a `SIGSEV` like in #20008 (even though the window for this is very small).\n\nSearchContext can be closed concurrently if:\n * an index is deleted / removed from the node\n * a search context is idle for too long and is cleaned by the reaper\n * an explicit freeContext message is received\n\nThis change adds reference counting to the SearchContext base class and it's used\ninside SearchService each time the context is accessed.\n\nRelates to #20008\nBackport of #20095" }, { "message": "[TEST] Don't assert on null value. It's fine to not always see an exception in this part of the test." } ], "files": [ { "diff": "@@ -262,6 +262,7 @@ protected void doClose() {\n \n public DfsSearchResult executeDfsPhase(ShardSearchRequest request) {\n final SearchContext context = createAndPutContext(request);\n+ context.incRef();\n try {\n contextProcessing(context);\n dfsPhase.execute(context);\n@@ -281,6 +282,7 @@ public QuerySearchResult executeScan(ShardSearchRequest request) {\n final SearchContext context = createAndPutContext(request);\n final int originalSize = context.size();\n deprecationLogger.deprecated(\"[search_type=scan] is deprecated, please use a regular scroll that sorts on [_doc] instead\");\n+ context.incRef();\n try {\n if (context.aggregations() != null) {\n throw new IllegalArgumentException(\"aggregations are not supported with search_type=scan\");\n@@ -304,15 +306,19 @@ public QuerySearchResult executeScan(ShardSearchRequest request) {\n processFailure(context, e);\n throw ExceptionsHelper.convertToRuntime(e);\n } finally {\n- context.size(originalSize);\n- cleanContext(context);\n+ try {\n+ context.size(originalSize);\n+ } finally {\n+ cleanContext(context);\n+ }\n }\n }\n \n public ScrollQueryFetchSearchResult executeScan(InternalScrollSearchRequest request) {\n final SearchContext context = findContext(request.id());\n ShardSearchStats shardSearchStats = context.indexShard().searchService();\n contextProcessing(context);\n+ context.incRef();\n try {\n processScroll(request, context);\n shardSearchStats.onPreQueryPhase(context);\n@@ -370,6 +376,7 @@ private void loadOrExecuteQueryPhase(final ShardSearchRequest request, final Sea\n public QuerySearchResultProvider executeQueryPhase(ShardSearchRequest request) {\n final SearchContext context = createAndPutContext(request);\n final ShardSearchStats shardSearchStats = context.indexShard().searchService();\n+ context.incRef();\n try {\n shardSearchStats.onPreQueryPhase(context);\n long time = System.nanoTime();\n@@ -402,6 +409,7 @@ public QuerySearchResultProvider executeQueryPhase(ShardSearchRequest request) {\n public ScrollQuerySearchResult executeQueryPhase(InternalScrollSearchRequest request) {\n final SearchContext context = findContext(request.id());\n ShardSearchStats shardSearchStats = context.indexShard().searchService();\n+ context.incRef();\n try {\n shardSearchStats.onPreQueryPhase(context);\n long time = System.nanoTime();\n@@ -427,6 +435,7 @@ public QuerySearchResult executeQueryPhase(QuerySearchRequest request) {\n context.searcher().setAggregatedDfs(request.dfs());\n IndexShard indexShard = context.indexShard();\n ShardSearchStats shardSearchStats 
= indexShard.searchService();\n+ context.incRef();\n try {\n shardSearchStats.onPreQueryPhase(context);\n long time = System.nanoTime();\n@@ -462,6 +471,7 @@ private boolean fetchPhaseShouldFreeContext(SearchContext context) {\n public QueryFetchSearchResult executeFetchPhase(ShardSearchRequest request) {\n final SearchContext context = createAndPutContext(request);\n contextProcessing(context);\n+ context.incRef();\n try {\n ShardSearchStats shardSearchStats = context.indexShard().searchService();\n shardSearchStats.onPreQueryPhase(context);\n@@ -502,6 +512,7 @@ public QueryFetchSearchResult executeFetchPhase(QuerySearchRequest request) {\n final SearchContext context = findContext(request.id());\n contextProcessing(context);\n context.searcher().setAggregatedDfs(request.dfs());\n+ context.incRef();\n try {\n ShardSearchStats shardSearchStats = context.indexShard().searchService();\n shardSearchStats.onPreQueryPhase(context);\n@@ -541,6 +552,7 @@ public QueryFetchSearchResult executeFetchPhase(QuerySearchRequest request) {\n public ScrollQueryFetchSearchResult executeFetchPhase(InternalScrollSearchRequest request) {\n final SearchContext context = findContext(request.id());\n contextProcessing(context);\n+ context.incRef();\n try {\n ShardSearchStats shardSearchStats = context.indexShard().searchService();\n processScroll(request, context);\n@@ -582,6 +594,7 @@ public FetchSearchResult executeFetchPhase(ShardFetchRequest request) {\n final SearchContext context = findContext(request.id());\n contextProcessing(context);\n final ShardSearchStats shardSearchStats = context.indexShard().searchService();\n+ context.incRef();\n try {\n if (request.lastEmittedDoc() != null) {\n context.scrollContext().lastEmittedDoc = request.lastEmittedDoc();\n@@ -703,6 +716,7 @@ private void freeAllContextForIndex(Index index) {\n public boolean freeContext(long id) {\n final SearchContext context = removeContext(id);\n if (context != null) {\n+ assert context.refCount() > 0 : \" refCount must be > 0: \" + context.refCount();\n try {\n context.indexShard().searchService().onFreeContext(context);\n if (context.scrollContext() != null) {\n@@ -734,9 +748,13 @@ private void contextProcessedSuccessfully(SearchContext context) {\n }\n \n private void cleanContext(SearchContext context) {\n- assert context == SearchContext.current();\n- context.clearReleasables(Lifetime.PHASE);\n- SearchContext.removeCurrent();\n+ try {\n+ assert context == SearchContext.current();\n+ context.clearReleasables(Lifetime.PHASE);\n+ SearchContext.removeCurrent();\n+ } finally {\n+ context.decRef();\n+ }\n }\n \n private void processFailure(SearchContext context, Throwable t) {\n@@ -1153,6 +1171,7 @@ public void run() {\n ShardSearchRequest request = new ShardSearchLocalRequest(indexShard.shardId(), indexMetaData.getNumberOfShards(),\n SearchType.QUERY_THEN_FETCH, entry.source(), entry.types(), entry.requestCache());\n context = createContext(request, warmerContext.searcher());\n+ context.incRef();\n // if we use sort, we need to do query to sort on it and load relevant field data\n // if not, we might as well set size=0 (and cache if needed)\n if (context.sort() == null) {\n@@ -1174,8 +1193,11 @@ public void run() {\n } finally {\n try {\n if (context != null) {\n- freeContext(context.id());\n- cleanContext(context);\n+ try {\n+ freeContext(context.id());\n+ } finally {\n+ cleanContext(context);\n+ }\n }\n } finally {\n latch.countDown();", "filename": "core/src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" 
}, { "diff": "@@ -35,6 +35,7 @@\n import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.util.BigArrays;\n+import org.elasticsearch.common.util.concurrent.RefCounted;\n import org.elasticsearch.index.analysis.AnalysisService;\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n import org.elasticsearch.index.fielddata.IndexFieldDataService;\n@@ -69,8 +70,20 @@\n import java.util.List;\n import java.util.Map;\n import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.atomic.AtomicInteger;\n \n-public abstract class SearchContext extends DelegatingHasContextAndHeaders implements Releasable {\n+/**\n+ * This class encapsulates the state needed to execute a search. It holds a reference to the\n+ * shards point in time snapshot (IndexReader / ContextIndexSearcher) and allows passing on\n+ * state from one query / fetch phase to another.\n+ *\n+ * This class also implements {@link RefCounted} since in some situations like in {@link org.elasticsearch.search.SearchService}\n+ * a SearchContext can be closed concurrently due to independent events ie. when an index gets removed. To prevent accessing closed\n+ * IndexReader / IndexSearcher instances the SearchContext can be guarded by a reference count and fail if it's been closed by\n+ * an external event.\n+ */\n+// For reference why we use RefCounted here see #20095\n+public abstract class SearchContext extends DelegatingHasContextAndHeaders implements Releasable, RefCounted {\n \n private static ThreadLocal<SearchContext> current = new ThreadLocal<>();\n public final static int DEFAULT_TERMINATE_AFTER = 0;\n@@ -106,12 +119,8 @@ public ParseFieldMatcher parseFieldMatcher() {\n \n @Override\n public final void close() {\n- if (closed.compareAndSet(false, true)) { // prevent double release\n- try {\n- clearReleasables(Lifetime.CONTEXT);\n- } finally {\n- doClose();\n- }\n+ if (closed.compareAndSet(false, true)) { // prevent double closing\n+ decRef();\n }\n }\n \n@@ -378,4 +387,55 @@ public enum Lifetime {\n */\n CONTEXT\n }\n+\n+ // copied from AbstractRefCounted since this class subclasses already DelegatingHasContextAndHeaders\n+ // 5.x doesn't have this problem\n+ private final AtomicInteger refCount = new AtomicInteger(1);\n+\n+ @Override\n+ public final void incRef() {\n+ if (tryIncRef() == false) {\n+ alreadyClosed();\n+ }\n+ }\n+\n+ @Override\n+ public final boolean tryIncRef() {\n+ do {\n+ int i = refCount.get();\n+ if (i > 0) {\n+ if (refCount.compareAndSet(i, i + 1)) {\n+ return true;\n+ }\n+ } else {\n+ return false;\n+ }\n+ } while (true);\n+ }\n+\n+ @Override\n+ public final void decRef() {\n+ int i = refCount.decrementAndGet();\n+ assert i >= 0;\n+ if (i == 0) {\n+ try {\n+ clearReleasables(Lifetime.CONTEXT);\n+ } finally {\n+ doClose();\n+ }\n+ }\n+\n+ }\n+\n+ protected void alreadyClosed() {\n+ throw new IllegalStateException(\"search context is already closed can't increment refCount current count [\" + refCount() + \"]\");\n+ }\n+\n+ /**\n+ * Returns the current reference count.\n+ */\n+ public int refCount() {\n+ return this.refCount.get();\n+ }\n+ // end copy from AbstractRefCounted\n }", "filename": "core/src/main/java/org/elasticsearch/search/internal/SearchContext.java", "status": "modified" }, { "diff": "@@ -1671,8 +1671,8 @@ protected void afterAdd() throws IOException {\n }\n boolean atLeastOneFailed = false;\n for (Throwable ex : threadExceptions) {\n- assertTrue(ex.toString(), ex instanceof IOException 
|| ex instanceof AlreadyClosedException);\n if (ex != null) {\n+ assertTrue(ex.toString(), ex instanceof IOException || ex instanceof AlreadyClosedException);\n atLeastOneFailed = true;\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/translog/TranslogTests.java", "status": "modified" }, { "diff": "@@ -18,11 +18,29 @@\n */\n package org.elasticsearch.search;\n \n-\n+import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.common.bytes.BytesArray;\n+import org.elasticsearch.search.fetch.ShardFetchSearchRequest;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n \n+import com.carrotsearch.hppc.IntArrayList;\n+import org.apache.lucene.store.AlreadyClosedException;\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.index.IndexResponse;\n+import org.elasticsearch.action.search.SearchType;\n+import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.search.internal.ShardSearchLocalRequest;\n+import org.elasticsearch.search.query.QuerySearchResultProvider;\n+\n+import java.io.IOException;\n+import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.ExecutionException;\n+import java.util.concurrent.Semaphore;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.atomic.AtomicLong;\n \n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.Matchers.is;\n@@ -70,4 +88,77 @@ public void testClearIndexDelete() throws ExecutionException, InterruptedExcepti\n assertAcked(client().admin().indices().prepareDelete(\"index\"));\n assertEquals(0, service.getActiveContexts());\n }\n+\n+ public void testSearchWhileContextIsFreed() throws IOException, InterruptedException {\n+ createIndex(\"index\");\n+ client().prepareIndex(\"index\", \"type\", \"1\").setSource(\"field\", \"value\").setRefresh(true).get();\n+\n+ final SearchService service = getInstanceFromNode(SearchService.class);\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ final IndexService indexService = indicesService.indexServiceSafe(\"index\");\n+ final IndexShard indexShard = indexService.shard(0);\n+ final AtomicBoolean running = new AtomicBoolean(true);\n+ final CountDownLatch startGun = new CountDownLatch(1);\n+ final Semaphore semaphore = new Semaphore(Integer.MAX_VALUE);\n+ final AtomicLong contextId = new AtomicLong(0);\n+ final Thread thread = new Thread() {\n+ @Override\n+ public void run() {\n+ startGun.countDown();\n+ while(running.get()) {\n+ service.freeContext(contextId.get());\n+ if (randomBoolean()) {\n+ // here we trigger some refreshes to ensure the IR go out of scope such that we hit ACE if we access a search\n+ // context in a non-sane way.\n+ try {\n+ semaphore.acquire();\n+ } catch (InterruptedException e) {\n+ throw new AssertionError(e);\n+ }\n+ client().prepareIndex(\"index\", \"type\").setSource(\"field\", \"value\")\n+ .setRefresh(randomBoolean()).execute(new ActionListener<IndexResponse>() {\n+ @Override\n+ public void onResponse(IndexResponse indexResponse) {\n+ semaphore.release();\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable e) {\n+ semaphore.release();\n+ }\n+ });\n+ }\n+ }\n+ }\n+ };\n+ thread.start();\n+ startGun.await();\n+ try {\n+ final int rounds = scaledRandomIntBetween(100, 10000);\n+ for (int i = 0; i < rounds; i++) {\n+ 
try {\n+ QuerySearchResultProvider querySearchResultProvider = service.executeQueryPhase(\n+ new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.DEFAULT,\n+ new BytesArray(\"\"), new String[0], false));\n+ contextId.set(querySearchResultProvider.id());\n+ IntArrayList intCursors = new IntArrayList(1);\n+ intCursors.add(0);\n+ ShardFetchSearchRequest req = new ShardFetchSearchRequest(new SearchRequest()\n+ ,querySearchResultProvider.id(), intCursors, null /* not a scroll */);\n+ service.executeFetchPhase(req);\n+ } catch (AlreadyClosedException ex) {\n+ throw ex;\n+ } catch (IllegalStateException ex) {\n+ assertEquals(\"search context is already closed can't increment refCount current count [0]\", ex.getMessage());\n+ } catch (SearchContextMissingException ex) {\n+ // that's fine\n+ }\n+ }\n+ } finally {\n+ running.set(false);\n+ thread.join();\n+ semaphore.acquire(Integer.MAX_VALUE);\n+ }\n+ }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/search/SearchServiceTests.java", "status": "modified" } ] }
{ "body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue. Note that whether you're filing a bug report or a\nfeature request, ensure that your submission is for an\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\nBug reports on an OS that we do not support or feature requests\nspecific to an OS that we do not support will be closed.\n-->\n\n<!--\nIf you are filing a bug report, please remove the below feature\nrequest block and provide responses for all of the below items.\n-->\n\n**Elasticsearch version**: 2.3.3 and 2.1.1\n\n**Plugins installed**: []\n\n**JVM version**: 1.8.0_92\n\n**OS version**: OS X El Capitan 10.11.5\n\n**Description of the problem including expected versus actual behavior**:\n\nWhen migrating an Elasticsearch cluster from 2.1.1 to 2.3.3, a previously working code started to failing and I looked to https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking-changes-2.3.html trying to make sense of the failure sadly didn't see any mention of the change so I am unsure whether this is a regression bug or documentation issue.\n\nThe problem that I am referring to is when using the update API, which we use to do upsert of documents which works okay in 2.1.1 when index is not created already (given that `action.auto_create_index: true`) but fail with 2.3.3 with same configuration complaining that index was not found.\n\n**Steps to reproduce**:\n1. Using 2.3.3 run `curl -XPOST http://localhost:9200/index/doc_type/1/_update -d '{\"doc\": {\"name\" : \"test\"}, \"doc_as_upsert\": true}'`\n2. 
Using 2.1.1 try same query as before.\n\nWith 2.3.3 you get:\n\n```\n{\"error\":{\"root_cause\":[{\"type\":\"index_not_found_exception\",\"reason\":\"no such index\",\"resource.type\":\"index_expression\",\"resource.id\":\"...\",\"index\":\"...\"}],\"type\":\"index_not_found_exception\",\"reason\":\"no such index\",\"resource.type\":\"index_expression\",\"resource.id\":\"...\",\"index\":\"...\"},\"status\":404}\n```\n\nWith 2.1.1 it works as expected.\n\n**Provide logs (if relevant)**:\n\n```\n[dummy-index] IndexNotFoundException[no such index]\n at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:151)\n at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:95)\n at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteSingleIndex(IndexNameExpressionResolver.java:208)\n at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction.doStart(TransportInstanceSingleOperationAction.java:138)\n at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction.start(TransportInstanceSingleOperationAction.java:123)\n at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction.doExecute(TransportInstanceSingleOperationAction.java:73)\n at org.elasticsearch.action.update.TransportUpdateAction.innerExecute(TransportUpdateAction.java:147)\n at org.elasticsearch.action.update.TransportUpdateAction.doExecute(TransportUpdateAction.java:142)\n at org.elasticsearch.action.update.TransportUpdateAction.doExecute(TransportUpdateAction.java:66)\n at org.elasticsearch.action.support.TransportAction.doExecute(TransportAction.java:149)\n at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:137)\n at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:85)\n at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)\n at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)\n at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:52)\n at org.elasticsearch.rest.BaseRestHandler$HeadersAndContextCopyClient.doExecute(BaseRestHandler.java:83)\n at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)\n at org.elasticsearch.client.support.AbstractClient.update(AbstractClient.java:396)\n at org.elasticsearch.rest.action.update.RestUpdateAction.handleRequest(RestUpdateAction.java:126)\n at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:54)\n at org.elasticsearch.rest.RestController.executeHandler(RestController.java:205)\n at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:166)\n at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:128)\n at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:86)\n at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:449)\n at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:61)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n 
at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:60)\n at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)\n at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:194)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:135)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)\n at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)\n at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)\n at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)\n at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)\n at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)\n at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)\n at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)\n at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)\n at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)\n at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n```\n\n<!--\nIf you are filing a feature request, please remove the above bug\nreport block and provide responses for all of the below items.\n-->\n", "comments": [ { "body": "I 
tested your recreation with ES 2.3.3 and it worked fine. The index is created as expected and it fails if `action.auto_create_index` is set to false. I am closing the issue since I am not able to reproduce, feel free to reopen if you find a recreation that does not work.\n", "created_at": "2016-08-09T13:00:39Z" }, { "body": "@jimferenczi Sadly I cannot reopen the issue but I am updating here with reproducible steps.\n\n### Configuration\n\nInstall both elasticsearch versions 2.1.0 and 2.3.0, for each do the following:\n\nChange configuration so that `action.auto_create_index: true` and `index.mapper.dynamic: false`. \n\nTrying to create now a document fail as expected, \n\n```\nPOST /twitter:0/tweet/1/_update\n{\n \"doc\": {\n \"message\": \"test\"\n },\n \"doc_as_upsert\": true\n}\n\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"type_missing_exception\",\n \"reason\": \"type[[tweet, trying to auto create mapping, but dynamic mapping is disabled]] missing\",\n \"index\": \"twitter:0\"\n }\n ],\n \"type\": \"type_missing_exception\",\n \"reason\": \"type[[tweet, trying to auto create mapping, but dynamic mapping is disabled]] missing\",\n \"index\": \"twitter:0\"\n },\n \"status\": 404\n} \n```\n\nCreate a template: \n\n```\nPUT /_template/twitter\n{\n \"template\": \"*\",\n \"order\": 0,\n \"mappings\": {\n \"tweet\": {\n \"_source\": {\n \"enabled\": true\n },\n \"properties\": {\n \"message\": {\n \"type\": \"string\"\n }\n }\n }\n }\n}\n```\n\n### Issue:\n\nNow this is what happen when doing an upsert with both elasticsearch versions:\n\n```\nPOST /twitter:0/tweet/1/_update\n{\n \"doc\": {\n \"message\": \"test\"\n },\n \"doc_as_upsert\": true\n}\n```\n\n**Elasticsearch 2.1.0:**\n\n```\n{\n \"_index\": \"twitter:0\",\n \"_type\": \"tweet\",\n \"_id\": \"1\",\n \"_version\": 1,\n \"_shards\": {\n \"total\": 2,\n \"successful\": 1,\n \"failed\": 0\n }\n}\n```\n\n**Elasticsearch 2.3.0:**\n\n```\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"index_not_found_exception\",\n \"reason\": \"no such index\",\n \"resource.type\": \"index_expression\",\n \"resource.id\": \"twitter:0\",\n \"index\": \"twitter:0\"\n }\n ],\n \"type\": \"index_not_found_exception\",\n \"reason\": \"no such index\",\n \"resource.type\": \"index_expression\",\n \"resource.id\": \"twitter:0\",\n \"index\": \"twitter:0\"\n },\n \"status\": 404\n}\n```\n\nThanks again for looking at this bug\n", "created_at": "2016-08-17T20:18:24Z" }, { "body": "thanks for the recreation @mouadino \n\nI hoped this would be fixed by https://github.com/elastic/elasticsearch/pull/19478 but apparently not. This still fails on 2.4.0. \n\nI think that theoretically this should work, but it might be complicated. The logic should be:\n- does the index exist?\n- if not, can i create the index? (check auto create)\n- is there a matching template?\n- create the index with the mappings\n- does the mapping exist?\n- if not, can i create the mapping? (check index.mapper.dynamic)\n- create the mapping\n\nThe downside here is that (with auto create true, and mapper dynamic false) you can end up creating an index even though you can't create the type. But i think that makes sense.\n\n@jimferenczi could you take a look please?\n", "created_at": "2016-08-18T09:43:49Z" }, { "body": "> I hoped this would be fixed by #19478 but apparently not. This still fails on 2.4.0.\n\nWhen set at the node level we don't check if a template would have matched. 
https://github.com/elastic/elasticsearch/pull/19478 fixed the case where it is set at the index level for instance within the template:\n\n```\n{\n \"template\": \"*\",\n \"order\":0,\n \"settings\": {\n \"index.mapper.dynamic\": false\n },\n \"mappings\": {\n \"tweet\": {\n \"_source\":{\n \"enabled\":true\n },\n \"properties\": {\n \"message\": {\n \"type\": \"string\"\n }\n }\n }\n }\n}\n```\n\n... this works with the downside that @clintongormley mentioned:\n\n> The downside here is that (with auto create true, and mapper dynamic false) you can end up creating an index even though you can't create the type.\n\nSince it is not possible to set this setting at the node level in 5.x I don't think we should try to fix it in 2.x but rather document the alternative ?\n", "created_at": "2016-08-18T10:11:27Z" }, { "body": "> Since it is not possible to set this setting at the node level in 5.x I don't think we should try to fix it in 2.x but rather document the alternative ?\n\nAhhh thanks, I'd forgotten about that. Same reason i closed a similar issue previously.\n\n+1 to docs\n", "created_at": "2016-08-18T10:53:36Z" } ], "number": 19857, "title": "Update API doesn't work anymore when index is not created yet" }
{ "body": "This is not supported in 5.x (index settings cannot be set at the cluster level) and should be replace with a template for all indices.\nrelates #19857 \n", "number": 20096, "review_comments": [], "title": "Fix docs stating that index.mapper.dynamic can be set for all node in the elasticsearch.yml file. " }
{ "commits": [ { "message": "Fix docs stating that index.mapper.dynamic can be set for all nodes in the elasticsearch.yml file. This is not supported in 5.x (index settings cannot be set at the cluster level) and should be replace with a template for all indices." } ], "files": [ { "diff": "@@ -40,15 +40,32 @@ automatically or explicitly.\n [float]\n === Disabling automatic type creation\n \n-Automatic type creation can be disabled by setting the `index.mapper.dynamic`\n-setting to `false`, either by setting the default value in the\n-`config/elasticsearch.yml` file, or per-index as an index setting:\n+Automatic type creation can be disabled per-index by setting the `index.mapper.dynamic`\n+setting to `false` in the index settings:\n \n [source,js]\n --------------------------------------------------\n-PUT data/_settings <1>\n+PUT data/_settings\n {\n- \"index.mapper.dynamic\":false\n+ \"index.mapper.dynamic\":false <1>\n+}\n+--------------------------------------------------\n+// CONSOLE\n+// TEST[continued]\n+\n+<1> Disable automatic type creation for the index named \"data\".\n+\n+Automatic type creation can also be disabled for all indices by setting an index template:\n+\n+[source,js]\n+--------------------------------------------------\n+PUT _template/template_all\n+{\n+ \"template\": \"*\",\n+ \"order\":0,\n+ \"settings\": {\n+ \"index.mapper.dynamic\": false <1>\n+ }\n }\n --------------------------------------------------\n // CONSOLE", "filename": "docs/reference/mapping/dynamic-mapping.asciidoc", "status": "modified" } ] }
{ "body": "Reproducible using Java API 1.3.x. See repro at : https://gist.github.com/ppf2/1a14a31b3add6c450c10\n\nIn short, if you have a doc type with source disabled and if you attempt to use the Java API to search for docs of this type specifying source filtering via SearchSourceBuilder, you will receive an unintuitive exception:\n\n``` java\nException in thread \"main\" org.elasticsearch.action.search.SearchPhaseExecutionException: Failed to execute phase [query_fetch], all shards failed; shardFailures {[bqSWDvHIT7WCtEJKymhnHA][disable_source][0]: ElasticsearchIllegalArgumentException[No matching content type for null]}\n```\n", "comments": [ { "body": "+1\n", "created_at": "2015-08-13T12:39:59Z" }, { "body": "On latest master this causes an NPE. I'll take a look, it seems like a similar fix to #18957.\n\n```\nPUT disable_source\n{\n \"settings\": {\n \"number_of_replicas\": 0,\n \"number_of_shards\": 1\n },\n \"mappings\": {\n \"type\": {\n \"_source\": {\n \"enabled\": false\n }\n }\n }\n}\n\nPOST disable_source/type/1\n{\n \"text1\": \"hello1\",\n \"text2\": \"hello2\"\n}\n\nPOST disable_source/_search\n{\n \"_source\": [\n \"text1\",\n \"text2\"\n ]\n}\n```\n\nNPE response:\n\n```\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"null_pointer_exception\",\n \"reason\": null\n }\n ],\n \"type\": \"search_phase_execution_exception\",\n \"reason\": \"all shards failed\",\n \"phase\": \"query_fetch\",\n \"grouped\": true,\n \"failed_shards\": [\n {\n \"shard\": 0,\n \"index\": \"disable_source\",\n \"node\": \"71DMW_mRRzKDjQFNDpU4Iw\",\n \"reason\": {\n \"type\": \"null_pointer_exception\",\n \"reason\": null\n }\n }\n ],\n \"caused_by\": {\n \"type\": \"null_pointer_exception\",\n \"reason\": null\n }\n },\n \"status\": 500\n}\n```\n", "created_at": "2016-08-20T21:37:25Z" } ], "number": 7758, "title": "Better handling when attempting to search with source filtering against docs with disabled source" }
{ "body": "Fix NPE during search with source filtering if the source is disabled.\nInstead of throwing an NPE, search response hits with source filtering will not contain the source if it is disabled in the mapping.\n\nTests pass: `gradle test` `gradle core:integTest`\n\nCloses #7758\n", "number": 20093, "review_comments": [ { "body": "Maybe rename this `testBasicOperation` or something?\n", "created_at": "2016-08-26T19:00:22Z" }, { "body": "Personally I'd write this as `assertEquals(singletonMap(\"field\", \"value\"), hitContext.hit().sourceAsMap());`. If you like it better the way you have it that is fine too.\n", "created_at": "2016-08-26T19:02:22Z" } ], "title": "Fix NPE during search with source filtering if the source is disabled." }
{ "commits": [ { "message": "Fix NPE during search with source filtering if the source is disabled.\nInstead of throwing an NPE, a search response with source filtering will not contain the source if it is disabled in the mapping.\n\nCloses #7758" }, { "message": "Created unit tests for FetchSourceSubPhase. Tests similar to SourceFetchingIT.\nRemoved SourceFetchingIT#testSourceDisabled (now covered via unit test FetchSourceSubPhaseTests#testSourceDisabled)." }, { "message": "Updated FetchSouceSubPhase unit tests per comments.\nRenamed main unit test method.\nUse assertEquals and assertNull instead of assertThat (less code)." } ], "files": [ { "diff": "@@ -35,19 +35,22 @@ public void hitExecute(SearchContext context, HitContext hitContext) {\n if (context.sourceRequested() == false) {\n return;\n }\n+ SourceLookup source = context.lookup().source();\n+ if (source.internalSourceRef() == null) {\n+ return; // source disabled in the mapping\n+ }\n FetchSourceContext fetchSourceContext = context.fetchSourceContext();\n assert fetchSourceContext.fetchSource();\n if (fetchSourceContext.includes().length == 0 && fetchSourceContext.excludes().length == 0) {\n- hitContext.hit().sourceRef(context.lookup().source().internalSourceRef());\n+ hitContext.hit().sourceRef(source.internalSourceRef());\n return;\n }\n \n- SourceLookup source = context.lookup().source();\n Object value = source.filter(fetchSourceContext.includes(), fetchSourceContext.excludes());\n try {\n final int initialCapacity = Math.min(1024, source.internalSourceRef().length());\n BytesStreamOutput streamOutput = new BytesStreamOutput(initialCapacity);\n- XContentBuilder builder = new XContentBuilder(context.lookup().source().sourceContentType().xContent(), streamOutput);\n+ XContentBuilder builder = new XContentBuilder(source.sourceContentType().xContent(), streamOutput);\n builder.value(value);\n hitContext.hit().sourceRef(builder.bytes());\n } catch (IOException e) {", "filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhase.java", "status": "modified" }, { "diff": "@@ -0,0 +1,134 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.fetch.subphase;\n+\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.search.fetch.FetchSubPhase;\n+import org.elasticsearch.search.internal.InternalSearchHit;\n+import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.search.lookup.SearchLookup;\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.TestSearchContext;\n+\n+import java.io.IOException;\n+import java.util.Collections;\n+\n+public class FetchSourceSubPhaseTests extends ESTestCase {\n+\n+ static class FetchSourceSubPhaseTestSearchContext extends TestSearchContext {\n+\n+ FetchSourceContext context;\n+ BytesReference source;\n+\n+ FetchSourceSubPhaseTestSearchContext(FetchSourceContext context, BytesReference source) {\n+ super(null);\n+ this.context = context;\n+ this.source = source;\n+ }\n+\n+ @Override\n+ public boolean sourceRequested() {\n+ return context != null && context.fetchSource();\n+ }\n+\n+ @Override\n+ public FetchSourceContext fetchSourceContext() {\n+ return context;\n+ }\n+\n+ @Override\n+ public SearchLookup lookup() {\n+ SearchLookup lookup = super.lookup();\n+ lookup.source().setSource(source);\n+ return lookup;\n+ }\n+ }\n+\n+ public void testFetchSource() throws IOException {\n+ XContentBuilder source = XContentFactory.jsonBuilder().startObject()\n+ .field(\"field\", \"value\")\n+ .endObject();\n+ FetchSubPhase.HitContext hitContext = hitExecute(source, true, null, null);\n+ assertEquals(Collections.singletonMap(\"field\",\"value\"), hitContext.hit().sourceAsMap());\n+ }\n+\n+ public void testBasicFiltering() throws IOException {\n+ XContentBuilder source = XContentFactory.jsonBuilder().startObject()\n+ .field(\"field1\", \"value\")\n+ .field(\"field2\", \"value2\")\n+ .endObject();\n+ FetchSubPhase.HitContext hitContext = hitExecute(source, false, null, null);\n+ assertNull(hitContext.hit().sourceAsMap());\n+\n+ hitContext = hitExecute(source, true, \"field1\", null);\n+ assertEquals(Collections.singletonMap(\"field1\",\"value\"), hitContext.hit().sourceAsMap());\n+\n+ hitContext = hitExecute(source, true, \"hello\", null);\n+ assertEquals(Collections.emptyMap(), hitContext.hit().sourceAsMap());\n+\n+ hitContext = hitExecute(source, true, \"*\", \"field2\");\n+ assertEquals(Collections.singletonMap(\"field1\",\"value\"), hitContext.hit().sourceAsMap());\n+ }\n+\n+ public void testMultipleFiltering() throws IOException {\n+ XContentBuilder source = XContentFactory.jsonBuilder().startObject()\n+ .field(\"field\", \"value\")\n+ .field(\"field2\", \"value2\")\n+ .endObject();\n+ FetchSubPhase.HitContext hitContext = hitExecuteMultiple(source, true, new String[]{\"*.notexisting\", \"field\"}, null);\n+ assertEquals(Collections.singletonMap(\"field\",\"value\"), hitContext.hit().sourceAsMap());\n+\n+ hitContext = hitExecuteMultiple(source, true, new String[]{\"field.notexisting.*\", \"field\"}, null);\n+ assertEquals(Collections.singletonMap(\"field\",\"value\"), hitContext.hit().sourceAsMap());\n+ }\n+\n+ public void testSourceDisabled() throws IOException {\n+ FetchSubPhase.HitContext hitContext = hitExecute(null, true, null, null);\n+ assertNull(hitContext.hit().sourceAsMap());\n+\n+ hitContext = hitExecute(null, 
false, null, null);\n+ assertNull(hitContext.hit().sourceAsMap());\n+\n+ hitContext = hitExecute(null, true, \"field1\", null);\n+ assertNull(hitContext.hit().sourceAsMap());\n+\n+ hitContext = hitExecuteMultiple(null, true, new String[]{\"*\"}, new String[]{\"field2\"});\n+ assertNull(hitContext.hit().sourceAsMap());\n+ }\n+\n+ private FetchSubPhase.HitContext hitExecute(XContentBuilder source, boolean fetchSource, String include, String exclude) {\n+ return hitExecuteMultiple(source, fetchSource,\n+ include == null ? Strings.EMPTY_ARRAY : new String[]{include},\n+ exclude == null ? Strings.EMPTY_ARRAY : new String[]{exclude});\n+ }\n+\n+ private FetchSubPhase.HitContext hitExecuteMultiple(XContentBuilder source, boolean fetchSource, String[] includes, String[] excludes) {\n+ FetchSourceContext fetchSourceContext = new FetchSourceContext(fetchSource, includes, excludes);\n+ SearchContext searchContext = new FetchSourceSubPhaseTestSearchContext(fetchSourceContext, source == null ? null : source.bytes());\n+ FetchSubPhase.HitContext hitContext = new FetchSubPhase.HitContext();\n+ hitContext.reset(new InternalSearchHit(1, null, null, null), null, 1, null);\n+ FetchSourceSubPhase phase = new FetchSourceSubPhase();\n+ phase.hitExecute(searchContext, hitContext);\n+ return hitContext;\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhaseTests.java", "status": "added" } ] }
{ "body": "**Elasticsearch version**:\nElasticsearch 5.0.0-alpha4\n\nSetting `cluster.routing.allocation.same_shard.host` (in elasticsearch.yml or command line arguments) throws exception and prevents node to start.\n\n```\n[2016-08-18 13:55:11,399][INFO ][node ] [Phineas T. Horton] version[5.0.0-alpha4], pid[82298], build[3f5b994/2016-06-27T16:23:46.861Z], OS[Mac OS X/10.11.6/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_60/25.60-b23]\n[2016-08-18 13:55:11,400][INFO ][node ] [Phineas T. Horton] initializing ...\n[2016-08-18 13:55:12,213][INFO ][plugins ] [Phineas T. Horton] modules [percolator, lang-mustache, lang-painless, reindex, aggs-matrix-stats, lang-expression, ingest-common, lang-groovy], plugins [analysis-kuromoji, x-pack]\nException in thread \"main\" java.lang.IllegalArgumentException: unknown setting [cluster.routing.allocation.same_shard.host] did you mean any of [cluster.routing.allocation.balance.shard, cluster.routing.allocation.balance.threshold, cluster.routing.allocation.disk.watermark.high, cluster.routing.allocation.disk.watermark.low, cluster.routing.allocation.total_shards_per_node]?\n at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:270)\n at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:238)\n at org.elasticsearch.common.settings.SettingsModule.<init>(SettingsModule.java:138)\n at org.elasticsearch.node.Node.<init>(Node.java:236)\n at org.elasticsearch.node.Node.<init>(Node.java:173)\n at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:175)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:175)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:250)\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:96)\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:91)\n at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:91)\n at org.elasticsearch.cli.Command.main(Command.java:53)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:70)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:63)\n```\n", "comments": [], "number": 20045, "title": "Can't set cluster.routing.allocation.same_shard.host " }
{ "body": "Fixes #20045\n", "number": 20046, "review_comments": [ { "body": "I'm unsure if this setting should be dynamic.\n", "created_at": "2016-08-18T10:53:26Z" }, { "body": "As this setting should usually be only set once, it is probably simpler to leave it non-dynamic (as @jasontedor suggested and as it was before this PR). In case where this must absolutely be updated on a production cluster, rolling restart (of master nodes) with config update is always possible.\n", "created_at": "2016-08-18T13:00:12Z" }, { "body": "Right. I wondered if it should be dynamic or not.\nThe reason I made it dynamic is [document](https://www.elastic.co/guide/en/elasticsearch/reference/master/shards-allocation.html) says it's dynamic.\n(and thought making it dynamic maybe useful)\n\nI'll make it non-dynamic.\n", "created_at": "2016-08-18T13:10:39Z" } ], "title": "Move cluster.routing.allocation.same_shard.host setting to new settings infrastructure" }
{ "commits": [ { "message": "Move cluster.routing.allocation.same_shard.host setting to new settings infrastructure\n\nFixes #20045" } ], "files": [ { "diff": "@@ -23,13 +23,14 @@\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n \n /**\n * An allocation decider that prevents multiple instances of the same shard to\n * be allocated on the same <tt>node</tt>.\n *\n- * The {@value #SAME_HOST_SETTING} setting allows to perform a check to prevent\n+ * The {@link #CLUSTER_ROUTING_ALLOCATION_SAME_HOST_SETTING} setting allows to perform a check to prevent\n * allocation of multiple instances of the same shard on a single <tt>host</tt>,\n * based on host name and host address. Defaults to `false`, meaning that no\n * check is performed by default.\n@@ -44,14 +45,15 @@ public class SameShardAllocationDecider extends AllocationDecider {\n \n public static final String NAME = \"same_shard\";\n \n- public static final String SAME_HOST_SETTING = \"cluster.routing.allocation.same_shard.host\";\n+ public static final Setting<Boolean> CLUSTER_ROUTING_ALLOCATION_SAME_HOST_SETTING =\n+ Setting.boolSetting(\"cluster.routing.allocation.same_shard.host\", false, Setting.Property.NodeScope);\n \n private final boolean sameHost;\n \n public SameShardAllocationDecider(Settings settings) {\n super(settings);\n \n- this.sameHost = settings.getAsBoolean(SAME_HOST_SETTING, false);\n+ this.sameHost = CLUSTER_ROUTING_ALLOCATION_SAME_HOST_SETTING.get(settings);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/SameShardAllocationDecider.java", "status": "modified" }, { "diff": "@@ -39,6 +39,7 @@\n import org.elasticsearch.cluster.routing.allocation.decider.ConcurrentRebalanceAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n+import org.elasticsearch.cluster.routing.allocation.decider.SameShardAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.ShardsLimitAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.SnapshotInProgressAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.ThrottlingAllocationDecider;\n@@ -198,6 +199,7 @@ public void apply(Settings value, Settings current, Settings previous) {\n DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_DISK_THRESHOLD_ENABLED_SETTING,\n DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_INCLUDE_RELOCATIONS_SETTING,\n DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_REROUTE_INTERVAL_SETTING,\n+ SameShardAllocationDecider.CLUSTER_ROUTING_ALLOCATION_SAME_HOST_SETTING,\n InternalClusterInfoService.INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL_SETTING,\n InternalClusterInfoService.INTERNAL_CLUSTER_INFO_TIMEOUT_SETTING,\n SnapshotInProgressAllocationDecider.CLUSTER_ROUTING_ALLOCATION_SNAPSHOT_RELOCATION_ENABLED_SETTING,", "filename": "core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java", "status": "modified" }, { "diff": "@@ -47,8 +47,8 @@ public class SameShardRoutingTests extends ESAllocationTestCase {\n private final ESLogger logger = Loggers.getLogger(SameShardRoutingTests.class);\n \n public void testSameHost() {\n- AllocationService strategy = 
createAllocationService(Settings.builder()\n- .put(SameShardAllocationDecider.SAME_HOST_SETTING, true).build());\n+ AllocationService strategy = createAllocationService(\n+ Settings.builder().put(SameShardAllocationDecider.CLUSTER_ROUTING_ALLOCATION_SAME_HOST_SETTING.getKey(), true).build());\n \n MetaData metaData = MetaData.builder()\n .put(IndexMetaData.builder(\"test\").settings(settings(Version.CURRENT)).numberOfShards(2).numberOfReplicas(1))", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/SameShardRoutingTests.java", "status": "modified" } ] }
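The pattern in the diff above generalizes to any setting moved onto the new settings infrastructure: declare a typed `Setting`, list it in `ClusterSettings` so node startup validation accepts it, and read it with `Setting#get`. A small self-contained sketch of that pattern, using only the `Setting`/`Settings` calls visible in the diff (the class name is illustrative):

```java
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;
import org.elasticsearch.common.settings.Settings;

public class SameHostSettingSketch {
    // Mirrors the pattern from the diff: a node-scope (non-dynamic) boolean setting.
    // Making it updatable at runtime would mean adding Property.Dynamic, which the
    // review discussion decided against for this particular setting.
    static final Setting<Boolean> SAME_HOST =
        Setting.boolSetting("cluster.routing.allocation.same_shard.host", false, Property.NodeScope);

    public static void main(String[] args) {
        Settings fromConfig = Settings.builder()
            .put("cluster.routing.allocation.same_shard.host", true)
            .build();
        System.out.println(SAME_HOST.get(fromConfig));      // true
        System.out.println(SAME_HOST.get(Settings.EMPTY));   // false, the declared default
    }
}
```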
{ "body": "I was looking through the indices on one of our hosts and saw some indices that started with - (dash), eg \"-2016.04.15\". I'm not sure why it was there - but not to critical, elasticsearch lets you do it.\n\n```\nPOST -2016.12.12/test\n{\n \"name\":\"abc\"\n}\n```\n\nI tried to delete these indices by issuing\n `DELETE -2016.*`\n\nThe problem is that this was interpreted as \n`DELETE everything except for indices starting with 2016.`\nwhich basically means delete the entire database - and after a few poignant seconds, that's what it did. \n\nI have since become acquainted with https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-index.html and the ability to include or exclude indices with the + or - operator, but it seems that this is more dangerous than useful, at least if you are unfortunate enough to have indices that start with -.\n\nI understand that it's a \"feature\", but it doesn't seems practically so useful. Perhaps there could be a special query string for DELETE like \"wildcard=inclusive\" or \"=exclusive\"... As it is now, I'm not even sure how I would delete the indices that start with \"-2016.\" I can't do \"+-2016.*\"\n", "comments": [ { "body": "So curiously, you can `DELETE -whole_index_name`. It's only if you specify the wildcard that the `+/-` behaviour kicks in. This in itself sounds like a bug.\n\nIn order to remove ambiguity, I think we should prevent index names starting with `+` or `-`.\n\nRelated https://github.com/elastic/elasticsearch/issues/9059\n", "created_at": "2016-08-04T12:47:21Z" }, { "body": "yes.. that would be fine.\n", "created_at": "2016-08-04T13:33:39Z" }, { "body": "Discussed in Fix It Friday and we agreed that we should fix the bug that the `+/-` behaviour does not work unless there is a wildcard, and also prevent index names starting with a `+` or `-`\n", "created_at": "2016-08-05T09:49:54Z" }, { "body": "@colings86 \nI'd like to pull request for this issue. but I have some questions to ask:\n1. what do u mean by \"the +/- behaviour does not work unless there is a wildcard\", let's say I had only one index \"twitter\", and I delete with name \"+twitter\", what will happen? take as index not found or delete twitter?\n", "created_at": "2016-08-09T09:04:26Z" }, { "body": "@colings86 \nalso is there anywhere ES define \"wildcard\"?\n", "created_at": "2016-08-09T09:08:43Z" }, { "body": "FWIW - the case of using an exclusion in the index name in the docs was together with an inclusion - `+test*,-test3` I question the usefulness of an exclusion by itself. How often do you really want to do `DELETE -test-*` and doing a `+test*` is not needed, because it's inherently \"+\". You would just do `DELETE test*`\n\nI would offer that because the implications of someone misunderstanding and and its questionable need, perhaps you should consider putting it in a query string option, then it's easier that it's more intentional. eg to do the command that would now be `DELETE -test-*`, instead do a `DELETE *?exclude=test-*`. Then it's much more obvious what you're doing but you still have the same power. \n\nThe truth is that this is really mainly a problem with DELETE, perhaps these changes should just be made here.\n", "created_at": "2016-08-09T09:52:55Z" }, { "body": "note - some of this discussion is still relevant even if you remove dashes from the start of queries. 
DELETE -test\\* is still just as dangerous and might not be obvious to some users what would happen.\n", "created_at": "2016-08-09T09:57:18Z" } ], "number": 19800, "title": "indices starting with - (dash) cause problems if used with wildcards" }
{ "body": "Previously this was possible, which was problematic when issuing a\nrequest like `DELETE /-myindex`, which was interpretted as \"delete\neverything except for myindex\".\n\nResolves #19800\n", "number": 20033, "review_comments": [ { "body": "Maybe we can add a note stating that previously created indices starting with a hyphen or a `+` will still operate as normal?\n", "created_at": "2016-08-17T21:03:57Z" }, { "body": "Sure, will add and merge\n", "created_at": "2016-08-17T21:12:16Z" } ], "title": "Disallow creating indices starting with '-' or '+'" }
{ "commits": [ { "message": "Disallow creating indices starting with '-' or '+'\n\nPreviously this was possible, which was problematic when issuing a\nrequest like `DELETE /-myindex`, which was interpretted as \"delete\neverything except for myindex\".\n\nResolves #19800" } ], "files": [ { "diff": "@@ -144,7 +144,7 @@ public MetaDataCreateIndexService(Settings settings, ClusterService clusterServi\n this.activeShardsObserver = new ActiveShardsObserver(settings, clusterService, threadPool);\n }\n \n- public void validateIndexName(String index, ClusterState state) {\n+ public static void validateIndexName(String index, ClusterState state) {\n if (state.routingTable().hasIndex(index)) {\n throw new IndexAlreadyExistsException(state.routingTable().index(index).getIndex());\n }\n@@ -157,8 +157,8 @@ public void validateIndexName(String index, ClusterState state) {\n if (index.contains(\"#\")) {\n throw new InvalidIndexNameException(index, \"must not contain '#'\");\n }\n- if (index.charAt(0) == '_') {\n- throw new InvalidIndexNameException(index, \"must not start with '_'\");\n+ if (index.charAt(0) == '_' || index.charAt(0) == '-' || index.charAt(0) == '+') {\n+ throw new InvalidIndexNameException(index, \"must not start with '_', '-', or '+'\");\n }\n if (!index.toLowerCase(Locale.ROOT).equals(index)) {\n throw new InvalidIndexNameException(index, \"must be lowercase\");", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java", "status": "modified" }, { "diff": "@@ -257,7 +257,7 @@ public ClusterState execute(ClusterState currentState) {\n if (currentIndexMetaData == null) {\n // Index doesn't exist - create it and start recovery\n // Make sure that the index we are about to create has a validate name\n- createIndexService.validateIndexName(renamedIndexName, currentState);\n+ MetaDataCreateIndexService.validateIndexName(renamedIndexName, currentState);\n createIndexService.validateIndexSettings(renamedIndexName, snapshotIndexMetaData.getSettings());\n IndexMetaData.Builder indexMdBuilder = IndexMetaData.builder(snapshotIndexMetaData).state(IndexMetaData.State.OPEN).index(renamedIndexName);\n indexMdBuilder.settings(Settings.builder().put(snapshotIndexMetaData.getSettings()).put(IndexMetaData.SETTING_INDEX_UUID, UUIDs.randomBase64UUID()));", "filename": "core/src/main/java/org/elasticsearch/snapshots/RestoreService.java", "status": "modified" }, { "diff": "@@ -191,7 +191,9 @@ public void testValidateIndexName() throws Exception {\n \n validateIndexName(\"index#name\", \"must not contain '#'\");\n \n- validateIndexName(\"_indexname\", \"must not start with '_'\");\n+ validateIndexName(\"_indexname\", \"must not start with '_', '-', or '+'\");\n+ validateIndexName(\"-indexname\", \"must not start with '_', '-', or '+'\");\n+ validateIndexName(\"+indexname\", \"must not start with '_', '-', or '+'\");\n \n validateIndexName(\"INDEXNAME\", \"must be lowercase\");\n \n@@ -201,7 +203,7 @@ public void testValidateIndexName() throws Exception {\n \n private void validateIndexName(String indexName, String errorMessage) {\n InvalidIndexNameException e = expectThrows(InvalidIndexNameException.class,\n- () -> getCreateIndexService().validateIndexName(indexName, ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING\n+ () -> MetaDataCreateIndexService.validateIndexName(indexName, ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING\n .getDefault(Settings.EMPTY)).build()));\n assertThat(e.getMessage(), endsWith(errorMessage));\n }", "filename": 
"core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexServiceTests.java", "status": "modified" }, { "diff": "@@ -49,3 +49,11 @@ CPU usage can be obtained from `OsStats.Cpu#getPercent`.\n \n Suggest stats exposed through `suggest` in indices stats has been merged\n with `search` stats. `suggest` stats is exposed as part of `search` stats.\n+\n+==== Creating indices starting with '-' or '+'\n+\n+Elasticsearch no longer allows indices to be created started with '-' or '+', so\n+that the multi-index matching and expansion is not confused. It was previously\n+possible (but a really bad idea) to create indices starting with a hyphen or\n+plus sign. Any index already existing with these preceding characters will\n+continue to work normally.", "filename": "docs/reference/migration/migrate_5_0/index-apis.asciidoc", "status": "modified" } ] }
{ "body": "We recently moved the registration of named writeables to pull approach. The task status named writeables were left out of this refactoring and should be migrated as well. As a result, transport client doesn't see the `BulkByScrollTask.Status` as its registration happens in `NetworkModule` after the transport client pulled all the named writeables during its initialization.\n\nI guess we want to create a TaskPlugin interface like we have done with SearchPlugin, ActionPlugin etc.\n", "comments": [ { "body": "This can be tested by removing the @ClusterScope annotation from ReindexTestCase, which currently disables transport client in all of the reindex integration tests. One of the problems that we would have caught using transport client is #19977, another one is this issue. Once we have resolved this we can enable the transport client in those tests.\n", "created_at": "2016-08-12T18:25:49Z" }, { "body": "> @ClusterScope\n\nSo I added that annotation because it was on the test case from that I cribbed the module test from. I can't remember which one it was, but I expect we use this annotation all over the place for dubious reasons. Maybe we should remove go through the uses and prune them?\n", "created_at": "2016-08-12T18:49:00Z" }, { "body": "> TaskPlugin\n\nI'd put the method in ActionPlugin because tasks are always paired with some action. There is also a `registerTaskStatus` which maybe can move in there too....\n", "created_at": "2016-08-12T18:51:34Z" }, { "body": "> I can't remember which one it was, but I expect we use this annotation all over the place for dubious reasons. Maybe we should remove go through the uses and prune them?\n\nI had a quick look, and I don't see that many usages, most seem legit, but sure we should probably go over the tests that use it and re-evaluate.\n", "created_at": "2016-08-12T19:03:54Z" }, { "body": "I can fix this on Monday if no one else claims it first.\n", "created_at": "2016-08-13T13:22:28Z" }, { "body": "Claimed. I'll work on it this morning.\n", "created_at": "2016-08-15T13:38:59Z" } ], "number": 19979, "title": "Reindex cancel and status don't work via transport client" }
{ "body": "Fixes reindex's status requests through the transport client and reworks `ListTasksResponse` so its `toString` isn't breaky with the transport client.\n\nReworks #19773\nCloses #19979\n", "number": 19997, "review_comments": [ { "body": "No code was added to this test class, only removed, so is this added import necessary?\n", "created_at": "2016-08-16T13:47:14Z" }, { "body": "Nevermind, I see, it was a remnant from the imports up top, just got shifted around\n", "created_at": "2016-08-16T13:47:41Z" } ], "title": "Fix reindex with transport client" }
{ "commits": [ { "message": "Fix reindex under the transport client\n\nThe big change here is cleaning up the `TaskListResponse` so it doesn't\nhave a breaky `toString` implementation. That was causing the reindex\ntests to break.\n\nAlso removed `NetworkModule#registerTaskStatus` which is part of the\nPlugin API. Use `Plugin#getNamedWriteables` instead." } ], "files": [ { "diff": "@@ -22,7 +22,6 @@\n import org.elasticsearch.action.FailedNodeException;\n import org.elasticsearch.action.TaskOperationFailure;\n import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;\n-import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.tasks.TaskInfo;\n \n import java.util.List;\n@@ -35,13 +34,8 @@ public class CancelTasksResponse extends ListTasksResponse {\n public CancelTasksResponse() {\n }\n \n- public CancelTasksResponse(DiscoveryNodes discoveryNodes) {\n- super(discoveryNodes);\n- }\n-\n public CancelTasksResponse(List<TaskInfo> tasks, List<TaskOperationFailure> taskFailures, List<? extends FailedNodeException>\n- nodeFailures, DiscoveryNodes discoveryNodes) {\n- super(tasks, taskFailures, nodeFailures, discoveryNodes);\n+ nodeFailures) {\n+ super(tasks, taskFailures, nodeFailures);\n }\n-\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/cancel/CancelTasksResponse.java", "status": "modified" }, { "diff": "@@ -66,7 +66,7 @@ public TransportCancelTasksAction(Settings settings, ThreadPool threadPool, Clus\n TransportService transportService, ActionFilters actionFilters, IndexNameExpressionResolver\n indexNameExpressionResolver) {\n super(settings, CancelTasksAction.NAME, threadPool, clusterService, transportService, actionFilters,\n- indexNameExpressionResolver, CancelTasksRequest::new, () -> new CancelTasksResponse(clusterService.state().nodes()),\n+ indexNameExpressionResolver, CancelTasksRequest::new, CancelTasksResponse::new,\n ThreadPool.Names.MANAGEMENT);\n transportService.registerRequestHandler(BAN_PARENT_ACTION_NAME, BanParentTaskRequest::new, ThreadPool.Names.SAME, new\n BanParentRequestHandler());\n@@ -75,7 +75,7 @@ public TransportCancelTasksAction(Settings settings, ThreadPool threadPool, Clus\n @Override\n protected CancelTasksResponse newResponse(CancelTasksRequest request, List<TaskInfo> tasks, List<TaskOperationFailure>\n taskOperationFailures, List<FailedNodeException> failedNodeExceptions) {\n- return new CancelTasksResponse(tasks, taskOperationFailures, failedNodeExceptions, clusterService.state().nodes());\n+ return new CancelTasksResponse(tasks, taskOperationFailures, failedNodeExceptions);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/cancel/TransportCancelTasksAction.java", "status": "modified" }, { "diff": "@@ -51,21 +51,14 @@ public class ListTasksResponse extends BaseTasksResponse implements ToXContent {\n \n private List<TaskGroup> groups;\n \n- private final DiscoveryNodes discoveryNodes;\n-\n public ListTasksResponse() {\n- this(null, null, null, null);\n- }\n-\n- public ListTasksResponse(DiscoveryNodes discoveryNodes) {\n- this(null, null, null, discoveryNodes);\n+ this(null, null, null);\n }\n \n public ListTasksResponse(List<TaskInfo> tasks, List<TaskOperationFailure> taskFailures,\n- List<? extends FailedNodeException> nodeFailures, DiscoveryNodes discoveryNodes) {\n+ List<? extends FailedNodeException> nodeFailures) {\n super(taskFailures, nodeFailures);\n this.tasks = tasks == null ? 
Collections.emptyList() : Collections.unmodifiableList(new ArrayList<>(tasks));\n- this.discoveryNodes = discoveryNodes;\n }\n \n @Override\n@@ -90,6 +83,9 @@ public Map<String, List<TaskInfo>> getPerNodeTasks() {\n return perNodeTasks;\n }\n \n+ /**\n+ * Get the tasks found by this request grouped by parent tasks.\n+ */\n public List<TaskGroup> getTaskGroups() {\n if (groups == null) {\n buildTaskGroups();\n@@ -125,12 +121,76 @@ private void buildTaskGroups() {\n this.groups = Collections.unmodifiableList(topLevelTasks.stream().map(TaskGroup.Builder::build).collect(Collectors.toList()));\n }\n \n+ /**\n+ * Get the tasks found by this request.\n+ */\n public List<TaskInfo> getTasks() {\n return tasks;\n }\n \n+ /**\n+ * Convert this task response to XContent grouping by executing nodes.\n+ */\n+ public XContentBuilder toXContentGroupedByNode(XContentBuilder builder, Params params, DiscoveryNodes discoveryNodes)\n+ throws IOException {\n+ toXContentCommon(builder, params);\n+ builder.startObject(\"nodes\");\n+ for (Map.Entry<String, List<TaskInfo>> entry : getPerNodeTasks().entrySet()) {\n+ DiscoveryNode node = discoveryNodes.get(entry.getKey());\n+ builder.startObject(entry.getKey());\n+ if (node != null) {\n+ // If the node is no longer part of the cluster, oh well, we'll just skip it's useful information.\n+ builder.field(\"name\", node.getName());\n+ builder.field(\"transport_address\", node.getAddress().toString());\n+ builder.field(\"host\", node.getHostName());\n+ builder.field(\"ip\", node.getAddress());\n+\n+ builder.startArray(\"roles\");\n+ for (DiscoveryNode.Role role : node.getRoles()) {\n+ builder.value(role.getRoleName());\n+ }\n+ builder.endArray();\n+\n+ if (!node.getAttributes().isEmpty()) {\n+ builder.startObject(\"attributes\");\n+ for (Map.Entry<String, String> attrEntry : node.getAttributes().entrySet()) {\n+ builder.field(attrEntry.getKey(), attrEntry.getValue());\n+ }\n+ builder.endObject();\n+ }\n+ }\n+ builder.startObject(\"tasks\");\n+ for(TaskInfo task : entry.getValue()) {\n+ builder.field(task.getTaskId().toString());\n+ task.toXContent(builder, params);\n+ }\n+ builder.endObject();\n+ builder.endObject();\n+ }\n+ builder.endObject();\n+ return builder;\n+ }\n+\n+ /**\n+ * Convert this response to XContent grouping by parent tasks.\n+ */\n+ public XContentBuilder toXContentGroupedByParents(XContentBuilder builder, Params params) throws IOException {\n+ toXContentCommon(builder, params);\n+ builder.startObject(\"tasks\");\n+ for (TaskGroup group : getTaskGroups()) {\n+ builder.field(group.getTaskInfo().getTaskId().toString());\n+ group.toXContent(builder, params);\n+ }\n+ builder.endObject();\n+ return builder;\n+ }\n+\n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n+ return toXContentGroupedByParents(builder, params);\n+ }\n+\n+ private void toXContentCommon(XContentBuilder builder, Params params) throws IOException {\n if (getTaskFailures() != null && getTaskFailures().size() > 0) {\n builder.startArray(\"task_failures\");\n for (TaskOperationFailure ex : getTaskFailures()){\n@@ -150,51 +210,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n }\n builder.endArray();\n }\n- String groupBy = params.param(\"group_by\", \"nodes\");\n- if (\"nodes\".equals(groupBy)) {\n- builder.startObject(\"nodes\");\n- for (Map.Entry<String, List<TaskInfo>> entry : getPerNodeTasks().entrySet()) {\n- DiscoveryNode node = discoveryNodes.get(entry.getKey());\n- 
builder.startObject(entry.getKey());\n- if (node != null) {\n- // If the node is no longer part of the cluster, oh well, we'll just skip it's useful information.\n- builder.field(\"name\", node.getName());\n- builder.field(\"transport_address\", node.getAddress().toString());\n- builder.field(\"host\", node.getHostName());\n- builder.field(\"ip\", node.getAddress());\n-\n- builder.startArray(\"roles\");\n- for (DiscoveryNode.Role role : node.getRoles()) {\n- builder.value(role.getRoleName());\n- }\n- builder.endArray();\n-\n- if (!node.getAttributes().isEmpty()) {\n- builder.startObject(\"attributes\");\n- for (Map.Entry<String, String> attrEntry : node.getAttributes().entrySet()) {\n- builder.field(attrEntry.getKey(), attrEntry.getValue());\n- }\n- builder.endObject();\n- }\n- }\n- builder.startObject(\"tasks\");\n- for(TaskInfo task : entry.getValue()) {\n- builder.field(task.getTaskId().toString());\n- task.toXContent(builder, params);\n- }\n- builder.endObject();\n- builder.endObject();\n- }\n- builder.endObject();\n- } else if (\"parents\".equals(groupBy)) {\n- builder.startObject(\"tasks\");\n- for (TaskGroup group : getTaskGroups()) {\n- builder.field(group.getTaskInfo().getTaskId().toString());\n- group.toXContent(builder, params);\n- }\n- builder.endObject();\n- }\n- return builder;\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksResponse.java", "status": "modified" }, { "diff": "@@ -56,15 +56,14 @@ public static long waitForCompletionTimeout(TimeValue timeout) {\n @Inject\n public TransportListTasksAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n TransportService transportService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, ListTasksAction.NAME, threadPool, clusterService, transportService, actionFilters,\n- indexNameExpressionResolver, ListTasksRequest::new, () -> new ListTasksResponse(clusterService.state().nodes()),\n- ThreadPool.Names.MANAGEMENT);\n+ super(settings, ListTasksAction.NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver,\n+ ListTasksRequest::new, ListTasksResponse::new, ThreadPool.Names.MANAGEMENT);\n }\n \n @Override\n protected ListTasksResponse newResponse(ListTasksRequest request, List<TaskInfo> tasks,\n List<TaskOperationFailure> taskOperationFailures, List<FailedNodeException> failedNodeExceptions) {\n- return new ListTasksResponse(tasks, taskOperationFailures, failedNodeExceptions, clusterService.state().nodes());\n+ return new ListTasksResponse(tasks, taskOperationFailures, failedNodeExceptions);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TransportListTasksAction.java", "status": "modified" }, { "diff": "@@ -19,9 +19,6 @@\n \n package org.elasticsearch.common.network;\n \n-import java.util.ArrayList;\n-import java.util.List;\n-\n import org.elasticsearch.action.support.replication.ReplicationTask;\n import org.elasticsearch.cluster.routing.allocation.command.AllocateEmptyPrimaryAllocationCommand;\n import org.elasticsearch.cluster.routing.allocation.command.AllocateReplicaAllocationCommand;\n@@ -33,6 +30,7 @@\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.inject.AbstractModule;\n import org.elasticsearch.common.inject.util.Providers;\n+import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n import 
org.elasticsearch.common.io.stream.NamedWriteableRegistry.Entry;\n import org.elasticsearch.common.io.stream.Writeable;\n import org.elasticsearch.common.settings.Setting;\n@@ -47,6 +45,9 @@\n import org.elasticsearch.transport.TransportService;\n import org.elasticsearch.transport.local.LocalTransport;\n \n+import java.util.ArrayList;\n+import java.util.List;\n+\n /**\n * A module to handle registering and binding all network related classes.\n */\n@@ -76,11 +77,11 @@ public class NetworkModule extends AbstractModule {\n private final ExtensionPoint.SelectedType<TransportService> transportServiceTypes = new ExtensionPoint.SelectedType<>(\"transport_service\", TransportService.class);\n private final ExtensionPoint.SelectedType<Transport> transportTypes = new ExtensionPoint.SelectedType<>(\"transport\", Transport.class);\n private final ExtensionPoint.SelectedType<HttpServerTransport> httpTransportTypes = new ExtensionPoint.SelectedType<>(\"http_transport\", HttpServerTransport.class);\n- private final List<Entry> namedWriteables = new ArrayList<>();\n+ private final List<NamedWriteableRegistry.Entry> namedWriteables = new ArrayList<>();\n \n /**\n * Creates a network module that custom networking classes can be plugged into.\n- * @param networkService A constructed network service object to bind.\n+ * @param networkService A constructed network service object to bind.\n * @param settings The settings for the node\n * @param transportClient True if only transport classes should be allowed to be registered, false otherwise.\n */\n@@ -90,8 +91,8 @@ public NetworkModule(NetworkService networkService, Settings settings, boolean t\n this.transportClient = transportClient;\n registerTransportService(\"default\", TransportService.class);\n registerTransport(LOCAL_TRANSPORT, LocalTransport.class);\n- registerTaskStatus(ReplicationTask.Status.NAME, ReplicationTask.Status::new);\n- registerTaskStatus(RawTaskStatus.NAME, RawTaskStatus::new);\n+ namedWriteables.add(new NamedWriteableRegistry.Entry(Task.Status.class, ReplicationTask.Status.NAME, ReplicationTask.Status::new));\n+ namedWriteables.add(new NamedWriteableRegistry.Entry(Task.Status.class, RawTaskStatus.NAME, RawTaskStatus::new));\n registerBuiltinAllocationCommands();\n }\n \n@@ -118,10 +119,6 @@ public void registerHttpTransport(String name, Class<? extends HttpServerTranspo\n httpTransportTypes.registerExtension(name, clazz);\n }\n \n- public void registerTaskStatus(String name, Writeable.Reader<? 
extends Task.Status> reader) {\n- namedWriteables.add(new Entry(Task.Status.class, name, reader));\n- }\n-\n /**\n * Register an allocation command.\n * <p>", "filename": "core/src/main/java/org/elasticsearch/common/network/NetworkModule.java", "status": "modified" }, { "diff": "@@ -19,9 +19,7 @@\n \n package org.elasticsearch.rest.action.admin.cluster;\n \n-import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksRequest;\n-import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksResponse;\n import org.elasticsearch.client.node.NodeClient;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Strings;\n@@ -31,11 +29,10 @@\n import org.elasticsearch.rest.RestChannel;\n import org.elasticsearch.rest.RestController;\n import org.elasticsearch.rest.RestRequest;\n-import org.elasticsearch.rest.action.RestToXContentListener;\n import org.elasticsearch.tasks.TaskId;\n \n import static org.elasticsearch.rest.RestRequest.Method.POST;\n-import static org.elasticsearch.rest.action.admin.cluster.RestListTasksAction.nodeSettingListener;\n+import static org.elasticsearch.rest.action.admin.cluster.RestListTasksAction.listTasksResponseListener;\n \n \n public class RestCancelTasksAction extends BaseRestHandler {\n@@ -61,8 +58,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n cancelTasksRequest.setNodesIds(nodesIds);\n cancelTasksRequest.setActions(actions);\n cancelTasksRequest.setParentTaskId(parentTaskId);\n- ActionListener<CancelTasksResponse> listener = nodeSettingListener(clusterService, new RestToXContentListener<>(channel));\n- client.admin().cluster().cancelTasks(cancelTasksRequest, listener);\n+ client.admin().cluster().cancelTasks(cancelTasksRequest, listTasksResponseListener(clusterService, channel));\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestCancelTasksAction.java", "status": "modified" }, { "diff": "@@ -28,10 +28,15 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.rest.BaseRestHandler;\n+import org.elasticsearch.rest.BytesRestResponse;\n import org.elasticsearch.rest.RestChannel;\n import org.elasticsearch.rest.RestController;\n import org.elasticsearch.rest.RestRequest;\n+import org.elasticsearch.rest.RestResponse;\n+import org.elasticsearch.rest.RestStatus;\n+import org.elasticsearch.rest.action.RestBuilderListener;\n import org.elasticsearch.rest.action.RestToXContentListener;\n import org.elasticsearch.tasks.TaskId;\n \n@@ -68,27 +73,30 @@ public static ListTasksRequest generateListTasksRequest(RestRequest request) {\n \n @Override\n public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) {\n- ActionListener<ListTasksResponse> listener = nodeSettingListener(clusterService, new RestToXContentListener<>(channel));\n- client.admin().cluster().listTasks(generateListTasksRequest(request), listener);\n+ client.admin().cluster().listTasks(generateListTasksRequest(request), listTasksResponseListener(clusterService, channel));\n }\n \n /**\n- * Wrap the normal channel listener in one that sets the discovery nodes on the response so we can support all of it's toXContent\n- * formats.\n+ * Standard listener for extensions of {@link 
ListTasksResponse} that supports {@code group_by=nodes}.\n */\n- public static <T extends ListTasksResponse> ActionListener<T> nodeSettingListener(ClusterService clusterService,\n- ActionListener<T> channelListener) {\n- return new ActionListener<T>() {\n- @Override\n- public void onResponse(T response) {\n- channelListener.onResponse(response);\n- }\n-\n- @Override\n- public void onFailure(Exception e) {\n- channelListener.onFailure(e);\n- }\n- };\n+ public static <T extends ListTasksResponse> ActionListener<T> listTasksResponseListener(ClusterService clusterService,\n+ RestChannel channel) {\n+ String groupBy = channel.request().param(\"group_by\", \"nodes\");\n+ if (\"nodes\".equals(groupBy)) {\n+ return new RestBuilderListener<T>(channel) {\n+ @Override\n+ public RestResponse buildResponse(T response, XContentBuilder builder) throws Exception {\n+ builder.startObject();\n+ response.toXContentGroupedByNode(builder, channel.request(), clusterService.state().nodes());\n+ builder.endObject();\n+ return new BytesRestResponse(RestStatus.OK, builder);\n+ }\n+ };\n+ } else if (\"parents\".equals(groupBy)) {\n+ return new RestToXContentListener<>(channel);\n+ } else {\n+ throw new IllegalArgumentException(\"[group_by] must be one of [nodes] or [parents] but was [\" + groupBy + \"]\");\n+ }\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestListTasksAction.java", "status": "modified" }, { "diff": "@@ -47,6 +47,7 @@\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.rest.action.admin.cluster.RestListTasksAction;\n import org.elasticsearch.tasks.Task;\n import org.elasticsearch.tasks.TaskId;\n import org.elasticsearch.tasks.TaskInfo;\n@@ -65,6 +66,7 @@\n import java.util.concurrent.ExecutionException;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicReference;\n+import java.util.function.Consumer;\n \n import static org.elasticsearch.action.support.PlainActionFuture.newFuture;\n import static org.hamcrest.Matchers.containsString;\n@@ -736,7 +738,7 @@ public void testTasksToXContentGrouping() throws Exception {\n ListTasksResponse response = testNodes[0].transportListTasksAction.execute(listTasksRequest).get();\n assertEquals(testNodes.length + 1, response.getTasks().size());\n \n- Map<String, Object> byNodes = serialize(response, new ToXContent.MapParams(Collections.singletonMap(\"group_by\", \"nodes\")));\n+ Map<String, Object> byNodes = serialize(response, true);\n byNodes = (Map<String, Object>) byNodes.get(\"nodes\");\n // One element on the top level\n assertEquals(testNodes.length, byNodes.size());\n@@ -750,7 +752,7 @@ public void testTasksToXContentGrouping() throws Exception {\n }\n \n // Group by parents\n- Map<String, Object> byParent = serialize(response, new ToXContent.MapParams(Collections.singletonMap(\"group_by\", \"parents\")));\n+ Map<String, Object> byParent = serialize(response, false);\n byParent = (Map<String, Object>) byParent.get(\"tasks\");\n // One element on the top level\n assertEquals(1, byParent.size()); // Only one top level task\n@@ -763,10 +765,15 @@ public void testTasksToXContentGrouping() throws Exception {\n }\n }\n \n- private Map<String, Object> serialize(ToXContent response, ToXContent.Params params) throws IOException {\n+ private Map<String, Object> serialize(ListTasksResponse response, boolean byParents) throws IOException 
{\n XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON);\n builder.startObject();\n- response.toXContent(builder, params);\n+ if (byParents) {\n+ DiscoveryNodes nodes = testNodes[0].clusterService.state().nodes();\n+ response.toXContentGroupedByNode(builder, ToXContent.EMPTY_PARAMS, nodes);\n+ } else {\n+ response.toXContentGroupedByParents(builder, ToXContent.EMPTY_PARAMS);\n+ }\n builder.endObject();\n builder.flush();\n logger.info(builder.string());", "filename": "core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TransportTasksActionTests.java", "status": "modified" }, { "diff": "@@ -19,20 +19,12 @@\n \n package org.elasticsearch.common.network;\n \n-import java.io.IOException;\n-import java.util.Collections;\n-\n-import org.elasticsearch.action.support.replication.ReplicationTask;\n import org.elasticsearch.client.node.NodeClient;\n import org.elasticsearch.common.Table;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.inject.ModuleTestCase;\n-import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n-import org.elasticsearch.common.io.stream.StreamInput;\n-import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.BoundTransportAddress;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.http.HttpInfo;\n import org.elasticsearch.http.HttpServerAdapter;\n import org.elasticsearch.http.HttpServerTransport;\n@@ -41,11 +33,12 @@\n import org.elasticsearch.rest.RestChannel;\n import org.elasticsearch.rest.RestRequest;\n import org.elasticsearch.rest.action.cat.AbstractCatAction;\n-import org.elasticsearch.tasks.Task;\n import org.elasticsearch.test.transport.AssertingLocalTransport;\n import org.elasticsearch.transport.Transport;\n import org.elasticsearch.transport.TransportService;\n \n+import java.util.Collections;\n+\n public class NetworkModuleTests extends ModuleTestCase {\n \n static class FakeTransportService extends TransportService {\n@@ -168,40 +161,4 @@ public void testRegisterHttpTransport() {\n assertNotBound(module, HttpServerTransport.class);\n assertFalse(module.isTransportClient());\n }\n-\n- public void testRegisterTaskStatus() {\n- Settings settings = Settings.EMPTY;\n- NetworkModule module = new NetworkModule(new NetworkService(settings, Collections.emptyList()), settings, false);\n- NamedWriteableRegistry registry = new NamedWriteableRegistry(module.getNamedWriteables());\n- assertFalse(module.isTransportClient());\n-\n- // Builtin reader comes back\n- assertNotNull(registry.getReader(Task.Status.class, ReplicationTask.Status.NAME));\n-\n- module.registerTaskStatus(DummyTaskStatus.NAME, DummyTaskStatus::new);\n- assertTrue(module.getNamedWriteables().stream().anyMatch(x -> x.name.equals(DummyTaskStatus.NAME)));\n- }\n-\n- private class DummyTaskStatus implements Task.Status {\n- public static final String NAME = \"dummy\";\n-\n- public DummyTaskStatus(StreamInput in) {\n- throw new UnsupportedOperationException(\"test\");\n- }\n-\n- @Override\n- public String getWriteableName() {\n- return NAME;\n- }\n-\n- @Override\n- public void writeTo(StreamOutput out) throws IOException {\n- throw new UnsupportedOperationException();\n- }\n-\n- @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- throw new UnsupportedOperationException();\n- }\n- }\n }", "filename": 
"core/src/test/java/org/elasticsearch/common/network/NetworkModuleTests.java", "status": "modified" }, { "diff": "@@ -20,29 +20,23 @@\n package org.elasticsearch.tasks;\n \n import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;\n-import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.test.ESTestCase;\n-import org.hamcrest.Matchers;\n \n-import java.util.Collections;\n+import static java.util.Collections.emptyList;\n+import static java.util.Collections.singletonList;\n \n public class ListTasksResponseTests extends ESTestCase {\n \n- public void testToStringNoTask() {\n- ListTasksResponse tasksResponse = new ListTasksResponse();\n- String string = tasksResponse.toString();\n- assertThat(string, Matchers.containsString(\"nodes\"));\n+ public void testEmptyToString() {\n+ assertEquals(\"{\\\"tasks\\\":{}}\", new ListTasksResponse().toString());\n }\n \n- public void testToString() {\n+ public void testNonEmptyToString() {\n TaskInfo info = new TaskInfo(\n new TaskId(\"node1\", 1), \"dummy-type\", \"dummy-action\", \"dummy-description\", null, 0, 1, true, new TaskId(\"node1\", 0));\n-\n- DiscoveryNodes nodes = DiscoveryNodes.builder().build();\n- ListTasksResponse tasksResponse = new ListTasksResponse(Collections.singletonList(info), Collections.emptyList(),\n- Collections.emptyList(), nodes);\n-\n- String string = tasksResponse.toString();\n- assertThat(string, Matchers.containsString(\"\\\"type\\\":\\\"dummy-type\\\"\"));\n+ ListTasksResponse tasksResponse = new ListTasksResponse(singletonList(info), emptyList(), emptyList());\n+ assertEquals(\"{\\\"tasks\\\":{\\\"node1:1\\\":{\\\"node\\\":\\\"node1\\\",\\\"id\\\":1,\\\"type\\\":\\\"dummy-type\\\",\\\"action\\\":\\\"dummy-action\\\",\"\n+ + \"\\\"description\\\":\\\"dummy-description\\\",\\\"start_time_in_millis\\\":0,\\\"running_time_in_nanos\\\":1,\\\"cancellable\\\":true,\"\n+ + \"\\\"parent_task_id\\\":\\\"node1:0\\\"}}}\", tasksResponse.toString());\n }\n }", "filename": "core/src/test/java/org/elasticsearch/tasks/ListTasksResponseTests.java", "status": "modified" }, { "diff": "@@ -21,11 +21,12 @@\n \n import org.elasticsearch.action.ActionRequest;\n import org.elasticsearch.action.ActionResponse;\n-import org.elasticsearch.common.network.NetworkModule;\n-import org.elasticsearch.plugins.ActionPlugin;\n+import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n import org.elasticsearch.common.settings.Setting;\n+import org.elasticsearch.plugins.ActionPlugin;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.rest.RestHandler;\n+import org.elasticsearch.tasks.Task;\n \n import java.util.Arrays;\n import java.util.List;\n@@ -43,16 +44,18 @@ public class ReindexPlugin extends Plugin implements ActionPlugin {\n new ActionHandler<>(RethrottleAction.INSTANCE, TransportRethrottleAction.class));\n }\n \n+ @Override\n+ public List<NamedWriteableRegistry.Entry> getNamedWriteables() {\n+ return singletonList(\n+ new NamedWriteableRegistry.Entry(Task.Status.class, BulkByScrollTask.Status.NAME, BulkByScrollTask.Status::new));\n+ }\n+\n @Override\n public List<Class<? 
extends RestHandler>> getRestHandlers() {\n return Arrays.asList(RestReindexAction.class, RestUpdateByQueryAction.class, RestDeleteByQueryAction.class,\n RestRethrottleAction.class);\n }\n \n- public void onModule(NetworkModule networkModule) {\n- networkModule.registerTaskStatus(BulkByScrollTask.Status.NAME, BulkByScrollTask.Status::new);\n- }\n-\n @Override\n public List<Setting<?>> getSettings() {\n return singletonList(TransportReindexAction.REMOTE_CLUSTER_WHITELIST);", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/ReindexPlugin.java", "status": "modified" }, { "diff": "@@ -19,8 +19,6 @@\n \n package org.elasticsearch.index.reindex;\n \n-import org.elasticsearch.action.ActionListener;\n-import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;\n import org.elasticsearch.client.node.NodeClient;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.inject.Inject;\n@@ -29,11 +27,10 @@\n import org.elasticsearch.rest.RestChannel;\n import org.elasticsearch.rest.RestController;\n import org.elasticsearch.rest.RestRequest;\n-import org.elasticsearch.rest.action.RestToXContentListener;\n import org.elasticsearch.tasks.TaskId;\n \n import static org.elasticsearch.rest.RestRequest.Method.POST;\n-import static org.elasticsearch.rest.action.admin.cluster.RestListTasksAction.nodeSettingListener;\n+import static org.elasticsearch.rest.action.admin.cluster.RestListTasksAction.listTasksResponseListener;\n \n public class RestRethrottleAction extends BaseRestHandler {\n private final ClusterService clusterService;\n@@ -56,7 +53,6 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n throw new IllegalArgumentException(\"requests_per_second is a required parameter\");\n }\n internalRequest.setRequestsPerSecond(requestsPerSecond);\n- ActionListener<ListTasksResponse> listener = nodeSettingListener(clusterService, new RestToXContentListener<>(channel));\n- client.execute(RethrottleAction.INSTANCE, internalRequest, listener);\n+ client.execute(RethrottleAction.INSTANCE, internalRequest, listTasksResponseListener(clusterService, channel));\n }\n }", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/RestRethrottleAction.java", "status": "modified" }, { "diff": "@@ -40,9 +40,8 @@ public class TransportRethrottleAction extends TransportTasksAction<BulkByScroll\n @Inject\n public TransportRethrottleAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n TransportService transportService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, RethrottleAction.NAME, threadPool, clusterService, transportService, actionFilters,\n- indexNameExpressionResolver, RethrottleRequest::new, () -> new ListTasksResponse(clusterService.state().nodes()),\n- ThreadPool.Names.MANAGEMENT);\n+ super(settings, RethrottleAction.NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver,\n+ RethrottleRequest::new, ListTasksResponse::new, ThreadPool.Names.MANAGEMENT);\n }\n \n @Override\n@@ -60,7 +59,7 @@ protected TaskInfo readTaskResponse(StreamInput in) throws IOException {\n @Override\n protected ListTasksResponse newResponse(RethrottleRequest request, List<TaskInfo> tasks,\n List<TaskOperationFailure> taskOperationFailures, List<FailedNodeException> failedNodeExceptions) {\n- return new ListTasksResponse(tasks, taskOperationFailures, failedNodeExceptions, 
clusterService.state().nodes());\n+ return new ListTasksResponse(tasks, taskOperationFailures, failedNodeExceptions);\n }\n \n @Override", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/TransportRethrottleAction.java", "status": "modified" }, { "diff": "@@ -28,14 +28,21 @@\n \n import static org.elasticsearch.test.ESIntegTestCase.Scope.SUITE;\n \n-@ClusterScope(scope = SUITE, transportClientRatio = 0)\n+/**\n+ * Base test case for integration tests against the reindex plugin.\n+ */\n+@ClusterScope(scope = SUITE)\n public abstract class ReindexTestCase extends ESIntegTestCase {\n-\n @Override\n protected Collection<Class<? extends Plugin>> nodePlugins() {\n return Arrays.asList(ReindexPlugin.class);\n }\n \n+ @Override\n+ protected Collection<Class<? extends Plugin>> transportClientPlugins() {\n+ return Arrays.asList(ReindexPlugin.class);\n+ }\n+\n protected ReindexRequestBuilder reindex() {\n return ReindexAction.INSTANCE.newRequestBuilder(client());\n }", "filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexTestCase.java", "status": "modified" } ] }
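The ReindexPlugin hunk above swaps the old `NetworkModule#registerTaskStatus` hook for `Plugin#getNamedWriteables`, which is what lets a transport client deserialize custom task statuses. As a rough sketch of that registration pattern from a plugin author's side, here is a hypothetical plugin with a made-up `MyStatus` task status; the class, its `processed` field, and the `my_status` name are illustrative and not part of the PR.

```
import org.elasticsearch.common.io.stream.NamedWriteableRegistry;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.tasks.Task;

import java.io.IOException;
import java.util.List;

import static java.util.Collections.singletonList;

// Hypothetical plugin mirroring the ReindexPlugin change above: task statuses are contributed
// through Plugin#getNamedWriteables so both node and transport clients can read them.
public class MyStatusPlugin extends Plugin {

    // Made-up Task.Status; a real status would carry whatever progress counters the task exposes.
    public static class MyStatus implements Task.Status {
        public static final String NAME = "my_status";
        private final long processed;

        public MyStatus(long processed) {
            this.processed = processed;
        }

        public MyStatus(StreamInput in) throws IOException {
            processed = in.readVLong();
        }

        @Override
        public String getWriteableName() {
            return NAME;
        }

        @Override
        public void writeTo(StreamOutput out) throws IOException {
            out.writeVLong(processed);
        }

        @Override
        public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
            return builder.startObject().field("processed", processed).endObject();
        }
    }

    @Override
    public List<NamedWriteableRegistry.Entry> getNamedWriteables() {
        // Same shape as the reindex registration: bind the status name to its StreamInput reader.
        return singletonList(new NamedWriteableRegistry.Entry(Task.Status.class, MyStatus.NAME, MyStatus::new));
    }
}
```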
{ "body": "We just overwrite `toString()` method so it calls toXContent\nwith `group_by` = \"whatever\" so we don't try to group by nodes\nwhich does not make sense in a toString() method.\n\nWe keep the old behavior for `toXContent()` method which\nmeans that there is no impact in the REST layer but\nonly in logs and tests (where we call `toString()`).\n\nCloses #19772.\n", "comments": [ { "body": "@dadoonet I think this change doesn't fix the issue but if prevents it from happening? I don't think there should be a state where we can't call toXContent in ListTasksResponse. So I think we should not merge this change. Rather open an issue and we try to fix the ListTasksResponse class to not have this invariant at all?\n", "created_at": "2016-08-03T10:23:09Z" }, { "body": "I was thinking the right way to fix this is to make the toXContent version\nthat requires the extra state a separate method that takes the state. If we\nwant it we can use it in the rest handler but for things like toString we\nuse the other version. No more member variable to set.\n\nI'm sorry for screwing the class up like that in the first place.\n\nOn Aug 3, 2016 7:16 AM, \"Simon Willnauer\" notifications@github.com wrote:\n\n> @dadoonet https://github.com/dadoonet I think this change doesn't fix\n> the issue but if prevents it from happening? I don't think there should be\n> a state where we can't call toXContent in ListTasksResponse. So I think we\n> should not merge this change. Rather open an issue and we try to fix the\n> ListTasksResponse class to not have this invariant at all?\n> \n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> https://github.com/elastic/elasticsearch/pull/19773#issuecomment-237200406,\n> or mute the thread\n> https://github.com/notifications/unsubscribe-auth/AANLooWS5TpGOxCD5YElkk3mshkF3ZLNks5qcG-dgaJpZM4Jbg7W\n> .\n", "created_at": "2016-08-03T11:53:44Z" }, { "body": "Thanks @nik9000! Just saw your comment after I pushed new changes.\nIndeed. I'm going to fix it the way you said. Much better than what I did so far.\n", "created_at": "2016-08-03T12:08:02Z" }, { "body": "> I'm going to fix it the way you said. Much better than what I did so far.\n\nAnd it only took me two months to come up with!\n", "created_at": "2016-08-03T12:11:45Z" }, { "body": "@nik9000 I came with something else. I hope this what you were expecting.\nREST tests are working fine locally.\n", "created_at": "2016-08-03T12:57:44Z" }, { "body": "I really think `setDiscoveryNodes` was a mistake on my part. I was kind of hoping you'd remove it with this PR and undo my mistake.\n", "created_at": "2016-08-03T23:20:37Z" }, { "body": "@nik9000 pushed a new commit. Let me know what you think.\n", "created_at": "2016-08-04T00:03:28Z" }, { "body": "@nik9000 I updated the PR and update with latest changes in master.\nNote that I had to add `builder.startObject();` and `builder.endObject();` because this is failing now in my tests.\nNote that it was not failing before I merged master branch into this branch.\n", "created_at": "2016-08-09T10:18:00Z" }, { "body": "After running tests, adding `builder.startObject();` was actually a bad idea... Need to fix that and send another commit.\n", "created_at": "2016-08-09T10:31:22Z" }, { "body": "Well apparently, it does not come from my PR. 
I added a simple call to toString() in master branch: \n\n```\nIndex: core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TransportTasksActionTests.java\nIDEA additional info:\nSubsystem: com.intellij.openapi.diff.impl.patch.CharsetEP\n<+>UTF-8\n===================================================================\n--- core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TransportTasksActionTests.java (revision af5fbcddfcd927838f0fc2e8ff4c25dfc23fcf92)\n+++ core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TransportTasksActionTests.java (revision )\n@@ -755,6 +755,9 @@\n assertEquals(1, otherNode.size()); // one tasks for the all other nodes\n }\n\n+ String s = response.toString();\n+ logger.error(\"{}\", s);\n+\n // Group by parents\n Map<String, Object> byParent = serialize(response, new ToXContent.MapParams(Collections.singletonMap(\"group_by\", \"parents\")));\n byParent = (Map<String, Object>) byParent.get(\"tasks\");\n```\n\nand I'm now getting:\n\n```\n[2016-08-09 12:38:01,957][ERROR][org.elasticsearch.action.admin.cluster.node.tasks] Error building toString out of XContent: com.fasterxml.jackson.core.JsonGenerationException: Can not write a field name, expecting a value\n at com.fasterxml.jackson.core.JsonGenerator._reportError(JsonGenerator.java:1886)\n at com.fasterxml.jackson.core.json.UTF8JsonGenerator.writeFieldName(UTF8JsonGenerator.java:185)\n at org.elasticsearch.common.xcontent.json.JsonXContentGenerator.writeFieldName(JsonXContentGenerator.java:159)\n at org.elasticsearch.common.xcontent.XContentBuilder.field(XContentBuilder.java:198)\n at org.elasticsearch.common.xcontent.XContentBuilder.startObject(XContentBuilder.java:145)\n at org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse.toXContent(ListTasksResponse.java:161)\n at org.elasticsearch.common.Strings.toString(Strings.java:901)\n at org.elasticsearch.common.Strings.toString(Strings.java:887)\n at org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse.toString(ListTasksResponse.java:208)\n at org.elasticsearch.action.admin.cluster.node.tasks.TransportTasksActionTests.testTasksToXContentGrouping(TransportTasksActionTests.java:758)\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at java.lang.reflect.Method.invoke(Method.java:497)\n at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)\n at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)\n at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)\n at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)\n at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)\n at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)\n at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)\n at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)\n at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)\n at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)\n at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)\n at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)\n at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)\n at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)\n at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)\n at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)\n at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)\n at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)\n at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)\n at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)\n at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)\n at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)\n at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)\n at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)\n at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)\n at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)\n at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)\n at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)\n at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)\n at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)\n at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)\n at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)\n at java.lang.Thread.run(Thread.java:745)\n```\n\n@nik9000 or others: Any idea what is causing that?\n", "created_at": "2016-08-09T10:41:31Z" }, { "body": "> Any idea what is causing that?\n\nSorry it took me so long to get back to you here! I _think_ this is caused by `ToXContent` having a super weak contract. Some implementers spit out the _contents_ of the object that they represent (anything extending from ActionResponse, QueryBuilders, etc) so they need `Strings.toString(this, true)`. Some implementers spit out a full object and they work with `String.toString(this)`.\n\nI got the sense that lots of implementers were actually the latter, but now when I looked around I keep seeing many of the former.\n\nUpshot: if you add a `toString` using `Strings.toString` you should add a test for the `toString` just to make sure it doesn't totally fail. 
I've not been good about that in the past.\n", "created_at": "2016-08-09T13:59:56Z" }, { "body": "@nik9000 I ran `git bisect` with a new test to find where this \"regression\" has been introduced:\n\n``` java\npublic class BisectTests extends ESTestCase {\n\n public void testToStringNoTask() {\n ListTasksResponse tasksResponse = new ListTasksResponse(Collections.emptyList(), Collections.emptyList(), Collections.emptyList());\n DiscoveryNodes nodes = DiscoveryNodes.builder().build();\n tasksResponse.setDiscoveryNodes(nodes);\n String string = tasksResponse.toString();\n assertThat(string, Matchers.containsString(\"nodes\"));\n }\n}\n```\n\nApparently this regression started from 841d5a210e12bc629336396893e18872d2b1f46f (see #18939).\n@tlrx might help then... :)\n\nAfter this commit, test gives:\n\n```\n 2> REPRODUCE WITH: gradle :core:test -Dtests.seed=D9546C36A833B4AA -Dtests.class=org.elasticsearch.tasks.BisectTests -Dtests.method=\"testToStringNoTask\" -Dtests.security.manager=true -Dtests.locale=sr-Latn-ME -Dtests.timezone=Pacific/Pohnpei\nFAILURE 0.27s | BisectTests.testToStringNoTask <<< FAILURES!\n > Throwable #1: java.lang.AssertionError: \n > Expected: a string containing \"nodes\"\n > but: was \"Error building toString out of XContent: com.fasterxml.jackson.core.JsonGenerationException: Can not write a field name, expecting a value\n > at com.fasterxml.jackson.core.JsonGenerator._reportError(JsonGenerator.java:1886)\n > at com.fasterxml.jackson.core.json.UTF8JsonGenerator.writeFieldName(UTF8JsonGenerator.java:185)\n > at org.elasticsearch.common.xcontent.json.JsonXContentGenerator.writeFieldName(JsonXContentGenerator.java:159)\n > at org.elasticsearch.common.xcontent.XContentBuilder.field(XContentBuilder.java:198)\n > at org.elasticsearch.common.xcontent.XContentBuilder.startObject(XContentBuilder.java:145)\n > at org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse.toXContent(ListTasksResponse.java:161)\n > at org.elasticsearch.common.Strings.toString(Strings.java:901)\n > at org.elasticsearch.common.Strings.toString(Strings.java:887)\n > at org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse.toString(ListTasksResponse.java:208)\n > at org.elasticsearch.tasks.BisectTests.testToStringNoTask(BisectTests.java:35)\n > at java.lang.Thread.run(Thread.java:745)\n > \"\n > at __randomizedtesting.SeedInfo.seed([D9546C36A833B4AA:A689B7410CCE13A8]:0)\n > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)\n > at org.elasticsearch.tasks.BisectTests.testToStringNoTask(BisectTests.java:36)\n > at java.lang.Thread.run(Thread.java:745)\n```\n", "created_at": "2016-08-09T14:09:00Z" }, { "body": "> Apparently this regression started from 841d5a2 (see #18939).\n> @tlrx might help then... :)\n\nThis is not a regression, Jackson 2.8.1 is stricter and does not allow to write a field name if no object has been started (ie. startObject() not called before .field()).\n\nIn your case it comes from the ToXContent.toString() method which just outputs the ToXContent object, and your object needs to wrapped into another object (I suppose ListTasksResponse.toXContent() method starts directly by writing field name).\n\nYou can use the static method `Strings.toString(tasksResponse, true)` instead of `tasksResponse.toString()`in your test. \n", "created_at": "2016-08-09T14:20:51Z" }, { "body": "I meant it's a regression in term of our code base. 
`toString()` call was working before the merge of #18939 and is not working properly after that.\nWe just not detected it (lack of tests).\n\nI can confirm that using:\n\n``` java\n @Override\n public String toString() {\n return Strings.toString(this, true);\n }\n```\n\nfixes it.\n", "created_at": "2016-08-09T14:29:27Z" }, { "body": "@nik9000 Thanks to @tlrx it's fixed now. Could you review the change then?\n", "created_at": "2016-08-09T14:35:27Z" }, { "body": "> Could you review the change then?\n\nSure! Thanks.\n", "created_at": "2016-08-09T17:04:51Z" }, { "body": "Awesome! Left a small tiny suggestion for improvement. If you can get it, great. If not, fine by me. This is much better. Thanks!\n\nLGTM\n", "created_at": "2016-08-09T17:07:39Z" }, { "body": "Awesome! LGTM\n", "created_at": "2016-08-09T17:54:22Z" } ], "number": 19773, "title": "Remove ListTasksResponse#setDiscoveryNodes()" }
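To make the failure mode in this thread concrete: many `ToXContent` implementations, `ListTasksResponse` included, emit only the fields of an object rather than a complete object, so rendering them at the top level trips Jackson 2.8's "Can not write a field name, expecting a value" check. A minimal, hypothetical fragment class showing the wrapping fix mentioned above (`Strings.toString(this, true)`):

```
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;

import java.io.IOException;

// Illustrative fragment-style ToXContent: it writes a named object but never opens an
// enclosing one, which is the same shape as ListTasksResponse#toXContent.
public class TasksFragment implements ToXContent {

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        builder.startObject("tasks"); // emits a field name, so a surrounding object must already be open
        builder.endObject();
        return builder;
    }

    @Override
    public String toString() {
        // Strings.toString(this) would start with a field name at the root and fail as in the
        // stack traces above; per the thread, passing true wraps the fragment in an object first.
        return Strings.toString(this, true);
    }
}
```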
{ "body": "Fixes reindex's status requests through the transport client and reworks `ListTasksResponse` so its `toString` isn't breaky with the transport client.\n\nReworks #19773\nCloses #19979\n", "number": 19997, "review_comments": [ { "body": "No code was added to this test class, only removed, so is this added import necessary?\n", "created_at": "2016-08-16T13:47:14Z" }, { "body": "Nevermind, I see, it was a remnant from the imports up top, just got shifted around\n", "created_at": "2016-08-16T13:47:41Z" } ], "title": "Fix reindex with transport client" }
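A sketch of what the rework looks like from a caller's point of view, mirroring the test helper in the diffs below: grouping by node now takes `DiscoveryNodes` explicitly instead of the response holding onto it, and grouping by parents needs no cluster state at all. The `render` helper and its parameters are illustrative only.

```
import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;

import java.io.IOException;

public class ListTasksRendering {

    // Render the response either per executing node (needs DiscoveryNodes) or per parent task.
    static String render(ListTasksResponse response, DiscoveryNodes nodes, boolean byNodes) throws IOException {
        XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON);
        builder.startObject();
        if (byNodes) {
            response.toXContentGroupedByNode(builder, ToXContent.EMPTY_PARAMS, nodes);
        } else {
            response.toXContentGroupedByParents(builder, ToXContent.EMPTY_PARAMS);
        }
        builder.endObject();
        return builder.string();
    }
}
```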
{ "commits": [ { "message": "Fix reindex under the transport client\n\nThe big change here is cleaning up the `TaskListResponse` so it doesn't\nhave a breaky `toString` implementation. That was causing the reindex\ntests to break.\n\nAlso removed `NetworkModule#registerTaskStatus` which is part of the\nPlugin API. Use `Plugin#getNamedWriteables` instead." } ], "files": [ { "diff": "@@ -22,7 +22,6 @@\n import org.elasticsearch.action.FailedNodeException;\n import org.elasticsearch.action.TaskOperationFailure;\n import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;\n-import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.tasks.TaskInfo;\n \n import java.util.List;\n@@ -35,13 +34,8 @@ public class CancelTasksResponse extends ListTasksResponse {\n public CancelTasksResponse() {\n }\n \n- public CancelTasksResponse(DiscoveryNodes discoveryNodes) {\n- super(discoveryNodes);\n- }\n-\n public CancelTasksResponse(List<TaskInfo> tasks, List<TaskOperationFailure> taskFailures, List<? extends FailedNodeException>\n- nodeFailures, DiscoveryNodes discoveryNodes) {\n- super(tasks, taskFailures, nodeFailures, discoveryNodes);\n+ nodeFailures) {\n+ super(tasks, taskFailures, nodeFailures);\n }\n-\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/cancel/CancelTasksResponse.java", "status": "modified" }, { "diff": "@@ -66,7 +66,7 @@ public TransportCancelTasksAction(Settings settings, ThreadPool threadPool, Clus\n TransportService transportService, ActionFilters actionFilters, IndexNameExpressionResolver\n indexNameExpressionResolver) {\n super(settings, CancelTasksAction.NAME, threadPool, clusterService, transportService, actionFilters,\n- indexNameExpressionResolver, CancelTasksRequest::new, () -> new CancelTasksResponse(clusterService.state().nodes()),\n+ indexNameExpressionResolver, CancelTasksRequest::new, CancelTasksResponse::new,\n ThreadPool.Names.MANAGEMENT);\n transportService.registerRequestHandler(BAN_PARENT_ACTION_NAME, BanParentTaskRequest::new, ThreadPool.Names.SAME, new\n BanParentRequestHandler());\n@@ -75,7 +75,7 @@ public TransportCancelTasksAction(Settings settings, ThreadPool threadPool, Clus\n @Override\n protected CancelTasksResponse newResponse(CancelTasksRequest request, List<TaskInfo> tasks, List<TaskOperationFailure>\n taskOperationFailures, List<FailedNodeException> failedNodeExceptions) {\n- return new CancelTasksResponse(tasks, taskOperationFailures, failedNodeExceptions, clusterService.state().nodes());\n+ return new CancelTasksResponse(tasks, taskOperationFailures, failedNodeExceptions);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/cancel/TransportCancelTasksAction.java", "status": "modified" }, { "diff": "@@ -51,21 +51,14 @@ public class ListTasksResponse extends BaseTasksResponse implements ToXContent {\n \n private List<TaskGroup> groups;\n \n- private final DiscoveryNodes discoveryNodes;\n-\n public ListTasksResponse() {\n- this(null, null, null, null);\n- }\n-\n- public ListTasksResponse(DiscoveryNodes discoveryNodes) {\n- this(null, null, null, discoveryNodes);\n+ this(null, null, null);\n }\n \n public ListTasksResponse(List<TaskInfo> tasks, List<TaskOperationFailure> taskFailures,\n- List<? extends FailedNodeException> nodeFailures, DiscoveryNodes discoveryNodes) {\n+ List<? extends FailedNodeException> nodeFailures) {\n super(taskFailures, nodeFailures);\n this.tasks = tasks == null ? 
Collections.emptyList() : Collections.unmodifiableList(new ArrayList<>(tasks));\n- this.discoveryNodes = discoveryNodes;\n }\n \n @Override\n@@ -90,6 +83,9 @@ public Map<String, List<TaskInfo>> getPerNodeTasks() {\n return perNodeTasks;\n }\n \n+ /**\n+ * Get the tasks found by this request grouped by parent tasks.\n+ */\n public List<TaskGroup> getTaskGroups() {\n if (groups == null) {\n buildTaskGroups();\n@@ -125,12 +121,76 @@ private void buildTaskGroups() {\n this.groups = Collections.unmodifiableList(topLevelTasks.stream().map(TaskGroup.Builder::build).collect(Collectors.toList()));\n }\n \n+ /**\n+ * Get the tasks found by this request.\n+ */\n public List<TaskInfo> getTasks() {\n return tasks;\n }\n \n+ /**\n+ * Convert this task response to XContent grouping by executing nodes.\n+ */\n+ public XContentBuilder toXContentGroupedByNode(XContentBuilder builder, Params params, DiscoveryNodes discoveryNodes)\n+ throws IOException {\n+ toXContentCommon(builder, params);\n+ builder.startObject(\"nodes\");\n+ for (Map.Entry<String, List<TaskInfo>> entry : getPerNodeTasks().entrySet()) {\n+ DiscoveryNode node = discoveryNodes.get(entry.getKey());\n+ builder.startObject(entry.getKey());\n+ if (node != null) {\n+ // If the node is no longer part of the cluster, oh well, we'll just skip it's useful information.\n+ builder.field(\"name\", node.getName());\n+ builder.field(\"transport_address\", node.getAddress().toString());\n+ builder.field(\"host\", node.getHostName());\n+ builder.field(\"ip\", node.getAddress());\n+\n+ builder.startArray(\"roles\");\n+ for (DiscoveryNode.Role role : node.getRoles()) {\n+ builder.value(role.getRoleName());\n+ }\n+ builder.endArray();\n+\n+ if (!node.getAttributes().isEmpty()) {\n+ builder.startObject(\"attributes\");\n+ for (Map.Entry<String, String> attrEntry : node.getAttributes().entrySet()) {\n+ builder.field(attrEntry.getKey(), attrEntry.getValue());\n+ }\n+ builder.endObject();\n+ }\n+ }\n+ builder.startObject(\"tasks\");\n+ for(TaskInfo task : entry.getValue()) {\n+ builder.field(task.getTaskId().toString());\n+ task.toXContent(builder, params);\n+ }\n+ builder.endObject();\n+ builder.endObject();\n+ }\n+ builder.endObject();\n+ return builder;\n+ }\n+\n+ /**\n+ * Convert this response to XContent grouping by parent tasks.\n+ */\n+ public XContentBuilder toXContentGroupedByParents(XContentBuilder builder, Params params) throws IOException {\n+ toXContentCommon(builder, params);\n+ builder.startObject(\"tasks\");\n+ for (TaskGroup group : getTaskGroups()) {\n+ builder.field(group.getTaskInfo().getTaskId().toString());\n+ group.toXContent(builder, params);\n+ }\n+ builder.endObject();\n+ return builder;\n+ }\n+\n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n+ return toXContentGroupedByParents(builder, params);\n+ }\n+\n+ private void toXContentCommon(XContentBuilder builder, Params params) throws IOException {\n if (getTaskFailures() != null && getTaskFailures().size() > 0) {\n builder.startArray(\"task_failures\");\n for (TaskOperationFailure ex : getTaskFailures()){\n@@ -150,51 +210,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n }\n builder.endArray();\n }\n- String groupBy = params.param(\"group_by\", \"nodes\");\n- if (\"nodes\".equals(groupBy)) {\n- builder.startObject(\"nodes\");\n- for (Map.Entry<String, List<TaskInfo>> entry : getPerNodeTasks().entrySet()) {\n- DiscoveryNode node = discoveryNodes.get(entry.getKey());\n- 
builder.startObject(entry.getKey());\n- if (node != null) {\n- // If the node is no longer part of the cluster, oh well, we'll just skip it's useful information.\n- builder.field(\"name\", node.getName());\n- builder.field(\"transport_address\", node.getAddress().toString());\n- builder.field(\"host\", node.getHostName());\n- builder.field(\"ip\", node.getAddress());\n-\n- builder.startArray(\"roles\");\n- for (DiscoveryNode.Role role : node.getRoles()) {\n- builder.value(role.getRoleName());\n- }\n- builder.endArray();\n-\n- if (!node.getAttributes().isEmpty()) {\n- builder.startObject(\"attributes\");\n- for (Map.Entry<String, String> attrEntry : node.getAttributes().entrySet()) {\n- builder.field(attrEntry.getKey(), attrEntry.getValue());\n- }\n- builder.endObject();\n- }\n- }\n- builder.startObject(\"tasks\");\n- for(TaskInfo task : entry.getValue()) {\n- builder.field(task.getTaskId().toString());\n- task.toXContent(builder, params);\n- }\n- builder.endObject();\n- builder.endObject();\n- }\n- builder.endObject();\n- } else if (\"parents\".equals(groupBy)) {\n- builder.startObject(\"tasks\");\n- for (TaskGroup group : getTaskGroups()) {\n- builder.field(group.getTaskInfo().getTaskId().toString());\n- group.toXContent(builder, params);\n- }\n- builder.endObject();\n- }\n- return builder;\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksResponse.java", "status": "modified" }, { "diff": "@@ -56,15 +56,14 @@ public static long waitForCompletionTimeout(TimeValue timeout) {\n @Inject\n public TransportListTasksAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n TransportService transportService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, ListTasksAction.NAME, threadPool, clusterService, transportService, actionFilters,\n- indexNameExpressionResolver, ListTasksRequest::new, () -> new ListTasksResponse(clusterService.state().nodes()),\n- ThreadPool.Names.MANAGEMENT);\n+ super(settings, ListTasksAction.NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver,\n+ ListTasksRequest::new, ListTasksResponse::new, ThreadPool.Names.MANAGEMENT);\n }\n \n @Override\n protected ListTasksResponse newResponse(ListTasksRequest request, List<TaskInfo> tasks,\n List<TaskOperationFailure> taskOperationFailures, List<FailedNodeException> failedNodeExceptions) {\n- return new ListTasksResponse(tasks, taskOperationFailures, failedNodeExceptions, clusterService.state().nodes());\n+ return new ListTasksResponse(tasks, taskOperationFailures, failedNodeExceptions);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TransportListTasksAction.java", "status": "modified" }, { "diff": "@@ -19,9 +19,6 @@\n \n package org.elasticsearch.common.network;\n \n-import java.util.ArrayList;\n-import java.util.List;\n-\n import org.elasticsearch.action.support.replication.ReplicationTask;\n import org.elasticsearch.cluster.routing.allocation.command.AllocateEmptyPrimaryAllocationCommand;\n import org.elasticsearch.cluster.routing.allocation.command.AllocateReplicaAllocationCommand;\n@@ -33,6 +30,7 @@\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.inject.AbstractModule;\n import org.elasticsearch.common.inject.util.Providers;\n+import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n import 
org.elasticsearch.common.io.stream.NamedWriteableRegistry.Entry;\n import org.elasticsearch.common.io.stream.Writeable;\n import org.elasticsearch.common.settings.Setting;\n@@ -47,6 +45,9 @@\n import org.elasticsearch.transport.TransportService;\n import org.elasticsearch.transport.local.LocalTransport;\n \n+import java.util.ArrayList;\n+import java.util.List;\n+\n /**\n * A module to handle registering and binding all network related classes.\n */\n@@ -76,11 +77,11 @@ public class NetworkModule extends AbstractModule {\n private final ExtensionPoint.SelectedType<TransportService> transportServiceTypes = new ExtensionPoint.SelectedType<>(\"transport_service\", TransportService.class);\n private final ExtensionPoint.SelectedType<Transport> transportTypes = new ExtensionPoint.SelectedType<>(\"transport\", Transport.class);\n private final ExtensionPoint.SelectedType<HttpServerTransport> httpTransportTypes = new ExtensionPoint.SelectedType<>(\"http_transport\", HttpServerTransport.class);\n- private final List<Entry> namedWriteables = new ArrayList<>();\n+ private final List<NamedWriteableRegistry.Entry> namedWriteables = new ArrayList<>();\n \n /**\n * Creates a network module that custom networking classes can be plugged into.\n- * @param networkService A constructed network service object to bind.\n+ * @param networkService A constructed network service object to bind.\n * @param settings The settings for the node\n * @param transportClient True if only transport classes should be allowed to be registered, false otherwise.\n */\n@@ -90,8 +91,8 @@ public NetworkModule(NetworkService networkService, Settings settings, boolean t\n this.transportClient = transportClient;\n registerTransportService(\"default\", TransportService.class);\n registerTransport(LOCAL_TRANSPORT, LocalTransport.class);\n- registerTaskStatus(ReplicationTask.Status.NAME, ReplicationTask.Status::new);\n- registerTaskStatus(RawTaskStatus.NAME, RawTaskStatus::new);\n+ namedWriteables.add(new NamedWriteableRegistry.Entry(Task.Status.class, ReplicationTask.Status.NAME, ReplicationTask.Status::new));\n+ namedWriteables.add(new NamedWriteableRegistry.Entry(Task.Status.class, RawTaskStatus.NAME, RawTaskStatus::new));\n registerBuiltinAllocationCommands();\n }\n \n@@ -118,10 +119,6 @@ public void registerHttpTransport(String name, Class<? extends HttpServerTranspo\n httpTransportTypes.registerExtension(name, clazz);\n }\n \n- public void registerTaskStatus(String name, Writeable.Reader<? 
extends Task.Status> reader) {\n- namedWriteables.add(new Entry(Task.Status.class, name, reader));\n- }\n-\n /**\n * Register an allocation command.\n * <p>", "filename": "core/src/main/java/org/elasticsearch/common/network/NetworkModule.java", "status": "modified" }, { "diff": "@@ -19,9 +19,7 @@\n \n package org.elasticsearch.rest.action.admin.cluster;\n \n-import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksRequest;\n-import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksResponse;\n import org.elasticsearch.client.node.NodeClient;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Strings;\n@@ -31,11 +29,10 @@\n import org.elasticsearch.rest.RestChannel;\n import org.elasticsearch.rest.RestController;\n import org.elasticsearch.rest.RestRequest;\n-import org.elasticsearch.rest.action.RestToXContentListener;\n import org.elasticsearch.tasks.TaskId;\n \n import static org.elasticsearch.rest.RestRequest.Method.POST;\n-import static org.elasticsearch.rest.action.admin.cluster.RestListTasksAction.nodeSettingListener;\n+import static org.elasticsearch.rest.action.admin.cluster.RestListTasksAction.listTasksResponseListener;\n \n \n public class RestCancelTasksAction extends BaseRestHandler {\n@@ -61,8 +58,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n cancelTasksRequest.setNodesIds(nodesIds);\n cancelTasksRequest.setActions(actions);\n cancelTasksRequest.setParentTaskId(parentTaskId);\n- ActionListener<CancelTasksResponse> listener = nodeSettingListener(clusterService, new RestToXContentListener<>(channel));\n- client.admin().cluster().cancelTasks(cancelTasksRequest, listener);\n+ client.admin().cluster().cancelTasks(cancelTasksRequest, listTasksResponseListener(clusterService, channel));\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestCancelTasksAction.java", "status": "modified" }, { "diff": "@@ -28,10 +28,15 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.rest.BaseRestHandler;\n+import org.elasticsearch.rest.BytesRestResponse;\n import org.elasticsearch.rest.RestChannel;\n import org.elasticsearch.rest.RestController;\n import org.elasticsearch.rest.RestRequest;\n+import org.elasticsearch.rest.RestResponse;\n+import org.elasticsearch.rest.RestStatus;\n+import org.elasticsearch.rest.action.RestBuilderListener;\n import org.elasticsearch.rest.action.RestToXContentListener;\n import org.elasticsearch.tasks.TaskId;\n \n@@ -68,27 +73,30 @@ public static ListTasksRequest generateListTasksRequest(RestRequest request) {\n \n @Override\n public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) {\n- ActionListener<ListTasksResponse> listener = nodeSettingListener(clusterService, new RestToXContentListener<>(channel));\n- client.admin().cluster().listTasks(generateListTasksRequest(request), listener);\n+ client.admin().cluster().listTasks(generateListTasksRequest(request), listTasksResponseListener(clusterService, channel));\n }\n \n /**\n- * Wrap the normal channel listener in one that sets the discovery nodes on the response so we can support all of it's toXContent\n- * formats.\n+ * Standard listener for extensions of {@link 
ListTasksResponse} that supports {@code group_by=nodes}.\n */\n- public static <T extends ListTasksResponse> ActionListener<T> nodeSettingListener(ClusterService clusterService,\n- ActionListener<T> channelListener) {\n- return new ActionListener<T>() {\n- @Override\n- public void onResponse(T response) {\n- channelListener.onResponse(response);\n- }\n-\n- @Override\n- public void onFailure(Exception e) {\n- channelListener.onFailure(e);\n- }\n- };\n+ public static <T extends ListTasksResponse> ActionListener<T> listTasksResponseListener(ClusterService clusterService,\n+ RestChannel channel) {\n+ String groupBy = channel.request().param(\"group_by\", \"nodes\");\n+ if (\"nodes\".equals(groupBy)) {\n+ return new RestBuilderListener<T>(channel) {\n+ @Override\n+ public RestResponse buildResponse(T response, XContentBuilder builder) throws Exception {\n+ builder.startObject();\n+ response.toXContentGroupedByNode(builder, channel.request(), clusterService.state().nodes());\n+ builder.endObject();\n+ return new BytesRestResponse(RestStatus.OK, builder);\n+ }\n+ };\n+ } else if (\"parents\".equals(groupBy)) {\n+ return new RestToXContentListener<>(channel);\n+ } else {\n+ throw new IllegalArgumentException(\"[group_by] must be one of [nodes] or [parents] but was [\" + groupBy + \"]\");\n+ }\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestListTasksAction.java", "status": "modified" }, { "diff": "@@ -47,6 +47,7 @@\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.rest.action.admin.cluster.RestListTasksAction;\n import org.elasticsearch.tasks.Task;\n import org.elasticsearch.tasks.TaskId;\n import org.elasticsearch.tasks.TaskInfo;\n@@ -65,6 +66,7 @@\n import java.util.concurrent.ExecutionException;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicReference;\n+import java.util.function.Consumer;\n \n import static org.elasticsearch.action.support.PlainActionFuture.newFuture;\n import static org.hamcrest.Matchers.containsString;\n@@ -736,7 +738,7 @@ public void testTasksToXContentGrouping() throws Exception {\n ListTasksResponse response = testNodes[0].transportListTasksAction.execute(listTasksRequest).get();\n assertEquals(testNodes.length + 1, response.getTasks().size());\n \n- Map<String, Object> byNodes = serialize(response, new ToXContent.MapParams(Collections.singletonMap(\"group_by\", \"nodes\")));\n+ Map<String, Object> byNodes = serialize(response, true);\n byNodes = (Map<String, Object>) byNodes.get(\"nodes\");\n // One element on the top level\n assertEquals(testNodes.length, byNodes.size());\n@@ -750,7 +752,7 @@ public void testTasksToXContentGrouping() throws Exception {\n }\n \n // Group by parents\n- Map<String, Object> byParent = serialize(response, new ToXContent.MapParams(Collections.singletonMap(\"group_by\", \"parents\")));\n+ Map<String, Object> byParent = serialize(response, false);\n byParent = (Map<String, Object>) byParent.get(\"tasks\");\n // One element on the top level\n assertEquals(1, byParent.size()); // Only one top level task\n@@ -763,10 +765,15 @@ public void testTasksToXContentGrouping() throws Exception {\n }\n }\n \n- private Map<String, Object> serialize(ToXContent response, ToXContent.Params params) throws IOException {\n+ private Map<String, Object> serialize(ListTasksResponse response, boolean byParents) throws IOException 
{\n XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON);\n builder.startObject();\n- response.toXContent(builder, params);\n+ if (byParents) {\n+ DiscoveryNodes nodes = testNodes[0].clusterService.state().nodes();\n+ response.toXContentGroupedByNode(builder, ToXContent.EMPTY_PARAMS, nodes);\n+ } else {\n+ response.toXContentGroupedByParents(builder, ToXContent.EMPTY_PARAMS);\n+ }\n builder.endObject();\n builder.flush();\n logger.info(builder.string());", "filename": "core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TransportTasksActionTests.java", "status": "modified" }, { "diff": "@@ -19,20 +19,12 @@\n \n package org.elasticsearch.common.network;\n \n-import java.io.IOException;\n-import java.util.Collections;\n-\n-import org.elasticsearch.action.support.replication.ReplicationTask;\n import org.elasticsearch.client.node.NodeClient;\n import org.elasticsearch.common.Table;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.inject.ModuleTestCase;\n-import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n-import org.elasticsearch.common.io.stream.StreamInput;\n-import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.BoundTransportAddress;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.http.HttpInfo;\n import org.elasticsearch.http.HttpServerAdapter;\n import org.elasticsearch.http.HttpServerTransport;\n@@ -41,11 +33,12 @@\n import org.elasticsearch.rest.RestChannel;\n import org.elasticsearch.rest.RestRequest;\n import org.elasticsearch.rest.action.cat.AbstractCatAction;\n-import org.elasticsearch.tasks.Task;\n import org.elasticsearch.test.transport.AssertingLocalTransport;\n import org.elasticsearch.transport.Transport;\n import org.elasticsearch.transport.TransportService;\n \n+import java.util.Collections;\n+\n public class NetworkModuleTests extends ModuleTestCase {\n \n static class FakeTransportService extends TransportService {\n@@ -168,40 +161,4 @@ public void testRegisterHttpTransport() {\n assertNotBound(module, HttpServerTransport.class);\n assertFalse(module.isTransportClient());\n }\n-\n- public void testRegisterTaskStatus() {\n- Settings settings = Settings.EMPTY;\n- NetworkModule module = new NetworkModule(new NetworkService(settings, Collections.emptyList()), settings, false);\n- NamedWriteableRegistry registry = new NamedWriteableRegistry(module.getNamedWriteables());\n- assertFalse(module.isTransportClient());\n-\n- // Builtin reader comes back\n- assertNotNull(registry.getReader(Task.Status.class, ReplicationTask.Status.NAME));\n-\n- module.registerTaskStatus(DummyTaskStatus.NAME, DummyTaskStatus::new);\n- assertTrue(module.getNamedWriteables().stream().anyMatch(x -> x.name.equals(DummyTaskStatus.NAME)));\n- }\n-\n- private class DummyTaskStatus implements Task.Status {\n- public static final String NAME = \"dummy\";\n-\n- public DummyTaskStatus(StreamInput in) {\n- throw new UnsupportedOperationException(\"test\");\n- }\n-\n- @Override\n- public String getWriteableName() {\n- return NAME;\n- }\n-\n- @Override\n- public void writeTo(StreamOutput out) throws IOException {\n- throw new UnsupportedOperationException();\n- }\n-\n- @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- throw new UnsupportedOperationException();\n- }\n- }\n }", "filename": 
"core/src/test/java/org/elasticsearch/common/network/NetworkModuleTests.java", "status": "modified" }, { "diff": "@@ -20,29 +20,23 @@\n package org.elasticsearch.tasks;\n \n import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;\n-import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.test.ESTestCase;\n-import org.hamcrest.Matchers;\n \n-import java.util.Collections;\n+import static java.util.Collections.emptyList;\n+import static java.util.Collections.singletonList;\n \n public class ListTasksResponseTests extends ESTestCase {\n \n- public void testToStringNoTask() {\n- ListTasksResponse tasksResponse = new ListTasksResponse();\n- String string = tasksResponse.toString();\n- assertThat(string, Matchers.containsString(\"nodes\"));\n+ public void testEmptyToString() {\n+ assertEquals(\"{\\\"tasks\\\":{}}\", new ListTasksResponse().toString());\n }\n \n- public void testToString() {\n+ public void testNonEmptyToString() {\n TaskInfo info = new TaskInfo(\n new TaskId(\"node1\", 1), \"dummy-type\", \"dummy-action\", \"dummy-description\", null, 0, 1, true, new TaskId(\"node1\", 0));\n-\n- DiscoveryNodes nodes = DiscoveryNodes.builder().build();\n- ListTasksResponse tasksResponse = new ListTasksResponse(Collections.singletonList(info), Collections.emptyList(),\n- Collections.emptyList(), nodes);\n-\n- String string = tasksResponse.toString();\n- assertThat(string, Matchers.containsString(\"\\\"type\\\":\\\"dummy-type\\\"\"));\n+ ListTasksResponse tasksResponse = new ListTasksResponse(singletonList(info), emptyList(), emptyList());\n+ assertEquals(\"{\\\"tasks\\\":{\\\"node1:1\\\":{\\\"node\\\":\\\"node1\\\",\\\"id\\\":1,\\\"type\\\":\\\"dummy-type\\\",\\\"action\\\":\\\"dummy-action\\\",\"\n+ + \"\\\"description\\\":\\\"dummy-description\\\",\\\"start_time_in_millis\\\":0,\\\"running_time_in_nanos\\\":1,\\\"cancellable\\\":true,\"\n+ + \"\\\"parent_task_id\\\":\\\"node1:0\\\"}}}\", tasksResponse.toString());\n }\n }", "filename": "core/src/test/java/org/elasticsearch/tasks/ListTasksResponseTests.java", "status": "modified" }, { "diff": "@@ -21,11 +21,12 @@\n \n import org.elasticsearch.action.ActionRequest;\n import org.elasticsearch.action.ActionResponse;\n-import org.elasticsearch.common.network.NetworkModule;\n-import org.elasticsearch.plugins.ActionPlugin;\n+import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n import org.elasticsearch.common.settings.Setting;\n+import org.elasticsearch.plugins.ActionPlugin;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.rest.RestHandler;\n+import org.elasticsearch.tasks.Task;\n \n import java.util.Arrays;\n import java.util.List;\n@@ -43,16 +44,18 @@ public class ReindexPlugin extends Plugin implements ActionPlugin {\n new ActionHandler<>(RethrottleAction.INSTANCE, TransportRethrottleAction.class));\n }\n \n+ @Override\n+ public List<NamedWriteableRegistry.Entry> getNamedWriteables() {\n+ return singletonList(\n+ new NamedWriteableRegistry.Entry(Task.Status.class, BulkByScrollTask.Status.NAME, BulkByScrollTask.Status::new));\n+ }\n+\n @Override\n public List<Class<? 
extends RestHandler>> getRestHandlers() {\n return Arrays.asList(RestReindexAction.class, RestUpdateByQueryAction.class, RestDeleteByQueryAction.class,\n RestRethrottleAction.class);\n }\n \n- public void onModule(NetworkModule networkModule) {\n- networkModule.registerTaskStatus(BulkByScrollTask.Status.NAME, BulkByScrollTask.Status::new);\n- }\n-\n @Override\n public List<Setting<?>> getSettings() {\n return singletonList(TransportReindexAction.REMOTE_CLUSTER_WHITELIST);", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/ReindexPlugin.java", "status": "modified" }, { "diff": "@@ -19,8 +19,6 @@\n \n package org.elasticsearch.index.reindex;\n \n-import org.elasticsearch.action.ActionListener;\n-import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;\n import org.elasticsearch.client.node.NodeClient;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.inject.Inject;\n@@ -29,11 +27,10 @@\n import org.elasticsearch.rest.RestChannel;\n import org.elasticsearch.rest.RestController;\n import org.elasticsearch.rest.RestRequest;\n-import org.elasticsearch.rest.action.RestToXContentListener;\n import org.elasticsearch.tasks.TaskId;\n \n import static org.elasticsearch.rest.RestRequest.Method.POST;\n-import static org.elasticsearch.rest.action.admin.cluster.RestListTasksAction.nodeSettingListener;\n+import static org.elasticsearch.rest.action.admin.cluster.RestListTasksAction.listTasksResponseListener;\n \n public class RestRethrottleAction extends BaseRestHandler {\n private final ClusterService clusterService;\n@@ -56,7 +53,6 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n throw new IllegalArgumentException(\"requests_per_second is a required parameter\");\n }\n internalRequest.setRequestsPerSecond(requestsPerSecond);\n- ActionListener<ListTasksResponse> listener = nodeSettingListener(clusterService, new RestToXContentListener<>(channel));\n- client.execute(RethrottleAction.INSTANCE, internalRequest, listener);\n+ client.execute(RethrottleAction.INSTANCE, internalRequest, listTasksResponseListener(clusterService, channel));\n }\n }", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/RestRethrottleAction.java", "status": "modified" }, { "diff": "@@ -40,9 +40,8 @@ public class TransportRethrottleAction extends TransportTasksAction<BulkByScroll\n @Inject\n public TransportRethrottleAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n TransportService transportService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, RethrottleAction.NAME, threadPool, clusterService, transportService, actionFilters,\n- indexNameExpressionResolver, RethrottleRequest::new, () -> new ListTasksResponse(clusterService.state().nodes()),\n- ThreadPool.Names.MANAGEMENT);\n+ super(settings, RethrottleAction.NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver,\n+ RethrottleRequest::new, ListTasksResponse::new, ThreadPool.Names.MANAGEMENT);\n }\n \n @Override\n@@ -60,7 +59,7 @@ protected TaskInfo readTaskResponse(StreamInput in) throws IOException {\n @Override\n protected ListTasksResponse newResponse(RethrottleRequest request, List<TaskInfo> tasks,\n List<TaskOperationFailure> taskOperationFailures, List<FailedNodeException> failedNodeExceptions) {\n- return new ListTasksResponse(tasks, taskOperationFailures, failedNodeExceptions, 
clusterService.state().nodes());\n+ return new ListTasksResponse(tasks, taskOperationFailures, failedNodeExceptions);\n }\n \n @Override", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/TransportRethrottleAction.java", "status": "modified" }, { "diff": "@@ -28,14 +28,21 @@\n \n import static org.elasticsearch.test.ESIntegTestCase.Scope.SUITE;\n \n-@ClusterScope(scope = SUITE, transportClientRatio = 0)\n+/**\n+ * Base test case for integration tests against the reindex plugin.\n+ */\n+@ClusterScope(scope = SUITE)\n public abstract class ReindexTestCase extends ESIntegTestCase {\n-\n @Override\n protected Collection<Class<? extends Plugin>> nodePlugins() {\n return Arrays.asList(ReindexPlugin.class);\n }\n \n+ @Override\n+ protected Collection<Class<? extends Plugin>> transportClientPlugins() {\n+ return Arrays.asList(ReindexPlugin.class);\n+ }\n+\n protected ReindexRequestBuilder reindex() {\n return ReindexAction.INSTANCE.newRequestBuilder(client());\n }", "filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexTestCase.java", "status": "modified" } ] }
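For completeness, a hedged usage sketch of what this PR enables: listing reindex tasks through the transport client. It assumes the usual 5.x `PreBuiltTransportClient` bootstrap, a node reachable on `localhost:9300`, and the `indices:data/write/reindex` action name; registering `ReindexPlugin` on the client is what makes `BulkByScrollTask.Status` readable from the responses.

```
import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.reindex.ReindexPlugin;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

import java.net.InetAddress;

public class ListReindexTasks {

    public static void main(String[] args) throws Exception {
        // Register ReindexPlugin on the client so its named writeables are known on this side too.
        try (PreBuiltTransportClient client = new PreBuiltTransportClient(Settings.EMPTY, ReindexPlugin.class)) {
            client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("localhost"), 9300));
            ListTasksResponse tasks = client.admin().cluster().prepareListTasks()
                    .setActions("indices:data/write/reindex")
                    .setDetailed(true)
                    .get();
            // toString now renders the tasks grouped by parent task without needing cluster state.
            System.out.println(tasks);
        }
    }
}
```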
{ "body": "**Elasticsearch version**: 2.3.3+, probably any version using ES_RESTART_ON_UPGRADE\n\n**Plugins installed**: [None]\n\n**JVM version**: 1.8.0_72\n\n**OS version**: Oracle Linux 7.2\n\n**Description of the problem including expected versus actual behavior**:\n\nThe environment file /etc/sysconfig/elasticsearch describes a parameter called \"ES_RESTART_ON_UPGRADE\". This file is sourced during the pre-install script of your RPM package. But: The script checks for a parameter called \"RESTART_ON_UPGRADE\" - the \"ES_\"-prefix is missing and the service won't be restarted during the upgrade process.\n\nWe tried copying the whole parameter in /etc/sysconfig/elasticsearch, renamed it to \"RESTART_ON_UPGRADE\" and did a re-install via yum. This time our Elasticsearch node was restarted as expected.\n\n**Steps to reproduce**:\n1. Install Elasticsearch from repository using yum.\n2. Set ES_RESTART_ON_UPGRADE in /etc/sysconfig/elasticsearch to true.\n3. Do a upgrade using using yum.\n4. See that Elasticsearch has not been restarted.\n", "comments": [ { "body": "thanks for reporting @Gacko - I've opened a PR #19976\n", "created_at": "2016-08-12T15:04:48Z" }, { "body": "I wasn't sure if this has to be fixed in the sysconfig or in the install script. I would have opened a PR otherwise. Thank you!\n", "created_at": "2016-08-12T21:05:08Z" } ], "number": 19950, "title": "ES_RESTART_ON_UPGRADE not respected during yum upgrade." }
{ "body": "The sysconfig file has the commented out environment variable `ES_RESTART_ON_UPGRADE`\nbut the install scripts refer to `RESTART_ON_UPGRADE` instead.\n\nThis commit fixes the sysconfig file.\n\nCloses #19950\n", "number": 19976, "review_comments": [], "title": "RESTART_ON_UPGRADE incorrectly named ES_RESTART_ON_UPGRADE in sysconfig" }
{ "commits": [ { "message": "RESTART_ON_UPGRADE incorrectly named ES_RESTART_ON_UPGRADE in sysconfig\n\nCloses #19950" } ], "files": [ { "diff": "@@ -24,7 +24,7 @@\n #ES_JAVA_OPTS=\n \n # Configure restart on package upgrade (true, every other setting will lead to not restarting)\n-#ES_RESTART_ON_UPGRADE=true\n+#RESTART_ON_UPGRADE=true\n \n ################################\n # Elasticsearch service", "filename": "distribution/src/main/packaging/env/elasticsearch", "status": "modified" } ] }
{ "body": "There is a lot of exception catching/masking of ACE by InternalEngine, this is **really bad** and will lead to even crashes of the JVM.\n\nHere is an example of what I mean: https://github.com/elastic/elasticsearch/blob/f6aeb35ce8244f4e60cb827cccb42a359f3a2862/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java#L564-L566\n\n```\n} catch (AlreadyClosedException e) {\n ensureOpen(); // possibly masks ACE by another exception!\n maybeFailEngine(\"refresh\", e); // definitely swallows ACE!\n}\n```\n\nPlease, do not catch this exception! It should only happen in truly exceptional circumstances, such as a bug in the code, and so on. Perhaps it should be treated similar to Error by elasticsearch.\n", "comments": [ { "body": "+1\n", "created_at": "2016-08-08T12:03:51Z" }, { "body": "@mikemccand could you tackle this please?\n", "created_at": "2016-08-12T10:59:45Z" }, { "body": "I think the simplest improvement would be to grep for `AlreadyClosedException` and remove all catch blocks of it everywhere. It may not fix all of the problems, but it will only help.\n\nI am concerned about the current situation but even more concerned about 5.0 which will use mmap more heavily (which is the right thing to do).\n\n@uschindler and I worked all week on improving the safety here in lucene, too to avoid crashes (https://issues.apache.org/jira/browse/LUCENE-7409) which should also make things better.\n", "created_at": "2016-08-12T13:25:38Z" }, { "body": "Keep in mind that AlreadyClosedException extends RuntimeException, so there might be cases where it is swallowed anyways.\n\nIn my personal opinion, we should change AlreadyClosedException to extend IOException in Lucene. Thats a separate issue, but making it extend RuntimeException/IllegalStateException seems bad.\n", "created_at": "2016-08-12T17:25:38Z" }, { "body": "> In my personal opinion, we should change AlreadyClosedException to extend IOException in Lucene. \n\nMaybe open a Lucene issue to discuss this? I'm not sure I agree :)\n\nI think it'd then be all too easy for apps to ignore `AlreadyClosedException` by accident (since nearly every Lucene API throws `IOException`) when in fact it's a serious bug in the application if this exception strikes.\n", "created_at": "2016-08-12T18:59:49Z" }, { "body": "Not only do i consider this a release blocker, but i think it needs to be a lucene-upgrade-blocker.\n\nWe need to stop suppressing the ACE ASAP, and get it into jenkins and work our way towards a failing test, and a fix, all before upgrading lucene.\n\nAs long as we are suppressing ACE, https://issues.apache.org/jira/browse/LUCENE-7409 will only make the situation worse (makes the bug more rare). \n\nI prefer SIGSEGV. Elasticsearch cannot stop it :)\n", "created_at": "2016-08-16T17:59:39Z" }, { "body": "Just to give some context on this issue and why I think it's so urgent:\n- There is a bug lurking somewhere in ES, where it tries to use an already closed Lucene `IndexReader`. We need to find it and fix it asap.\n- Lucene tries hard to detect when users do this, and throw an `AlreadyClosedException`, but this is only best effort, and it's really tricky (see https://issues.apache.org/jira/browse/LUCENE-7409, and even with that improvement I can easily provoke SIGSEGV when running the test case on my dev boxes). 
When Lucene's best effort check doesn't work, the JVM will likely halt with SIGSEGV when using `MMapDirectory` (the default in ES).\n- Today, ES can swallow this ACE, thus causing our tests to not \"notice\" there is even a problem, making it less evident. This PR is addressing this.\n\nWe really need to take `AlreadyClosedException` seriously...\n", "created_at": "2016-08-16T18:11:41Z" } ], "number": 19861, "title": "InternalEngine should not catch AlreadyClosedException" }
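To make the request concrete, here is a small sketch contrasting the pattern the issue quotes with the alternative it argues for. The class and method names are hypothetical stand-ins for the engine code, not the actual InternalEngine:

```
import org.apache.lucene.store.AlreadyClosedException;

// Hypothetical engine wrapper; doRefresh/ensureOpen/maybeFailEngine stand in for the real methods.
abstract class RefreshSketch {

    // The pattern the issue objects to: the ACE can be masked by whatever ensureOpen() throws,
    // and maybeFailEngine() does not rethrow it, so callers, logs and tests never see it.
    void refreshSwallowingAce(String source) {
        try {
            doRefresh(source);
        } catch (AlreadyClosedException e) {
            ensureOpen();
            maybeFailEngine("refresh", e);
        }
    }

    // The alternative argued for: do not catch AlreadyClosedException here at all. If it strikes,
    // it signals a bug (or a deliberately failed engine) and must stay visible.
    void refreshPropagatingAce(String source) {
        doRefresh(source);
    }

    abstract void doRefresh(String source);
    abstract void ensureOpen();
    abstract void maybeFailEngine(String where, Exception e);
}
```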
{ "body": "Catching and suppressing `AlreadyClosedException` from Lucene is dangerous because it can mean there is a bug in ES since ES should normally guard against invoking Lucene classes after they were closed.\n\nI reviewed the cases where we catch `AlreadyClosedException` from Lucene and removed the ones that I believe are not needed, or improved comments explaining why ACE is OK in that case.\n\nI think (@s1monw can you confirm?) that holding the engine's `readLock` means IW will not be closed, except if disaster strikes (`failEngine`) at which point I think it's fine to see the original ACE in the logs?\n\nCloses #19861\n", "number": 19975, "review_comments": [ { "body": "I'm not sure I buy the above explanation ;) Lucene's `DirectoryReader` itself already ignores `ACE` when trying to reclaim the pending deleted files. So I'm tempted to remove this catch clause ...\n", "created_at": "2016-08-12T14:21:16Z" }, { "body": "Do we still need to catch this and then immediately rethrow otherwise it will get eaten by the general catch block below?\n", "created_at": "2016-08-12T14:26:52Z" }, { "body": "Same comment here as above: do we still need to catch this and then immediately rethrow otherwise it will get eaten by the general catch block below?\n", "created_at": "2016-08-12T14:27:21Z" }, { "body": "This catch will now swallow an `AlreadyClosedException`!\n", "created_at": "2016-08-12T14:28:08Z" }, { "body": "Same comment here: do we still need to catch this and then immediately rethrow otherwise it will get eaten by the general catch block below?\n", "created_at": "2016-08-12T14:28:43Z" }, { "body": "Also, note that this refresh call can come from, for example, `InternalEngine#writeIndexingBuffer` from `IndexShard#writeIndexingBuffer` which has a general catch block which will swallow the `AlreadyClosedException`!\n", "created_at": "2016-08-12T14:36:35Z" }, { "body": "Shouldn't we still be failing the engine here?\n", "created_at": "2016-08-12T14:45:12Z" }, { "body": "Same question: shouldn't we still be failing the engine here?\n", "created_at": "2016-08-12T14:45:23Z" }, { "body": "I don't think `IW.rollback` throws `AlreadyClosedException`.\n", "created_at": "2016-08-12T14:59:05Z" }, { "body": "Okay!\n", "created_at": "2016-08-12T15:07:13Z" }, { "body": "Does this need to caught and wrapped then?\n", "created_at": "2016-08-12T15:45:26Z" }, { "body": "I'm curious about the removal of the `ensureOpen` calls; can you explain?\n", "created_at": "2016-08-12T15:46:58Z" }, { "body": "Well, where `IndexShard` calls this method, it expects/handles the `AlreadyClosedException` (`handleRefreshException` in `IndexShard.java`), and an ACE thrown from here can happen under normal usage (does not necessarily mean there's a bug).\n\n`super`'s javadocs also state that it can throw `AlreadyClosedException`.\n\nWe could alternatively acquire the `readLock`, in this method (and re-throw to `AssertionError`), but that's somewhat scary since `writeLock` can be held for quite some time, blocking `IndexingMemoryController` from polling the shards when one engine is flushing.\n", "created_at": "2016-08-12T16:12:20Z" }, { "body": "`ensureOpen()` can throw its own exception, masking the (unexpected) `AlreadyClosedException` we just hit. Also, except for disaster (failing engine), we should never have hit an `AlreadyClosedException` because the engine was closed in that try block since we hold the `readLock`. I think?\n", "created_at": "2016-08-12T16:13:58Z" }, { "body": "why?\n\nPlease, let it halt the jvm. 
the jvm will halt in a much nastier way if we do not resolve the bugs around this :)\n", "created_at": "2016-08-12T16:24:10Z" }, { "body": "Ähm, it could?\n", "created_at": "2016-08-12T17:22:30Z" }, { "body": "OK I'll revert the changes here. It is true that at defaults (`MMapDirectory`) we are already playing with fire on hitting an `AlreadyClosedException`.\n", "created_at": "2016-08-12T18:51:10Z" }, { "body": "I didn't think it would? It's not supposed to? If it does I think it's a Lucene bug? We have this comment in `IndexWriter.rollback`:\n\n```\n // don't call ensureOpen here: this acts like \"close()\" in closeable.\n```\n", "created_at": "2016-08-12T18:52:33Z" }, { "body": "I'm not sure I agree, do we really need to be potentially failing or removing from the in-sync replica set all the _other_ shards on this node in this situation?\n", "created_at": "2016-08-12T18:53:50Z" }, { "body": "dude. its going to crash the jvm if we do not find the cause of this bug! \n", "created_at": "2016-08-12T19:09:25Z" }, { "body": "this is the code that I mean, is it ok to concurrently close SM and call SM#maybeRefreshBlocking? if so we can ignore the ACE if ensureOpen throws an exception so I thin the code is good?\n", "created_at": "2016-08-16T20:00:33Z" }, { "body": "if we hit a merge exception in multiple threads we can call this multiple times? Why is it not ok to swallow this? do we need more assertions here or commments?\n", "created_at": "2016-08-16T20:01:39Z" }, { "body": "this one in the shadow engine shouldn't happen indeed\n", "created_at": "2016-08-16T20:02:20Z" } ], "title": "Don't suppress AlreadyClosedException" }
{ "commits": [ { "message": "don't suppress AlreadyClosedException: it means there's a bug somewhere" }, { "message": "Wrap unexpected AlreadyCloseException under AssertionError to prevent catch clauses from suppressing it" }, { "message": "Add comment explaining why we special case AssertionError(AlreadyClosedException))" }, { "message": "halt the JVM if an unexpected AlreadyClosedException strikes" }, { "message": "Merge branch 'master' into dont_catch_ace" }, { "message": "enforce that ACE is only handled in a tragic event" }, { "message": "special case forceMerge - here we can catch ACE safely if we are already closed" }, { "message": "add more comments to clarify what to do with ACE" } ], "files": [ { "diff": "@@ -591,7 +591,7 @@ public final boolean refreshNeeded() {\n the store is closed so we need to make sure we increment it here\n */\n try {\n- return !getSearcherManager().isSearcherCurrent();\n+ return getSearcherManager().isSearcherCurrent() == false;\n } catch (IOException e) {\n logger.error(\"failed to access searcher manager\", e);\n failEngine(\"failed to access searcher manager\", e);", "filename": "core/src/main/java/org/elasticsearch/index/engine/Engine.java", "status": "modified" }, { "diff": "@@ -59,9 +59,8 @@ public void close() {\n } catch (IOException e) {\n throw new IllegalStateException(\"Cannot close\", e);\n } catch (AlreadyClosedException e) {\n- /* this one can happen if we already closed the\n- * underlying store / directory and we call into the\n- * IndexWriter to free up pending files. */\n+ // This means there's a bug somewhere: don't suppress it\n+ throw new AssertionError(e);\n } finally {\n store.decRef();\n }", "filename": "core/src/main/java/org/elasticsearch/index/engine/EngineSearcher.java", "status": "modified" }, { "diff": "@@ -562,8 +562,8 @@ public void refresh(String source) throws EngineException {\n ensureOpen();\n searcherManager.maybeRefreshBlocking();\n } catch (AlreadyClosedException e) {\n- ensureOpen();\n- maybeFailEngine(\"refresh\", e);\n+ failOnTragicEvent(e);\n+ throw e;\n } catch (EngineClosedException e) {\n throw e;\n } catch (Exception e) {\n@@ -610,8 +610,8 @@ public void writeIndexingBuffer() throws EngineException {\n indexWriter.flush();\n }\n } catch (AlreadyClosedException e) {\n- ensureOpen();\n- maybeFailEngine(\"writeIndexingBuffer\", e);\n+ failOnTragicEvent(e);\n+ throw e;\n } catch (EngineClosedException e) {\n throw e;\n } catch (Exception e) {\n@@ -835,6 +835,14 @@ public void forceMerge(final boolean flush, int maxNumSegments, boolean onlyExpu\n } finally {\n store.decRef();\n }\n+ } catch (AlreadyClosedException ex) {\n+ /* in this case we first check if the engine is still open. If so this exception is just fine\n+ * and expected. We don't hold any locks while we block on forceMerge otherwise it would block\n+ * closing the engine as well. If we are not closed we pass it on to failOnTragicEvent which ensures\n+ * we are handling a tragic even exception here */\n+ ensureOpen();\n+ failOnTragicEvent(ex);\n+ throw ex;\n } catch (Exception e) {\n try {\n maybeFailEngine(\"force merge\", e);\n@@ -869,26 +877,35 @@ public IndexCommit acquireIndexCommit(final boolean flushFirst) throws EngineExc\n }\n }\n \n+ private void failOnTragicEvent(AlreadyClosedException ex) {\n+ // if we are already closed due to some tragic exception\n+ // we need to fail the engine. 
it might have already been failed before\n+ // but we are double-checking it's failed and closed\n+ if (indexWriter.isOpen() == false && indexWriter.getTragicException() != null) {\n+ final Exception tragedy = indexWriter.getTragicException() instanceof Exception ?\n+ (Exception) indexWriter.getTragicException() :\n+ new Exception(indexWriter.getTragicException());\n+ failEngine(\"already closed by tragic event on the index writer\", tragedy);\n+ } else if (translog.isOpen() == false && translog.getTragicException() != null) {\n+ failEngine(\"already closed by tragic event on the translog\", translog.getTragicException());\n+ } else {\n+ // this smells like a bug - we only expect ACE if we are in a fatal case ie. either translog or IW is closed by\n+ // a tragic event or has closed itself. if that is not the case we are in a buggy state and raise an assertion error\n+ throw new AssertionError(\"Unexpected AlreadyClosedException\", ex);\n+ }\n+ }\n+\n @Override\n protected boolean maybeFailEngine(String source, Exception e) {\n boolean shouldFail = super.maybeFailEngine(source, e);\n if (shouldFail) {\n return true;\n }\n-\n- // Check for AlreadyClosedException\n+ // Check for AlreadyClosedException -- ACE is a very special\n+ // exception that should only be thrown in a tragic event. we pass on the checks to failOnTragicEvent which will\n+ // throw and AssertionError if the tragic event condition is not met.\n if (e instanceof AlreadyClosedException) {\n- // if we are already closed due to some tragic exception\n- // we need to fail the engine. it might have already been failed before\n- // but we are double-checking it's failed and closed\n- if (indexWriter.isOpen() == false && indexWriter.getTragicException() != null) {\n- final Exception tragedy = indexWriter.getTragicException() instanceof Exception ?\n- (Exception) indexWriter.getTragicException() :\n- new Exception(indexWriter.getTragicException());\n- failEngine(\"already closed by tragic event on the index writer\", tragedy);\n- } else if (translog.isOpen() == false && translog.getTragicException() != null) {\n- failEngine(\"already closed by tragic event on the translog\", translog.getTragicException());\n- }\n+ failOnTragicEvent((AlreadyClosedException)e);\n return true;\n } else if (e != null &&\n ((indexWriter.isOpen() == false && indexWriter.getTragicException() == e)\n@@ -914,6 +931,7 @@ protected final void writerSegmentStats(SegmentsStats stats) {\n \n @Override\n public long getIndexBufferRAMBytesUsed() {\n+ // We don't guard w/ readLock here, so we could throw AlreadyClosedException\n return indexWriter.ramBytesUsed() + versionMap.ramBytesUsedForRefresh();\n }\n \n@@ -963,8 +981,9 @@ protected final void closeNoLock(String reason) {\n logger.trace(\"rollback indexWriter\");\n try {\n indexWriter.rollback();\n- } catch (AlreadyClosedException e) {\n- // ignore\n+ } catch (AlreadyClosedException ex) {\n+ failOnTragicEvent(ex);\n+ throw ex;\n }\n logger.trace(\"rollback indexWriter done\");\n } catch (Exception e) {", "filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java", "status": "modified" }, { "diff": "@@ -191,7 +191,8 @@ public void refresh(String source) throws EngineException {\n ensureOpen();\n searcherManager.maybeRefreshBlocking();\n } catch (AlreadyClosedException e) {\n- ensureOpen();\n+ // This means there's a bug somewhere: don't suppress it\n+ throw new AssertionError(e);\n } catch (EngineClosedException e) {\n throw e;\n } catch (Exception e) {", "filename": 
"core/src/main/java/org/elasticsearch/index/engine/ShadowEngine.java", "status": "modified" } ] }
{ "body": "Tested 1.7 vs 2.1. The following example:\n\n```\nPOST test/doc/1\n{\n \"text\": \"Score and explanation should match I think?\"\n}\n\nPOST test/doc/2\n{\n \"text\": \"Score and explanation did match in 1.7...\"\n}\n\n\nGET test/_search?search_type=dfs_query_then_fetch&explain\n{\n \"query\": {\n \"match\": {\n \"text\": \"score\"\n }\n }\n}\n```\n\nfor 1.7.0 returns:\n\n``````\n{\n \"took\": 48,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 5,\n \"successful\": 5,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 2,\n \"max_score\": 0.22295055,\n \"hits\": [\n {\n \"_shard\": 2,\n \"_node\": \"5IoTIXdmROKObfLRmqCQYQ\",\n \"_index\": \"test\",\n \"_type\": \"doc\",\n \"_id\": \"1\",\n \"_score\": 0.22295055,\n \"_source\": {\n \"text\": \"Score and explanation should match I think?\"\n },\n \"_explanation\": {\n \"value\": 0.22295056,\n \"description\": \"weight(text:score in 0) [PerFieldSimilarity], result of:\",\n \"details\": [\n {\n \"value\": 0.22295056,\n \"description\": \"score(doc=0,freq=1.0), product of:\",\n \"details\": ```\n {\n \"_shard\": 2,\n \"_node\": \"8AbQaexnTwKbvcL1Va4yfw\",\n \"_index\": \"test\",\n \"_type\": \"doc\",\n \"_id\": \"1\",\n \"_score\": 0.22295055,\n \"_source\": {\n \"text\": \"Score and explanation should match I think?\"\n },\n \"_explanation\": {\n \"value\": 0.22295056,\n \"description\": \"weight(text:score in 0) [PerFieldSimilarity], result of:\",[\n {\n \"value\": 0.99999994,\n \"description\": \"queryWeight, product of:\",\n \"details\": [\n {\n \"value\": 0.5945349,\n \"description\": \"idf(docFreq=2, maxDocs=2)\"\n },\n {\n \"value\": 1.681987,\n \"description\": \"queryNorm\"\n }\n ]\n },\n {\n \"value\": 0.22295058,\n \"description\": \"fieldWeight in 0, product of:\",\n \"details\": [\n {\n \"value\": 1,\n \"description\": \"tf(freq=1.0), with freq of:\",\n \"details\": [\n {\n \"value\": 1,\n \"description\": \"termFreq=1.0\"\n }\n ]\n },\n {\n \"value\": 0.5945349,\n \"description\": \"idf(docFreq=2, maxDocs=2)\"\n },\n {\n \"value\": 0.375,\n \"description\": \"fieldNorm(doc=0)\"\n }\n ]\n }\n ]\n }\n ]\n }\n },\n {\n \"_shard\": 3,\n \"_node\": \"5IoTIXdmROKObfLRmqCQYQ\",\n \"_index\": \"test\",\n \"_type\": \"doc\",\n \"_id\": \"2\",\n \"_score\": 0.22295055,\n \"_source\": {\n \"text\": \"Score and explanation did match in 1.7...\"\n },\n \"_explanation\": {\n \"value\": 0.22295056,\n \"description\": \"weight(text:score in 0) [PerFieldSimilarity], result of:\",\n \"details\": [\n {\n \"value\": 0.22295056,\n \"description\": \"score(doc=0,freq=1.0), product of:\",\n \"details\": [\n {\n \"value\": 0.99999994,\n \"description\": \"queryWeight, product of:\",\n \"details\": [\n {\n \"value\": 0.5945349,\n \"description\": \"idf(docFreq=2, maxDocs=2)\"\n },\n {\n \"value\": 1.681987,\n \"description\": \"queryNorm\"\n }\n ]\n },\n {\n \"value\": 0.22295058,\n \"description\": \"fieldWeight in 0, product of:\",\n \"details\": [\n {\n \"value\": 1,\n \"description\": \"tf(freq=1.0), with freq of:\",\n \"details\": [\n {\n \"value\": 1,\n \"description\": \"termFreq=1.0\"\n }\n ]\n },\n {\n \"value\": 0.5945349,\n \"description\": \"idf(docFreq=2, maxDocs=2)\"\n },\n {\n \"value\": 0.375,\n \"description\": \"fieldNorm(doc=0)\"\n }\n ]\n }\n ]\n }\n ]\n }\n }\n ]\n }\n}\n\n``````\n\nand in 2.1:\n\n```\n{\n \"took\": 46,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 5,\n \"successful\": 5,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 2,\n \"max_score\": 0.22295055,\n \"hits\": [\n {\n \"_shard\": 2,\n \"_node\": 
\"hcC0CtAfTSSILbEq32T_fw\",\n \"_index\": \"test\",\n \"_type\": \"doc\",\n \"_id\": \"2\",\n \"_score\": 0.22295055,\n \"_source\": {\n \"text\": \"Score and explanation did match in 1.7...\"\n },\n \"_explanation\": {\n \"value\": 0.11506981,\n \"description\": \"weight(text:score in 0) [PerFieldSimilarity], result of:\",\n \"details\": [\n {\n \"value\": 0.11506981,\n \"description\": \"fieldWeight in 0, product of:\",\n \"details\": [\n {\n \"value\": 1,\n \"description\": \"tf(freq=1.0), with freq of:\",\n \"details\": [\n {\n \"value\": 1,\n \"description\": \"termFreq=1.0\",\n \"details\": []\n }\n ]\n },\n {\n \"value\": 0.30685282,\n \"description\": \"idf(docFreq=1, maxDocs=1)\",\n \"details\": []\n },\n {\n \"value\": 0.375,\n \"description\": \"fieldNorm(doc=0)\",\n \"details\": []\n }\n ]\n }\n ]\n }\n },\n {\n \"_shard\": 3,\n \"_node\": \"hcC0CtAfTSSILbEq32T_fw\",\n \"_index\": \"test\",\n \"_type\": \"doc\",\n \"_id\": \"1\",\n \"_score\": 0.22295055,\n \"_source\": {\n \"text\": \"Score and explanation should match I think?\"\n },\n \"_explanation\": {\n \"value\": 0.11506981,\n \"description\": \"weight(text:score in 0) [PerFieldSimilarity], result of:\",\n \"details\": [\n {\n \"value\": 0.11506981,\n \"description\": \"fieldWeight in 0, product of:\",\n \"details\": [\n {\n \"value\": 1,\n \"description\": \"tf(freq=1.0), with freq of:\",\n \"details\": [\n {\n \"value\": 1,\n \"description\": \"termFreq=1.0\",\n \"details\": []\n }\n ]\n },\n {\n \"value\": 0.30685282,\n \"description\": \"idf(docFreq=1, maxDocs=1)\",\n \"details\": []\n },\n {\n \"value\": 0.375,\n \"description\": \"fieldNorm(doc=0)\",\n \"details\": []\n }\n ]\n }\n ]\n }\n }\n ]\n }\n}\n\n```\n", "comments": [ { "body": "@brwe Can you explain what \"doesn't match\"? The explanations and scores look the same to me, but maybe I'm just not seeing it in the page-of-json. :)\n", "created_at": "2015-12-10T19:21:21Z" }, { "body": "@rjernst I wondered the same but I think I see it now: in the 2.1 output `_score` is different from `_explanation.value`.\n", "created_at": "2015-12-10T21:40:57Z" }, { "body": "Related to https://github.com/elastic/elasticsearch/issues/2612?\n", "created_at": "2015-12-11T09:30:34Z" }, { "body": "I have the same issue. Any clue on how to fix or a workaround?\n", "created_at": "2016-01-07T23:39:33Z" }, { "body": "I can confirm that this still happens in `2.3`. \n\nLooks like when using dfs, the explain ignores the fact that we need to merge the results from the information of the shards. The explain value when using `dfs` is the same score that you get if you do not use `dfs`, but instead just query then fetch.\n", "created_at": "2016-05-06T13:37:21Z" }, { "body": "```\n \"_score\": 0.2202036, <---------- dfs score\n \"_source\": {\n \"city\": \"montevideo\",\n \"cityAliases\": \"mvd\"\n },\n \"_explanation\": {\n \"value\": 0.2417773, <----------- the score for non-dfg\n \"description\": \"max of:\",\n \"details\": [\n {\n \"value\": 0.2417773,\n```\n", "created_at": "2016-05-06T14:04:01Z" }, { "body": "Are there any test cases that verifies that the explanation root value is equals to the hit score? \n", "created_at": "2016-05-06T14:09:49Z" } ], "number": 15369, "title": "score and explanation for dfs queries don't match " }
{ "body": "ContextIndexSearcher#explain ignores the dfs data to create the normalized weight.\nThis change fixes this discrepancy by using the dfs data to create the normalized weight when needed.\n\nFixes #15369\n", "number": 19972, "review_comments": [ { "body": "this last line is redundant I think, we already check the same above\n", "created_at": "2016-08-12T08:39:47Z" }, { "body": "should this be moved up?\n", "created_at": "2016-08-12T08:40:06Z" }, { "body": "should this be moved up?\n", "created_at": "2016-08-12T08:43:29Z" }, { "body": "should we add the assertions to other methods in this class that use dfs?\n", "created_at": "2016-08-12T08:43:56Z" }, { "body": "ignore this, sorry for the noise :)\n", "created_at": "2016-08-12T08:44:44Z" }, { "body": ";)\n", "created_at": "2016-08-12T08:47:09Z" } ], "title": "Fix explain output for dfs query" }
{ "commits": [ { "message": "Fix explain output for dfs query\n\nContextIndexSearcher#explain ignores the dfs data to create the normalized weight.\nThis change fixes this discrepancy by using the dfs data to create the normalized weight when needed." } ], "files": [ { "diff": "@@ -133,6 +133,10 @@ public Weight createWeight(Query query, boolean needsScores) throws IOException\n \n @Override\n public Explanation explain(Query query, int doc) throws IOException {\n+ if (aggregatedDfs != null) {\n+ // dfs data is needed to explain the score\n+ return super.explain(createNormalizedWeight(query, true), doc);\n+ }\n return in.explain(query, doc);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/internal/ContextIndexSearcher.java", "status": "modified" }, { "diff": "@@ -60,6 +60,7 @@\n import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n import static org.elasticsearch.search.builder.SearchSourceBuilder.searchSource;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n+import static org.hamcrest.Matchers.endsWith;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.lessThanOrEqualTo;\n@@ -147,6 +148,10 @@ public void testDfsQueryThenFetch() throws Exception {\n for (int i = 0; i < hits.length; ++i) {\n SearchHit hit = hits[i];\n assertThat(hit.explanation(), notNullValue());\n+ assertThat(hit.explanation().getDetails().length, equalTo(1));\n+ assertThat(hit.explanation().getDetails()[0].getDetails().length, equalTo(2));\n+ assertThat(hit.explanation().getDetails()[0].getDetails()[0].getDescription(),\n+ endsWith(\"idf(docFreq=100, docCount=100)\"));\n assertThat(\"id[\" + hit.id() + \"] -> \" + hit.explanation().toString(), hit.id(), equalTo(Integer.toString(100 - total - i - 1)));\n }\n total += hits.length;\n@@ -171,6 +176,10 @@ public void testDfsQueryThenFetchWithSort() throws Exception {\n for (int i = 0; i < hits.length; ++i) {\n SearchHit hit = hits[i];\n assertThat(hit.explanation(), notNullValue());\n+ assertThat(hit.explanation().getDetails().length, equalTo(1));\n+ assertThat(hit.explanation().getDetails()[0].getDetails().length, equalTo(2));\n+ assertThat(hit.explanation().getDetails()[0].getDetails()[0].getDescription(),\n+ endsWith(\"idf(docFreq=100, docCount=100)\"));\n assertThat(\"id[\" + hit.id() + \"]\", hit.id(), equalTo(Integer.toString(total + i)));\n }\n total += hits.length;\n@@ -317,6 +326,10 @@ public void testDfsQueryAndFetch() throws Exception {\n SearchHit hit = searchResponse.getHits().hits()[i];\n // System.out.println(hit.shard() + \": \" + hit.explanation());\n assertThat(hit.explanation(), notNullValue());\n+ assertThat(hit.explanation().getDetails().length, equalTo(1));\n+ assertThat(hit.explanation().getDetails()[0].getDetails().length, equalTo(2));\n+ assertThat(hit.explanation().getDetails()[0].getDetails()[0].getDescription(),\n+ endsWith(\"idf(docFreq=100, docCount=100)\"));\n // assertThat(\"id[\" + hit.id() + \"]\", hit.id(), equalTo(Integer.toString(100 - i - 1)));\n assertThat(\"make sure we don't have duplicates\", expectedIds.remove(hit.id()), notNullValue());\n }", "filename": "core/src/test/java/org/elasticsearch/search/basic/TransportTwoNodesSearchIT.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**:\n5.0.0-alpha4\n\n**JVM version**:\nOracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_101/25.101-b13\n\n**OS version**:\nLinux 107r01pc 4.2.0-42-generic #49~14.04.1-Ubuntu SMP Wed Jun 29 20:22:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux\n\n**Description of the problem including expected versus actual behavior**:\nIn a groovy scripted update asserting while referencing variables and specifying a message will cause a class not found exception.\n\n**Steps to reproduce**:\n1. on a clean elasticsearch enable scripting\n2. run some curl commands, see below\n3. observe the results\n\n**Various commands**:\n1. `curl -XPOST 'http://localhost:9200/a/a/a/_update' -d '{\"scripted_upsert\": true, \"script\": {\"inline\": \"assert false, \\\"foo\\\";\"}, \"upsert\": {}}'`\n - `{\"error\":{\"root_cause\":[{\"type\":\"remote_transport_exception\",\"reason\":\"[Vertigo][127.0.0.1:9300][indices:data/write/update[s]]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"failed to execute script\",\"caused_by\":{\"type\":\"script_exception\",\"reason\":\"Error evaluating assert false, \\\"foo\\\";\",\"caused_by\":{\"type\":\"assertion_error\",\"reason\":\"foo. Expression: false\"},\"script_stack\":[],\"script\":\"\",\"lang\":\"groovy\"}},\"status\":400}`\n2. `curl -XPOST 'http://localhost:9200/a/a/b/_update' -d '{\"scripted_upsert\": true, \"script\": {\"inline\": \"def bar=false; assert bar, \\\"foo\\\";\"}, \"upsert\": {}}'`\n - `{\"error\":{\"root_cause\":[{\"type\":\"remote_transport_exception\",\"reason\":\"[Vertigo][127.0.0.1:9300][indices:data/write/update[s]]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"failed to execute script\",\"caused_by\":{\"type\":\"script_exception\",\"reason\":\"Error evaluating def bar=false; assert bar, \\\"foo\\\";\",\"caused_by\":{\"type\":\"no_class_def_found_error\",\"reason\":\"java/lang/StringBuffer\"},\"script_stack\":[],\"script\":\"\",\"lang\":\"groovy\"}},\"status\":400}`\n3. `curl -XPOST 'http://localhost:9200/a/a/b/_update' -d '{\"scripted_upsert\": true, \"script\": {\"inline\": \"def bar=false; assert bar;\"}, \"upsert\": {}}'`\n - `{\"error\":{\"root_cause\":[{\"type\":\"remote_transport_exception\",\"reason\":\"[Vertigo][127.0.0.1:9300][indices:data/write/update[s]]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"failed to execute script\",\"caused_by\":{\"type\":\"script_exception\",\"reason\":\"Error evaluating def bar=false; assert bar;\",\"caused_by\":{\"type\":\"power_assertion_error\",\"reason\":\"assert bar\\n |\\n false\"},\"script_stack\":[],\"script\":\"\",\"lang\":\"groovy\"}},\"status\":400}`\n\nSomewhat more clearly:\n1. using `assert false, \"message\"` works as expected\n2. using `assert var, \"message\"` throws a class not found exception somewhere down the stack\n3. using `assert var` works as expected\n\nI've tried this on a more recent snapshot (34bb1508637368c43b792992646a612bb8022e99) and the assert shows up in the log, but curl never prints anything nor exists, as if no response is generated.\nThis is the log:\n\n```\n[2016-08-04 18:07:53,886][ERROR][bootstrap ] [] fatal error in thread [elasticsearch[0ZFv9t-][index][T#1]], exiting\njava.lang.AssertionError: foo. 
Expression: false\n at org.codehaus.groovy.runtime.InvokerHelper.assertFailed(InvokerHelper.java:404)\n at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.assertFailed(ScriptBytecodeAdapter.java:650)\n at 20a61a940854553b62db2cfb00cb1511e91aac90.run(20a61a940854553b62db2cfb00cb1511e91aac90:1)\n at java.security.AccessController.doPrivileged(Native Method)\n at org.elasticsearch.script.groovy.GroovyScriptEngineService$GroovyScript.run(GroovyScriptEngineService.java:295)\n at org.elasticsearch.action.update.UpdateHelper.executeScript(UpdateHelper.java:253)\n at org.elasticsearch.action.update.UpdateHelper.prepare(UpdateHelper.java:103)\n at org.elasticsearch.action.update.UpdateHelper.prepare(UpdateHelper.java:80)\n at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:179)\n at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:172)\n at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:69)\n at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$ShardTransportHandler.messageReceived(TransportInstanceSingleOperationAction.java:247)\n at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$ShardTransportHandler.messageReceived(TransportInstanceSingleOperationAction.java:243)\n at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)\n at org.elasticsearch.transport.TransportService$5.doRun(TransportService.java:517)\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:510)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n\nException: java.security.AccessControlException thrown from the UncaughtExceptionHandler in thread \"elasticsearch[0ZFv9t-][index][T#1]\"\n```\n", "comments": [ { "body": "@SweetNSourPavement I was able to reproduce this, this looks pretty serious, it causes a thread to essentially hang forever on master. I'll work on a fix.\n", "created_at": "2016-08-08T21:47:40Z" } ], "number": 19806, "title": "Groovy assert misses StringBuffer" }
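A minimal, generic sketch of the defensive pattern this report argues for: treat `AssertionError` and `NoClassDefFoundError` escaping a user script as script failures instead of letting them reach the uncaught-exception handler. The class and exception names below are invented, and the eventual fix (see the PR that follows) is stricter, wrapping only asserts that demonstrably originate in Groovy's `InvokerHelper`.

```java
import java.util.concurrent.Callable;

class ScriptRunnerSketch {
    static class ScriptFailedException extends RuntimeException {
        ScriptFailedException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    static Object run(Callable<Object> compiledScript, String scriptName) {
        try {
            return compiledScript.call();
        } catch (AssertionError | NoClassDefFoundError e) {
            // Groovy asserts are always enabled and surface as Errors, not Exceptions
            throw new ScriptFailedException("Error evaluating " + scriptName, e);
        } catch (Exception e) {
            throw new ScriptFailedException("Error evaluating " + scriptName, e);
        }
    }
}
```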
{ "body": "Previously, we only caught subclasses of Exception, however, there are\nsome cases when Errors are thrown instead of Exceptions. These two cases\nare `assert` and when a class cannot be found.\n\nWithout this change, the exception would bubble up to the\n`uncaughtExceptionHandler`, which would in turn, exit the JVM\n(related: #19923).\n\nA note of difference between regular Java asserts and Groovy asserts,\nfrom http://docs.groovy-lang.org/docs/latest/html/documentation/core-testing-guide.html\n\n\"Another important difference from Java is that in Groovy assertions are\nenabled by default. It has been a language design decision to remove the\npossibility to deactivate assertions.\"\n\nIn the event that a user uses an assert such as:\n\n``` groovy\ndef bar=false; assert bar, \"message\";\n```\n\nThe GroovyScriptEngineService throws a NoClassDefFoundError being unable\nto find the `java.lang.StringBuffer` class. It is _highly_ recommended\nthat any Groovy scripting user switch to using regular exceptions rather\nthan unconfigurable Groovy assertions.\n\nResolves #19806\n", "number": 19958, "review_comments": [ { "body": "I think we should inspect the `AssertionError` and make sure that it comes from Groovy (by inspecting the stack trace and looking for `assertFailed` and other relevant Groovy calls).\n", "created_at": "2016-08-11T16:16:40Z" }, { "body": "I guess this is because we are not granting the class permission for `StringBuffer` but I think we should?\n", "created_at": "2016-08-11T16:49:35Z" }, { "body": "Sure, I added the permissions for that (StringBuffer)\n", "created_at": "2016-08-11T19:43:16Z" }, { "body": "Okay, I added a check for this, and a comment explaining why\n", "created_at": "2016-08-11T19:43:39Z" }, { "body": "Sure, but that was before we knew that we would need to add a permission for `InvokeHelper` too? So maybe we should just not add it since it will not help with getting asserts to work? Sorry.\n", "created_at": "2016-08-11T19:46:05Z" }, { "body": "For extra super paranoia's sake can you add a test that throws NoClassDefFoundError so we have something independent that checks that user scripts that do that don't halt the jvm?\n", "created_at": "2016-08-11T19:47:23Z" }, { "body": "In order to do that, we have to add permissions for the `java.lang.NoClassDefFoundError` in ClassPermission.java. Are we sure we want to add permissions for that class only to add a test case?\n", "created_at": "2016-08-11T19:52:55Z" }, { "body": "Why are we traversing the stack trace? Shouldn't there be a Groovy method for the assert right at the top of the stack trace?\n", "created_at": "2016-08-11T19:53:41Z" }, { "body": "Makes sense to me.\n", "created_at": "2016-08-11T19:54:43Z" }, { "body": "I didn't see anything other than this to indicate that it came from Groovy's version of `assert` rather than Java's: https://gist.github.com/dakrone/89c712ec192cf770f3ffff9a9f372035\n\nI think Groovy is doing \"magic\" here and it makes it harder to identify\n", "created_at": "2016-08-11T20:03:17Z" }, { "body": "You see InvokerHelper at the very top, that is the bad guy that throws it.\n", "created_at": "2016-08-11T20:13:38Z" }, { "body": "Right, but the point is that the `InvokeHelper` is right at the top of the stack trace. 
I do not think we should be descending in case the top of the stack trace is from an assert failing elsewhere outside of Groovy.\n", "created_at": "2016-08-11T20:17:01Z" }, { "body": "Alright,\n\nI'll change to to check `ae.getStackTrace()[0]` for `InvokeHelper`, hopefully this is not fragile in any cases where there would be other classes in the stack due to function calls within the groovy script\n", "created_at": "2016-08-11T21:29:33Z" }, { "body": "Are we going to leave this on since it doesn't get us anything here for this issue?\n", "created_at": "2016-08-11T23:07:59Z" }, { "body": "It makes the error message when someone tries to use Groovy assertions point a bit more to Groovy instead of us (ie, NoClassDefFoundError for InvokerHelper rather than for StringBuffer), so I'm in favor of keeping it. If there is some security issue with it though, we can definitely leave it out. I'm fine either way.\n", "created_at": "2016-08-12T14:39:44Z" } ], "title": "Catch and wrap AssertionError and NoClassDefFoundError in groovy scripts" }
{ "commits": [ { "message": "Catch AssertionError and NoClassDefFoundError in groovy scripts\n\nPreviously, we only caught subclasses of Exception, however, there are\nsome cases when Errors are thrown instead of Exceptions. These two cases\nare `assert` and when a class cannot be found.\n\nWithout this change, the exception would bubble up to the\n`uncaughtExceptionHandler`, which would in turn, exit the JVM\n(related: #19923).\n\nA note of difference between regular Java asserts and Groovy asserts,\nfrom http://docs.groovy-lang.org/docs/latest/html/documentation/core-testing-guide.html\n\n\"Another important difference from Java is that in Groovy assertions are\nenabled by default. It has been a language design decision to remove the\npossibility to deactivate assertions.\"\n\nIn the event that a user uses an assert such as:\n\n```groovy\ndef bar=false; assert bar, \"message\";\n```\n\nThe GroovyScriptEngineService throws a NoClassDefFoundError being unable\nto find the `java.lang.StringBuffer` class. It is *highly* recommended\nthat any Groovy scripting user switch to using regular exceptions rather\nthan unconfiguration Groovy assertions.\n\nResolves #19806" } ], "files": [ { "diff": "@@ -293,10 +293,19 @@ public Object run() {\n // NOTE: we truncate the stack because IndyInterface has security issue (needs getClassLoader)\n // we don't do a security check just as a tradeoff, it cannot really escalate to anything.\n return AccessController.doPrivileged((PrivilegedAction<Object>) script::run);\n- } catch (Exception e) {\n- if (logger.isTraceEnabled()) {\n- logger.trace(\"failed to run {}\", e, compiledScript);\n+ } catch (AssertionError ae) {\n+ // Groovy asserts are not java asserts, and cannot be disabled, so we do a best-effort trying to determine if this is a\n+ // Groovy assert (in which case we wrap it and throw), or a real Java assert, in which case we rethrow it as-is, likely\n+ // resulting in the uncaughtExceptionHandler handling it.\n+ final StackTraceElement[] elements = ae.getStackTrace();\n+ if (elements.length > 0 && \"org.codehaus.groovy.runtime.InvokerHelper\".equals(elements[0].getClassName())) {\n+ logger.trace(\"failed to run {}\", ae, compiledScript);\n+ throw new ScriptException(\"Error evaluating \" + compiledScript.name(),\n+ ae, emptyList(), \"\", compiledScript.lang());\n }\n+ throw ae;\n+ } catch (Exception | NoClassDefFoundError e) {\n+ logger.trace(\"failed to run {}\", e, compiledScript);\n throw new ScriptException(\"Error evaluating \" + compiledScript.name(), e, emptyList(), \"\", compiledScript.lang());\n }\n }", "filename": "modules/lang-groovy/src/main/java/org/elasticsearch/script/groovy/GroovyScriptEngineService.java", "status": "modified" }, { "diff": "@@ -123,6 +123,13 @@ public void testEvilGroovyScripts() throws Exception {\n }\n }\n \n+ public void testGroovyScriptsThatThrowErrors() throws Exception {\n+ assertFailure(\"assert false, \\\"msg\\\";\", AssertionError.class);\n+ assertFailure(\"def foo=false; assert foo;\", AssertionError.class);\n+ // Groovy's asserts require org.codehaus.groovy.runtime.InvokerHelper, so they are denied\n+ assertFailure(\"def foo=false; assert foo, \\\"msg2\\\";\", NoClassDefFoundError.class);\n+ }\n+\n /** runs a script */\n private void doTest(String script) {\n Map<String, Object> vars = new HashMap<String, Object>();\n@@ -146,7 +153,7 @@ private void assertSuccess(String script) {\n doTest(script);\n }\n \n- /** asserts that a script triggers securityexception */\n+ /** asserts that a script triggers the given 
exceptionclass */\n private void assertFailure(String script, Class<? extends Throwable> exceptionClass) {\n try {\n doTest(script);", "filename": "modules/lang-groovy/src/test/java/org/elasticsearch/script/groovy/GroovySecurityTests.java", "status": "modified" } ] }
{ "body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue. Note that whether you're filing a bug report or a\nfeature request, ensure that your submission is for an\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\nBug reports on an OS that we do not support or feature requests\nspecific to an OS that we do not support will be closed.\n-->\n\n<!--\nIf you are filing a bug report, please remove the below feature\nrequest block and provide responses for all of the below items.\n-->\n\n**Elasticsearch version**: 5.0-alpha5\n\n**Plugins installed**: []\n\n**JVM version**: 1.8.0_92\n\n**OS version**: Windows Server 2008 R2\n\n**Description of the problem including expected versus actual behavior**:\nExecution of the \"service install\" command fails due to folder name containing spaces (see screenshot)\n![cattura](https://cloud.githubusercontent.com/assets/6169392/17583831/4fd0040e-5fb3-11e6-9a6c-4d5bbfadd794.PNG). The same command / configuration works fine with ES 2.3.x\n\n**Steps to reproduce**:\n1. Install ES 5.0-alpha5 into a folder like C:\\Program Files\\elasticsearch\n2. As an admin, go to to bin/ and try to execute \"service install myesservicename\"\n", "comments": [ { "body": "I can reproduce this issue, thank you for reporting @alextxm. I have marked you as eligible for the [Pioneer Program](https://www.elastic.co/blog/elastic-pioneer-program).\n", "created_at": "2016-08-11T12:35:58Z" }, { "body": "I opened #19951.\n", "created_at": "2016-08-11T13:52:53Z" } ], "number": 19941, "title": "ES 5.0-alpha5 on windows fails to install windows service" }
{ "body": "This commit fixes the handling of spaces in the path to the jvm.options\nfile on Windows. The issue is that the extraneous set of quotes were\nincluded as part of the value of ES_JVM_OPTIONS thus confusing further\ndownstream commands.\n\nCloses #19941\n", "number": 19951, "review_comments": [], "title": "Fix handling of spaces for jvm.options on Windows" }
{ "commits": [ { "message": "Fix handling of spaces for jvm.options on Windows\n\nThis commit fixes the handling of spaces in the path to the jvm.options\nfile on Windows. The issue is that the extraneous set of quotes were\nincluded as part of the value of ES_JVM_OPTIONS thus confusing further\ndownstream commands." } ], "files": [ { "diff": "@@ -62,7 +62,7 @@ SET HOSTNAME=%COMPUTERNAME%\n \n if \"%ES_JVM_OPTIONS%\" == \"\" (\n rem '0' is the batch file, '~dp' appends the drive and path\n-set ES_JVM_OPTIONS=\"%~dp0\\..\\config\\jvm.options\"\n+set ES_JVM_OPTIONS=%~dp0\\..\\config\\jvm.options\n )\n \n @setlocal", "filename": "distribution/src/main/resources/bin/elasticsearch.bat", "status": "modified" }, { "diff": "@@ -168,7 +168,7 @@ if exist \"%JAVA_HOME%\"\\bin\\client\\jvm.dll (\n \n :foundJVM\n if \"%ES_JVM_OPTIONS%\" == \"\" (\n-set ES_JVM_OPTIONS=\"%ES_HOME%\\config\\jvm.options\"\n+set ES_JVM_OPTIONS=%ES_HOME%\\config\\jvm.options\n )\n \n if not \"%ES_JAVA_OPTS%\" == \"\" set ES_JAVA_OPTS=%ES_JAVA_OPTS: =;%", "filename": "distribution/src/main/resources/bin/service.bat", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 2.3.4\n\n**JVM version**: 1.8.0_92\n\n**OS version**: Debian Jessie\n\n**Description of the problem including expected versus actual behavior**:\nSimilar to #17025\n\nSlow query logging in 2.3.4 seems a little broken:\n\n```\n[2016-08-02 01:15:51,840][INFO ][index.search.slowlog.query] [test_index]took[10.1ms], took_millis[10], types[search], stats[], search_type[QUERY_THEN_FETCH], total_shards[4], source[{\"query\":{\"bool\":{\"should\":[{\"filtered\":{\"query\":{\"bool\":{\"should\":[{\"simple_query_string\":{\"query\":\"indoor\",\"fields\":[\"name^2\",\"name.partial\",\"description\",\"sku^3\",\"sku.partial\",\"keywords^2\",\"keywords.partial\",\"brand_name^2\"]}},{\"prefix\":{\"sku\":\"indoor\"}},{\"prefix\":{\"keywords\":\"indoor\"}}]}},\"filter\":{\"bool\":{\"must\":[{\"term\":{\"store_id\":10115937}},{\"term\":{\"is_visible\":true}},{\"term\":{\"is_visible\":true}},{\"term\":{\"is_visible\":true}},{\"type\":{\"value\":\"product\"}}]}}}},{\"filtered\":{\"query\":{\"simple_query_string\":{\"query\":\"indoor\",\"fields\":[\"name^2\",\"name.partial\",\"content\",\"page_title^3\",\"keywords^2\",\"meta_description\"]}},\"filter\":{\"bool\":{\"must\":[{\"term\":{\"store_id\":10115937}},{\"term\":{\"is_visible\":true}},{\"term\":{\"is_visible\":true}},{\"term\":{\"customers_only\":false}},{\"term\":{\"is_visible\":true}},{\"type\":{\"value\":\"page\"}}]}}}},{\"filtered\":{\"query\":{\"simple_query_string\":{\"query\":\"indoor\",\"fields\":[\"title^2\",\"title.partial\",\"content\",\"keywords^2\"]}},\"filter\":{\"bool\":{\"must\":[{\"term\":{\"store_id\":10115937}},{\"term\":{\"is_visible\":true}},{\"term\":{\"is_visible\":true}},{\"term\":{\"is_visible\":true}},{\"type\":{\"value\":\"post\"}}]}}}}]}},\"from\":0,\"size\":5}], extra_source[],\n```\n\nThis bit confuses me a little: `[test_index]took[10.1ms]` \nThere is no space here, there is no shard ID and the hostname is now missing from the log entry.\n\nI would expect it to look like this (as per previous versions):\n\n```\n[2016-08-02 01:15:51,840][INFO ][index.search.slowlog.query] [myhostname] [test_index][12] took[10.1ms], took_millis[10], types[search], stats[], search_type[QUERY_THEN_FETCH], total_shards[4], 
source[{\"query\":{\"bool\":{\"should\":[{\"filtered\":{\"query\":{\"bool\":{\"should\":[{\"simple_query_string\":{\"query\":\"indoor\",\"fields\":[\"name^2\",\"name.partial\",\"description\",\"sku^3\",\"sku.partial\",\"keywords^2\",\"keywords.partial\",\"brand_name^2\"]}},{\"prefix\":{\"sku\":\"indoor\"}},{\"prefix\":{\"keywords\":\"indoor\"}}]}},\"filter\":{\"bool\":{\"must\":[{\"term\":{\"store_id\":10115937}},{\"term\":{\"is_visible\":true}},{\"term\":{\"is_visible\":true}},{\"term\":{\"is_visible\":true}},{\"type\":{\"value\":\"product\"}}]}}}},{\"filtered\":{\"query\":{\"simple_query_string\":{\"query\":\"indoor\",\"fields\":[\"name^2\",\"name.partial\",\"content\",\"page_title^3\",\"keywords^2\",\"meta_description\"]}},\"filter\":{\"bool\":{\"must\":[{\"term\":{\"store_id\":10115937}},{\"term\":{\"is_visible\":true}},{\"term\":{\"is_visible\":true}},{\"term\":{\"customers_only\":false}},{\"term\":{\"is_visible\":true}},{\"type\":{\"value\":\"page\"}}]}}}},{\"filtered\":{\"query\":{\"simple_query_string\":{\"query\":\"indoor\",\"fields\":[\"title^2\",\"title.partial\",\"content\",\"keywords^2\"]}},\"filter\":{\"bool\":{\"must\":[{\"term\":{\"store_id\":10115937}},{\"term\":{\"is_visible\":true}},{\"term\":{\"is_visible\":true}},{\"term\":{\"is_visible\":true}},{\"type\":{\"value\":\"post\"}}]}}}}]}},\"from\":0,\"size\":5}], extra_source[],\n```\n\n(Note the shard ID after the index name `[test_index][12]`)\n\n**Steps to reproduce**:\n1. Enable slow query logs with default `logging.yml`\n2. Look at entries in slow query logs\n\n**Provide logs (if relevant)**:\n(as above)\n", "comments": [ { "body": "@jimferenczi could you take a look at this please?\n", "created_at": "2016-08-11T11:22:17Z" }, { "body": "I opened #19949 to add the shard id in the message. Not sure about the hostname. We don't print it since 2.x and the search slow log should be printed on the node that is responsible of the slow search. IMO this is not needed, @clintongormley WDYT ? \n", "created_at": "2016-08-11T13:04:23Z" } ], "number": 19735, "title": "Hostname and shard ID missing from slow log" }
{ "body": "Add the shard ID and the node name in the output of the search slow log.\nThis change outputs '[nodeName] [indexName][shardId]' instead of [indexName/indexUUID]\n\ncloses #19735\n", "number": 19949, "review_comments": [], "title": "Add shardId and node name in search slow log" }
{ "commits": [ { "message": "Add the shard ID and the node name in the output of the search slow log.\nThis change outputs '[nodeName] [indexName][shardId]' instead of [indexName/indexUUID]\n\ncloses #19735" } ], "files": [ { "diff": "@@ -33,7 +33,6 @@\n /**\n */\n public final class SearchSlowLog implements SearchOperationListener {\n- private final Index index;\n private boolean reformat;\n \n private long queryWarnThreshold;\n@@ -84,10 +83,8 @@ public final class SearchSlowLog implements SearchOperationListener {\n \n public SearchSlowLog(IndexSettings indexSettings) {\n \n- this.queryLogger = Loggers.getLogger(INDEX_SEARCH_SLOWLOG_PREFIX + \".query\");\n- this.fetchLogger = Loggers.getLogger(INDEX_SEARCH_SLOWLOG_PREFIX + \".fetch\");\n-\n- this.index = indexSettings.getIndex();\n+ this.queryLogger = Loggers.getLogger(INDEX_SEARCH_SLOWLOG_PREFIX + \".query\", indexSettings.getSettings());\n+ this.fetchLogger = Loggers.getLogger(INDEX_SEARCH_SLOWLOG_PREFIX + \".fetch\", indexSettings.getSettings());\n \n indexSettings.getScopedSettings().addSettingsUpdateConsumer(INDEX_SEARCH_SLOWLOG_REFORMAT, this::setReformat);\n this.reformat = indexSettings.getValue(INDEX_SEARCH_SLOWLOG_REFORMAT);\n@@ -122,46 +119,44 @@ private void setLevel(SlowLogLevel level) {\n @Override\n public void onQueryPhase(SearchContext context, long tookInNanos) {\n if (queryWarnThreshold >= 0 && tookInNanos > queryWarnThreshold) {\n- queryLogger.warn(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n+ queryLogger.warn(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n } else if (queryInfoThreshold >= 0 && tookInNanos > queryInfoThreshold) {\n- queryLogger.info(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n+ queryLogger.info(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n } else if (queryDebugThreshold >= 0 && tookInNanos > queryDebugThreshold) {\n- queryLogger.debug(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n+ queryLogger.debug(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n } else if (queryTraceThreshold >= 0 && tookInNanos > queryTraceThreshold) {\n- queryLogger.trace(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n+ queryLogger.trace(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n }\n }\n \n @Override\n public void onFetchPhase(SearchContext context, long tookInNanos) {\n if (fetchWarnThreshold >= 0 && tookInNanos > fetchWarnThreshold) {\n- fetchLogger.warn(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n+ fetchLogger.warn(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n } else if (fetchInfoThreshold >= 0 && tookInNanos > fetchInfoThreshold) {\n- fetchLogger.info(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n+ fetchLogger.info(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n } else if (fetchDebugThreshold >= 0 && tookInNanos > fetchDebugThreshold) {\n- fetchLogger.debug(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n+ fetchLogger.debug(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n } else if (fetchTraceThreshold >= 0 && tookInNanos > fetchTraceThreshold) {\n- fetchLogger.trace(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n+ fetchLogger.trace(\"{}\", 
new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n }\n }\n \n static final class SlowLogSearchContextPrinter {\n private final SearchContext context;\n- private final Index index;\n private final long tookInNanos;\n private final boolean reformat;\n \n- public SlowLogSearchContextPrinter(Index index, SearchContext context, long tookInNanos, boolean reformat) {\n+ public SlowLogSearchContextPrinter(SearchContext context, long tookInNanos, boolean reformat) {\n this.context = context;\n- this.index = index;\n this.tookInNanos = tookInNanos;\n this.reformat = reformat;\n }\n \n @Override\n public String toString() {\n StringBuilder sb = new StringBuilder();\n- sb.append(index).append(\" \");\n+ sb.append(context.indexShard().shardId()).append(\" \");\n sb.append(\"took[\").append(TimeValue.timeValueNanos(tookInNanos)).append(\"], took_millis[\").append(TimeUnit.NANOSECONDS.toMillis(tookInNanos)).append(\"], \");\n if (context.getQueryShardContext().getTypes() == null) {\n sb.append(\"types[], \");", "filename": "core/src/main/java/org/elasticsearch/index/SearchSlowLog.java", "status": "modified" }, { "diff": "@@ -41,7 +41,6 @@\n \n import static org.hamcrest.Matchers.startsWith;\n \n-\n public class SearchSlowLogTests extends ESSingleNodeTestCase {\n @Override\n protected SearchContext createSearchContext(IndexService indexService) {\n@@ -54,7 +53,7 @@ public ShardSearchRequest request() {\n return new ShardSearchRequest() {\n @Override\n public ShardId shardId() {\n- return null;\n+ return new ShardId(indexService.index(), 0);\n }\n \n @Override\n@@ -129,8 +128,8 @@ public void testSlowLogSearchContextPrinterToLog() throws IOException {\n IndexService index = createIndex(\"foo\");\n // Turning off document logging doesn't log source[]\n SearchContext searchContext = createSearchContext(index);\n- SearchSlowLog.SlowLogSearchContextPrinter p = new SearchSlowLog.SlowLogSearchContextPrinter(index.index(), searchContext, 10, true);\n- assertThat(p.toString(), startsWith(index.index().toString()));\n+ SearchSlowLog.SlowLogSearchContextPrinter p = new SearchSlowLog.SlowLogSearchContextPrinter(searchContext, 10, true);\n+ assertThat(p.toString(), startsWith(\"[foo][0]\"));\n }\n \n public void testReformatSetting() {", "filename": "core/src/test/java/org/elasticsearch/index/SearchSlowLogTests.java", "status": "modified" } ] }
{ "body": "If you run Elasticsearch with the ingest-attachment plugin:\n\n```\ngradle plugins:ingest-attachment:run\n```\n\nAnd then you use it on a document:\n\n``` js\nPUT _ingest/pipeline/attachment\n{\n \"description\" : \"Extract attachment information\",\n \"processors\" : [\n {\n \"attachment\" : {\n \"field\" : \"data\"\n }\n }\n ]\n}\nPUT my_index/my_type/my_id?pipeline=attachment\n{\n \"data\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\"\n}\nGET my_index/my_type/my_id\n```\n\nYou get this back:\n\n``` js\n# PUT _ingest/pipeline/attachment\n{\n \"acknowledged\": true\n}\n\n# PUT my_index/my_type/my_id?pipeline=attachment\n{\n \"_index\": \"my_index\",\n \"_type\": \"my_type\",\n \"_id\": \"my_id\",\n \"_version\": 2,\n \"result\": \"updated\",\n \"_shards\": {\n \"total\": 2,\n \"successful\": 1,\n \"failed\": 0\n },\n \"created\": false\n}\n\n# GET my_index/my_type/my_id\n{\n \"_index\": \"my_index\",\n \"_type\": \"my_type\",\n \"_id\": \"my_id\",\n \"_version\": 2,\n \"found\": true,\n \"_source\": {\n \"data\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\",\n \"attachment\": {\n \"content_type\": \"application/rtf\",\n \"language\": \"ro\",\n \"content\": \"Lorem ipsum dolor sit amet\",\n \"content_length\": \"28\"\n }\n }\n}\n```\n\nThis part seems wrong\n\n```\n \"content_length\": \"28\"\n```\n\n`content_length` feels like a number rather than a string. That'd make its dynamic mapping type a number which'd be useful for things like range queries.\n", "comments": [], "number": 19924, "title": "ingest-attachment adds content-length as string" }
{ "body": "If you run Elasticsearch with the ingest-attachment plugin:\n\n``` sh\ngradle plugins:ingest-attachment:run\n```\n\nAnd then you use it on a document:\n\n``` js\n PUT _ingest/pipeline/attachment\n {\n \"description\" : \"Extract attachment information\",\n \"processors\" : [\n {\n \"attachment\" : {\n \"field\" : \"data\"\n }\n }\n ]\n }\n PUT my_index/my_type/my_id?pipeline=attachment\n {\n \"data\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\"\n }\n GET my_index/my_type/my_id\n```\n\n You were getting this back:\n\n``` js\n # PUT _ingest/pipeline/attachment\n {\n \"acknowledged\": true\n }\n\n # PUT my_index/my_type/my_id?pipeline=attachment\n {\n \"_index\": \"my_index\",\n \"_type\": \"my_type\",\n \"_id\": \"my_id\",\n \"_version\": 2,\n \"result\": \"updated\",\n \"_shards\": {\n \"total\": 2,\n \"successful\": 1,\n \"failed\": 0\n },\n \"created\": false\n }\n\n # GET my_index/my_type/my_id\n {\n \"_index\": \"my_index\",\n \"_type\": \"my_type\",\n \"_id\": \"my_id\",\n \"_version\": 2,\n \"found\": true,\n \"_source\": {\n \"data\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\",\n \"attachment\": {\n \"content_type\": \"application/rtf\",\n \"language\": \"ro\",\n \"content\": \"Lorem ipsum dolor sit amet\",\n \"content_length\": \"28\"\n }\n }\n }\n```\n\nWith this commit you are now getting:\n\n``` js\n # GET my_index/my_type/my_id\n {\n \"_index\": \"my_index\",\n \"_type\": \"my_type\",\n \"_id\": \"my_id\",\n \"_version\": 2,\n \"found\": true,\n \"_source\": {\n \"data\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\",\n \"attachment\": {\n \"content_type\": \"application/rtf\",\n \"language\": \"ro\",\n \"content\": \"Lorem ipsum dolor sit amet\",\n \"content_length\": 28\n }\n }\n }\n```\n\nCloses #19924\n", "number": 19927, "review_comments": [ { "body": "Maybe `try {length = Long.parseLong(contentLength);} catch (ParseException e) {logger.warn(\"Invalid content length header [{}] in [{}], using length of parsed document [{}] instead\", e, contentLength, someIdentifierIdunnoWhat, parsedContent.length()); length = parseContent.length();}`?\n\nI'm not sure we should blow up entirely if the header is garbage.\n", "created_at": "2016-08-10T16:41:59Z" }, { "body": "Hmm...shouldn't we? Isn't no leniency pretty much our modus operandi? And there's the ability to attach an on error processor?\n", "created_at": "2016-08-10T16:44:21Z" }, { "body": "Indeed.\n\nIn the former mapper-attachments plugin we were having a setting to ignore metadata errors but continue with text extraction.\nMay be we should do that (in another PR though).\n", "created_at": "2016-08-10T16:46:55Z" }, { "body": "> And there's the ability to attach an on error processor?\n\nThat'd mean throwing out all the data we extract because of one bad header. Maybe that is ok?\n\nI get the sense that we should be lenient with documents and metadata fields because I don't think they really have a specification. Or maybe not a common one. But I'm shooting from the hip here, just going on instinct based on working with POI years and years ago.\n\nAnother option is to simply not add a the field at all if it isn't a number. But that feels weird because we already add the field if it isn't present.\n", "created_at": "2016-08-10T16:50:01Z" }, { "body": "> May be we should do that (in another PR though).\n\nThat makes more sense to me. 
Leave the strictness as is and open an issue about missing that functionality from the old plugin?\n", "created_at": "2016-08-10T16:50:50Z" }, { "body": "I opened https://github.com/elastic/elasticsearch/issues/19928\n", "created_at": "2016-08-10T17:01:09Z" } ], "title": "Adds content-length as number" }
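The review above debates whether a malformed `Content-Length` header should fail the document or fall back to the extracted text length. The lenient variant that was discussed but not merged would look roughly like the sketch below (class and method names invented); the merged change simply calls `Long.parseLong` and lets a bad header fail the processor.

```java
class ContentLengthSketch {
    // Lenient resolution that was discussed in review but not adopted.
    static long resolve(String contentLengthHeader, String parsedContent) {
        if (contentLengthHeader == null || contentLengthHeader.isEmpty()) {
            return parsedContent.length();
        }
        try {
            return Long.parseLong(contentLengthHeader.trim());
        } catch (NumberFormatException e) {
            // fall back to the length of the text Tika actually extracted
            return parsedContent.length();
        }
    }
}
```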
{ "commits": [ { "message": "Adds content-length as number\n\nIf you run Elasticsearch with the ingest-attachment plugin:\n\n```sh\ngradle plugins:ingest-attachment:run\n```\n\nAnd then you use it on a document:\n\n```js\n PUT _ingest/pipeline/attachment\n {\n \"description\" : \"Extract attachment information\",\n \"processors\" : [\n {\n \"attachment\" : {\n \"field\" : \"data\"\n }\n }\n ]\n }\n PUT my_index/my_type/my_id?pipeline=attachment\n {\n \"data\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\"\n }\n GET my_index/my_type/my_id\n```\n\n You were getting this back:\n\n```js\n # PUT _ingest/pipeline/attachment\n {\n \"acknowledged\": true\n }\n\n # PUT my_index/my_type/my_id?pipeline=attachment\n {\n \"_index\": \"my_index\",\n \"_type\": \"my_type\",\n \"_id\": \"my_id\",\n \"_version\": 2,\n \"result\": \"updated\",\n \"_shards\": {\n \"total\": 2,\n \"successful\": 1,\n \"failed\": 0\n },\n \"created\": false\n }\n\n # GET my_index/my_type/my_id\n {\n \"_index\": \"my_index\",\n \"_type\": \"my_type\",\n \"_id\": \"my_id\",\n \"_version\": 2,\n \"found\": true,\n \"_source\": {\n \"data\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\",\n \"attachment\": {\n \"content_type\": \"application/rtf\",\n \"language\": \"ro\",\n \"content\": \"Lorem ipsum dolor sit amet\",\n \"content_length\": \"28\"\n }\n }\n }\n```\n\nWith this commit you are now getting:\n\n```\n # GET my_index/my_type/my_id\n {\n \"_index\": \"my_index\",\n \"_type\": \"my_type\",\n \"_id\": \"my_id\",\n \"_version\": 2,\n \"found\": true,\n \"_source\": {\n \"data\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\",\n \"attachment\": {\n \"content_type\": \"application/rtf\",\n \"language\": \"ro\",\n \"content\": \"Lorem ipsum dolor sit amet\",\n \"content_length\": 28\n }\n }\n }\n```\n\nCloses #19924" }, { "message": "Merge branch 'master' into fix/19924-attachment" }, { "message": "Update documentation after merge with master" } ], "files": [ { "diff": "@@ -91,7 +91,7 @@ Returns this:\n \"content_type\": \"application/rtf\",\n \"language\": \"ro\",\n \"content\": \"Lorem ipsum dolor sit amet\",\n- \"content_length\": \"28\"\n+ \"content_length\": 28\n }\n }\n }", "filename": "docs/plugins/ingest-attachment.asciidoc", "status": "modified" }, { "diff": "@@ -119,7 +119,12 @@ public void execute(IngestDocument ingestDocument) {\n \n if (properties.contains(Property.CONTENT_LENGTH)) {\n String contentLength = metadata.get(Metadata.CONTENT_LENGTH);\n- String length = Strings.hasLength(contentLength) ? 
contentLength : String.valueOf(parsedContent.length());\n+ long length;\n+ if (Strings.hasLength(contentLength)) {\n+ length = Long.parseLong(contentLength);\n+ } else {\n+ length = parsedContent.length();\n+ }\n additionalFields.put(Property.CONTENT_LENGTH.toLowerCase(), length);\n }\n } catch (Exception e) {", "filename": "plugins/ingest-attachment/src/main/java/org/elasticsearch/ingest/attachment/AttachmentProcessor.java", "status": "modified" }, { "diff": "@@ -33,7 +33,7 @@\n - length: { _source.attachment: 4 }\n - match: { _source.attachment.content: \"This is an english text to test if the pipeline works\" }\n - match: { _source.attachment.language: \"en\" }\n- - match: { _source.attachment.content_length: \"54\" }\n+ - match: { _source.attachment.content_length: 54 }\n - match: { _source.attachment.content_type: \"text/plain; charset=ISO-8859-1\" }\n \n ---\n@@ -111,4 +111,4 @@\n - length: { _source.attachment: 4 }\n - match: { _source.attachment.content: \"This is an english text to tes\" }\n - match: { _source.attachment.language: \"en\" }\n- - match: { _source.attachment.content_length: \"30\" }\n+ - match: { _source.attachment.content_length: 30 }", "filename": "plugins/ingest-attachment/src/test/resources/rest-api-spec/test/ingest_attachment/20_attachment_processor.yaml", "status": "modified" }, { "diff": "@@ -34,7 +34,7 @@\n - match: { _source.attachment.language: \"et\" }\n - match: { _source.attachment.author: \"David Pilato\" }\n - match: { _source.attachment.date: \"2016-03-10T08:25:00Z\" }\n- - match: { _source.attachment.content_length: \"19\" }\n+ - match: { _source.attachment.content_length: 19 }\n - match: { _source.attachment.content_type: \"application/msword\" }\n \n \n@@ -74,6 +74,6 @@\n - match: { _source.attachment.language: \"et\" }\n - match: { _source.attachment.author: \"David Pilato\" }\n - match: { _source.attachment.date: \"2016-03-10T08:24:00Z\" }\n- - match: { _source.attachment.content_length: \"19\" }\n+ - match: { _source.attachment.content_length: 19 }\n - match: { _source.attachment.content_type: \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\" }\n ", "filename": "plugins/ingest-attachment/src/test/resources/rest-api-spec/test/ingest_attachment/30_files_supported.yaml", "status": "modified" } ] }
{ "body": "**Elasticsearch version**:\n5.0.0-alpha4\n\n**JVM version**:\nOracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_101/25.101-b13\n\n**OS version**:\nLinux 107r01pc 4.2.0-42-generic #49~14.04.1-Ubuntu SMP Wed Jun 29 20:22:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux\n\n**Description of the problem including expected versus actual behavior**:\nIn a groovy scripted update asserting while referencing variables and specifying a message will cause a class not found exception.\n\n**Steps to reproduce**:\n1. on a clean elasticsearch enable scripting\n2. run some curl commands, see below\n3. observe the results\n\n**Various commands**:\n1. `curl -XPOST 'http://localhost:9200/a/a/a/_update' -d '{\"scripted_upsert\": true, \"script\": {\"inline\": \"assert false, \\\"foo\\\";\"}, \"upsert\": {}}'`\n - `{\"error\":{\"root_cause\":[{\"type\":\"remote_transport_exception\",\"reason\":\"[Vertigo][127.0.0.1:9300][indices:data/write/update[s]]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"failed to execute script\",\"caused_by\":{\"type\":\"script_exception\",\"reason\":\"Error evaluating assert false, \\\"foo\\\";\",\"caused_by\":{\"type\":\"assertion_error\",\"reason\":\"foo. Expression: false\"},\"script_stack\":[],\"script\":\"\",\"lang\":\"groovy\"}},\"status\":400}`\n2. `curl -XPOST 'http://localhost:9200/a/a/b/_update' -d '{\"scripted_upsert\": true, \"script\": {\"inline\": \"def bar=false; assert bar, \\\"foo\\\";\"}, \"upsert\": {}}'`\n - `{\"error\":{\"root_cause\":[{\"type\":\"remote_transport_exception\",\"reason\":\"[Vertigo][127.0.0.1:9300][indices:data/write/update[s]]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"failed to execute script\",\"caused_by\":{\"type\":\"script_exception\",\"reason\":\"Error evaluating def bar=false; assert bar, \\\"foo\\\";\",\"caused_by\":{\"type\":\"no_class_def_found_error\",\"reason\":\"java/lang/StringBuffer\"},\"script_stack\":[],\"script\":\"\",\"lang\":\"groovy\"}},\"status\":400}`\n3. `curl -XPOST 'http://localhost:9200/a/a/b/_update' -d '{\"scripted_upsert\": true, \"script\": {\"inline\": \"def bar=false; assert bar;\"}, \"upsert\": {}}'`\n - `{\"error\":{\"root_cause\":[{\"type\":\"remote_transport_exception\",\"reason\":\"[Vertigo][127.0.0.1:9300][indices:data/write/update[s]]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"failed to execute script\",\"caused_by\":{\"type\":\"script_exception\",\"reason\":\"Error evaluating def bar=false; assert bar;\",\"caused_by\":{\"type\":\"power_assertion_error\",\"reason\":\"assert bar\\n |\\n false\"},\"script_stack\":[],\"script\":\"\",\"lang\":\"groovy\"}},\"status\":400}`\n\nSomewhat more clearly:\n1. using `assert false, \"message\"` works as expected\n2. using `assert var, \"message\"` throws a class not found exception somewhere down the stack\n3. using `assert var` works as expected\n\nI've tried this on a more recent snapshot (34bb1508637368c43b792992646a612bb8022e99) and the assert shows up in the log, but curl never prints anything nor exists, as if no response is generated.\nThis is the log:\n\n```\n[2016-08-04 18:07:53,886][ERROR][bootstrap ] [] fatal error in thread [elasticsearch[0ZFv9t-][index][T#1]], exiting\njava.lang.AssertionError: foo. 
Expression: false\n at org.codehaus.groovy.runtime.InvokerHelper.assertFailed(InvokerHelper.java:404)\n at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.assertFailed(ScriptBytecodeAdapter.java:650)\n at 20a61a940854553b62db2cfb00cb1511e91aac90.run(20a61a940854553b62db2cfb00cb1511e91aac90:1)\n at java.security.AccessController.doPrivileged(Native Method)\n at org.elasticsearch.script.groovy.GroovyScriptEngineService$GroovyScript.run(GroovyScriptEngineService.java:295)\n at org.elasticsearch.action.update.UpdateHelper.executeScript(UpdateHelper.java:253)\n at org.elasticsearch.action.update.UpdateHelper.prepare(UpdateHelper.java:103)\n at org.elasticsearch.action.update.UpdateHelper.prepare(UpdateHelper.java:80)\n at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:179)\n at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:172)\n at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:69)\n at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$ShardTransportHandler.messageReceived(TransportInstanceSingleOperationAction.java:247)\n at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$ShardTransportHandler.messageReceived(TransportInstanceSingleOperationAction.java:243)\n at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)\n at org.elasticsearch.transport.TransportService$5.doRun(TransportService.java:517)\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:510)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n\nException: java.security.AccessControlException thrown from the UncaughtExceptionHandler in thread \"elasticsearch[0ZFv9t-][index][T#1]\"\n```\n", "comments": [ { "body": "@SweetNSourPavement I was able to reproduce this, this looks pretty serious, it causes a thread to essentially hang forever on master. I'll work on a fix.\n", "created_at": "2016-08-08T21:47:40Z" } ], "number": 19806, "title": "Groovy assert misses StringBuffer" }
{ "body": "Today in the uncaught exception handler, we attempt to halt the virtual\nmachine on fatal errors. Yet, halting the virtual machine requires\nprivileges which might not be granted to the caller when the exception\nis thrown for example from a scripting engine. This means that if an\nOutOfMemoryError or another fatal error is hit inside a script, the\nvirtual machine will not exit because the halt call will be denied for\nsecuriry privileges. In this commit, we mark this halt call as trusted\nso that the virtual machine can be halted if a fatal error is\nencountered in a script.\n\nRelates #19272, relates #19806\n", "number": 19923, "review_comments": [], "title": "Mark halting the virtual machine as privileged" }
{ "commits": [ { "message": "Mark halting the virtual machine as privileged\n\nToday in the uncaught exception handler, we attempt to halt the virtual\nmachine on fatal errors. Yet, halting the virtual machine requires\nprivileges which might not be granted to the caller when the exception\nis thrown for example from a scripting engine. This means that if an\nOutOfMemoryError or another fatal error is hit inside a script, the\nvirtual machine will not exit because the halt call will be denied for\nsecuriry privileges. In this commit, we mark this halt call as trusted\nso that the virtual machine can be halted if a fatal error is\nencountered in a script." }, { "message": "Everything in its right place\n\nThis commit removes a suppress forbidden annotation to the right\nplace. This suppress forbidden annotation appeared in the wrong place\nafter the forbidden method that it was suppressing was moved to a method\non an anonymous class." } ], "files": [ { "diff": "@@ -25,6 +25,8 @@\n import org.elasticsearch.common.logging.Loggers;\n \n import java.io.IOError;\n+import java.security.AccessController;\n+import java.security.PrivilegedAction;\n import java.util.Objects;\n import java.util.function.Supplier;\n \n@@ -85,10 +87,16 @@ void onNonFatalUncaught(final String threadName, final Throwable t) {\n }\n \n // visible for testing\n- @SuppressForbidden(reason = \"halt\")\n void halt(int status) {\n- // we halt to prevent shutdown hooks from running\n- Runtime.getRuntime().halt(status);\n+ AccessController.doPrivileged(new PrivilegedAction<Void>() {\n+ @SuppressForbidden(reason = \"halt\")\n+ @Override\n+ public Void run() {\n+ // we halt to prevent shutdown hooks from running\n+ Runtime.getRuntime().halt(status);\n+ return null;\n+ }\n+ });\n }\n \n }", "filename": "core/src/main/java/org/elasticsearch/bootstrap/ElasticsearchUncaughtExceptionHandler.java", "status": "modified" } ] }