Record fields: issue (dict), pr (dict), pr_details (dict)
{ "body": "I was benchmarking ES indexing and noticed that IMC no longer asks shards to write indexing buffers to disk, which is horrible :)\n\nI dug a bit, and I see that IMC is passed an `Iterable<IndexShard> indexServices` to its ctor, which it uses to find all active shards. However, this iterable for some reason produces no shards.\n\nHere's the full stack trace to IMC init, so I think somehow guice is not giving IMC the right iterable or something? \n\n```\nnode0: java.lang.Throwable\nnode0: at org.elasticsearch.indices.IndexingMemoryController.<init>(IndexingMemoryController.java:104)\nnode0: at org.elasticsearch.indices.IndexingMemoryController.<init>(IndexingMemoryController.java:97)\nnode0: at org.elasticsearch.indices.IndicesService.<init>(IndicesService.java:184)\nnode0: at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\nnode0: at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\nnode0: at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\nnode0: at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\nnode0: at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:49)\nnode0: at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\nnode0: at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:116)\nnode0: at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\nnode0: at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:886)\nnode0: at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\nnode0: at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\nnode0: at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:50)\nnode0: at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)\nnode0: at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)\nnode0: at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)\nnode0: at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:116)\nnode0: at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\nnode0: at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:886)\nnode0: at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\nnode0: at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\nnode0: at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:50)\nnode0: at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:205)\nnode0: at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:197)\nnode0: at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:879)\nnode0: at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:197)\nnode0: at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:187)\nnode0: at 
org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)\nnode0: at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)\nnode0: at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:96)\nnode0: at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70)\nnode0: at org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:46)\nnode0: at org.elasticsearch.node.Node.<init>(Node.java:235)\nnode0: at org.elasticsearch.node.Node.<init>(Node.java:161)\nnode0: at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:188)\nnode0: at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:188)\nnode0: at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:261)\nnode0: at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:111)\nnode0: at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:106)\nnode0: at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:91)\nnode0: at org.elasticsearch.cli.Command.main(Command.java:53)\nnode0: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:74)\nnode0: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)\n```\n", "comments": [ { "body": "OK I see we are in fact passing the `IndicesService` to IMC ctor ... I'll dig.\n", "created_at": "2016-05-14T16:48:12Z" } ], "number": 18353, "title": "IndexingMemoryController doesn't see any active shards?" }
{ "body": "This only affects 5.0.0.\n\nToday, `Iterables.flatten` pre-iterates its input and saves away a cached copy for later iteration, but for #18353 this is bad since it means we will never see new shards after the iterable was first created (on node startup).\n\nI reviewed the few other places that use `Iterables.flatten` today and they all are fine with delaying iteration from creation time to when the flattened iterator is later consumed.\n\nCloses #18353 \n", "number": 18355, "review_comments": [], "title": "Iterables.flatten should not pre-cache the first iterator" }
{ "commits": [ { "message": "Iterables.flatten should not pre-cache the first iterator" }, { "message": "improve javadocs" }, { "message": "leave Iterables.flatten pre-caching the outer Iterable" } ], "files": [ { "diff": "@@ -54,6 +54,8 @@ public Iterator<T> iterator() {\n }\n }\n \n+ /** Flattens the two level {@code Iterable} into a single {@code Iterable}. Note that this pre-caches the values from the outer {@code\n+ * Iterable}, but not the values from the inner one. */\n public static <T> Iterable<T> flatten(Iterable<? extends Iterable<T>> inputs) {\n Objects.requireNonNull(inputs);\n return new FlattenedIterables<>(inputs);", "filename": "core/src/main/java/org/elasticsearch/common/util/iterable/Iterables.java", "status": "modified" }, { "diff": "@@ -93,7 +93,7 @@ public class IndexingMemoryController extends AbstractComponent implements Index\n \n private final ShardsIndicesStatusChecker statusChecker;\n \n- IndexingMemoryController(Settings settings, ThreadPool threadPool, Iterable<IndexShard>indexServices) {\n+ IndexingMemoryController(Settings settings, ThreadPool threadPool, Iterable<IndexShard> indexServices) {\n this(settings, threadPool, indexServices, JvmInfo.jvmInfo().getMem().getHeapMax().bytes());\n }\n ", "filename": "core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java", "status": "modified" }, { "diff": "@@ -181,7 +181,9 @@ public IndicesService(Settings settings, PluginsService pluginsService, NodeEnvi\n this.namedWriteableRegistry = namedWriteableRegistry;\n clusterSettings.addSettingsUpdateConsumer(IndexStoreConfig.INDICES_STORE_THROTTLE_TYPE_SETTING, indexStoreConfig::setRateLimitingType);\n clusterSettings.addSettingsUpdateConsumer(IndexStoreConfig.INDICES_STORE_THROTTLE_MAX_BYTES_PER_SEC_SETTING, indexStoreConfig::setRateLimitingThrottle);\n- indexingMemoryController = new IndexingMemoryController(settings, threadPool, Iterables.flatten(this));\n+ indexingMemoryController = new IndexingMemoryController(settings, threadPool,\n+ // ensure we pull an iter with new shards - flatten makes a copy\n+ () -> Iterables.flatten(this).iterator());\n this.indexScopeSetting = indexScopedSettings;\n this.circuitBreakerService = circuitBreakerService;\n this.indicesFieldDataCache = new IndicesFieldDataCache(settings, new IndexFieldDataCache.Listener() {", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java", "status": "modified" }, { "diff": "@@ -19,12 +19,14 @@\n \n package org.elasticsearch.common.util.iterable;\n \n-import org.elasticsearch.test.ESTestCase;\n-\n+import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Iterator;\n+import java.util.List;\n import java.util.NoSuchElementException;\n \n+import org.elasticsearch.test.ESTestCase;\n+\n import static org.hamcrest.object.HasToString.hasToString;\n \n public class IterablesTests extends ESTestCase {\n@@ -56,6 +58,34 @@ public String next() {\n test(iterable);\n }\n \n+ public void testFlatten() {\n+ List<List<Integer>> list = new ArrayList<>();\n+ list.add(new ArrayList<>());\n+\n+ Iterable<Integer> allInts = Iterables.flatten(list);\n+ int count = 0;\n+ for(int x : allInts) {\n+ count++;\n+ }\n+ assertEquals(0, count);\n+ list.add(new ArrayList<>());\n+ list.get(1).add(0);\n+\n+ // changes to the outer list are not seen since flatten pre-caches outer list on init:\n+ count = 0;\n+ for(int x : allInts) {\n+ count++;\n+ }\n+ assertEquals(0, count);\n+\n+ // but changes to the original inner lists are seen:\n+ list.get(0).add(0);\n+ for(int x : allInts) 
{\n+ count++;\n+ }\n+ assertEquals(1, count);\n+ }\n+\n private void test(Iterable<String> iterable) {\n try {\n Iterables.get(iterable, -1);\n@@ -73,4 +103,4 @@ private void test(Iterable<String> iterable) {\n assertThat(e, hasToString(\"java.lang.IndexOutOfBoundsException: 3\"));\n }\n }\n-}\n\\ No newline at end of file\n+}", "filename": "core/src/test/java/org/elasticsearch/common/util/iterable/IterablesTests.java", "status": "modified" } ] }
{ "body": "Test failure:\nhttps://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-intake/518/testReport/junit/org.elasticsearch.cluster.routing/DelayedAllocationIT/testDelayedAllocationChangeWithSettingTo100ms/\n\nScenario that explains situation:\n- Due to node failing, shard allocation is delayed for one minute\n- 30 seconds later, user updates delayed shard allocation to 40 second for this index -> This should allocate the shard in 10 seconds, BUT:\n- While the reroute step is called during the update of the settings, shard fetching is still happening (or any other kind of reason that makes ReplicaShardAllocator call removeAndIgnore)\n- This means that the setting is updated, but routing table is not (as we only update routing table if shard is delayed (which is not in this case, as the shard-fetching check comes first).\n- The delay in the UnassignedInfo is still marked as 1 minute.\n- RoutingService.clusterChanged checks findSmallestDelayedAllocationSettingNanos which returns 40 seconds. As this is smaller than the previous setting (which was 1 minute), it cancels existing delayed reroute, and schedules a new one (it sets minDelaySettingAtLastSchedulingNanos to 40 seconds). To determine the delay it looks at the delay stored in the shards, and only finds 1 minute delays (as UnassignedInfo was not updated), so it schedules next reroute in 1 minute (this means that the original delay is even extended by 30 seconds).\n- Shard fetching is finished (2 seconds later), and a reroute is done. Here we update the delay in UnassignedInfo to 8 seconds. The routing table is now correctly updated, BUT RoutingService does not react properly to it. It compares minDelaySettingAtLastSchedulingNanos (previously set to 40 seconds with current value of findSmallestDelayedAllocationSettingNanos which still returns 40 seconds). As such it will not reschedule.\n- This means that the shard will only be reallocated one minute and a half after node crashed unless the user updates delayed shard allocation for the index to a shorter time.\n", "comments": [], "number": 18293, "title": "Decreasing delayed allocation timeout while shard fetching can lead to longer delay" }
{ "body": "This PR simplifies the delayed shard allocation implementation by assigning clear responsibilities to the various components that are affected by delayed shard allocation:\n- `UnassignedInfo` gets a boolean flag `delayed` which determines whether assignment of the shard should be delayed. The flag gets persisted in the cluster state and is thus available across nodes, i.e. each node knows whether a shard was delayed-unassigned in a specific cluster state. Before, nodes other than the current master were unaware of that information.\n- This flag is initially set as `true` if the shard becomes unassigned due to a node leaving and the index setting `index.unassigned.node_left.delayed_timeout` being strictly positive. From then on, unassigned shards can only transition from delayed to non-delayed, never in the other direction.\n- The reroute step is in charge of removing the delay marker (comparing timestamp when node left to current timestamp). \n- A dedicated service `DelayedAllocationService`, reacting to cluster change events, has the responsibility to schedule reroutes to remove the delay marker.\n\nRelates to #18293\n", "number": 18351, "review_comments": [ { "body": "we don't need the thread pool anymore\n", "created_at": "2016-05-20T12:44:31Z" }, { "body": "I'm thinking some more about scheduling first and making it visible second and having second doubts on that one. Looking at the code again , why do we need to schedule first/what's the down side of doing it after setting delayedRerouteTask, so we know removeTaskAndCancel/removeIfSameTask work?\n", "created_at": "2016-05-25T12:29:17Z" }, { "body": "Yes, we can do it the other way around. We know that close will be run after scheduleIfNeeded, and never the other way around.\n", "created_at": "2016-05-25T12:44:40Z" }, { "body": "This should just be `assert currentTask == null;` now as we `removeTaskAndCancel();` a few lines above.\n", "created_at": "2016-05-25T12:45:42Z" }, { "body": "we don't really need this variant do we?\n", "created_at": "2016-05-25T12:59:49Z" }, { "body": "we always log this after building the change result, maybe fold it into the buildChangedResult method and call it buildResultAndLogHealthChange?\n", "created_at": "2016-05-25T13:01:44Z" }, { "body": "since withReroute is always true, and since reroute() also removes delay markes, do we really need this method? can't we just call reroute?\n", "created_at": "2016-05-25T13:04:31Z" }, { "body": "I expect to be tested as it's static, but I don't see test? I think its good to have one. Also, it seems it can be package private.\n", "created_at": "2016-05-25T13:06:19Z" }, { "body": "This makes delayed scheduling efficient (same approach as for Shard started / shard failed). If we haven't removed a delay marker, we don't need to go through reroute and do all the shard balancing...\n", "created_at": "2016-05-25T13:09:40Z" }, { "body": "this one is sneaky :)\n", "created_at": "2016-05-25T13:50:29Z" }, { "body": "I like that change in particular because it brings us in line with how we handle `unassignedIterator.removeAndIgnore()` in all other cases (we never set changed to true just for ignoring a shard).\n", "created_at": "2016-05-25T13:56:57Z" }, { "body": "I'm wondering if we want to randomly add a third node and see that we still allocate (not only say \"delayed\" is false) - maybe this is covered by another test. 
If so , no need for this here\n", "created_at": "2016-05-25T14:05:19Z" }, { "body": "can we assign `stateWithDelayedShard.getRoutingNodes().unassigned().iterator().next().unassignedInfo()` into some temp variable to make this readable?\n", "created_at": "2016-05-25T14:11:45Z" }, { "body": "mocking FTW!\n", "created_at": "2016-05-25T14:12:00Z" }, { "body": "can we randomly test another similar path where a shard from another index becomes unassinged and have a shorter delay? this should reschedule as well..\n", "created_at": "2016-05-25T14:18:32Z" }, { "body": "that's OK because of the fact that this run by a single thread, but it will be easier on the eye to use:\n\n```\nexistingTask.cancel()\n```\n\ninstead of removeTaskAndCancel()\n", "created_at": "2016-05-25T14:26:48Z" }, { "body": "it's different because applyStartedShards and applyFailedShard are doing something that isn't done in the reroute. Also this method is called when we are 99% sure that something is going to happen. I don't think it's worth the extra code path.\n", "created_at": "2016-05-25T14:29:29Z" }, { "body": "good idea!\n", "created_at": "2016-05-25T16:39:17Z" } ], "title": "Simplify delayed shard allocation" }
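The scheduling side of this design can be summarized with a small sketch (a simplified single-listener model, not the actual `DelayedAllocationService`; the class and method names here are hypothetical): on each cluster state change, compute the closest delay expiration among delayed shards, keep the currently scheduled reroute if it already runs early enough, and cancel/reschedule only when an earlier reroute is needed.

```java
import java.util.OptionalLong;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Minimal sketch of the "schedule only if an earlier reroute is needed" decision;
// not the Elasticsearch implementation, and deliberately single-threaded.
public class DelayedRerouteScheduler {
    private final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> scheduled;            // currently pending reroute, if any
    private long scheduledRunAtNanos = Long.MAX_VALUE;

    /** nextDelayNanos: smallest remaining delay across delayed-unassigned shards, or empty if none. */
    public synchronized void onClusterStateChanged(OptionalLong nextDelayNanos, Runnable reroute) {
        long now = System.nanoTime();
        if (nextDelayNanos.isEmpty()) {
            cancel();                                // nothing delayed -> no reroute needed
            return;
        }
        long runAt = now + nextDelayNanos.getAsLong();
        if (scheduled == null || runAt < scheduledRunAtNanos) {
            cancel();                                // an earlier reroute is needed than the pending one
            scheduledRunAtNanos = runAt;
            scheduled = executor.schedule(reroute, nextDelayNanos.getAsLong(), TimeUnit.NANOSECONDS);
        }
        // otherwise the pending reroute already runs early enough; keep it
    }

    private void cancel() {
        if (scheduled != null) {
            scheduled.cancel(false);
            scheduled = null;
        }
        scheduledRunAtNanos = Long.MAX_VALUE;
    }
}
```

A later-than-scheduled expiration never forces a reschedule: the pending reroute fires first, removes any expired delay markers, and the resulting cluster state change re-evaluates the remaining delayed shards.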
{ "commits": [ { "message": "Simplify delayed shard allocation" } ], "files": [ { "diff": "@@ -45,17 +45,19 @@ public final class ClusterAllocationExplanation implements ToXContent, Writeable\n private final boolean hasPendingAsyncFetch;\n private final String assignedNodeId;\n private final UnassignedInfo unassignedInfo;\n+ private final long allocationDelayMillis;\n private final long remainingDelayMillis;\n private final Map<DiscoveryNode, NodeExplanation> nodeExplanations;\n \n- public ClusterAllocationExplanation(ShardId shard, boolean primary, @Nullable String assignedNodeId, long remainingDelayMillis,\n- @Nullable UnassignedInfo unassignedInfo, boolean hasPendingAsyncFetch,\n+ public ClusterAllocationExplanation(ShardId shard, boolean primary, @Nullable String assignedNodeId, long allocationDelayMillis,\n+ long remainingDelayMillis, @Nullable UnassignedInfo unassignedInfo, boolean hasPendingAsyncFetch,\n Map<DiscoveryNode, NodeExplanation> nodeExplanations) {\n this.shard = shard;\n this.primary = primary;\n this.hasPendingAsyncFetch = hasPendingAsyncFetch;\n this.assignedNodeId = assignedNodeId;\n this.unassignedInfo = unassignedInfo;\n+ this.allocationDelayMillis = allocationDelayMillis;\n this.remainingDelayMillis = remainingDelayMillis;\n this.nodeExplanations = nodeExplanations;\n }\n@@ -66,6 +68,7 @@ public ClusterAllocationExplanation(StreamInput in) throws IOException {\n this.hasPendingAsyncFetch = in.readBoolean();\n this.assignedNodeId = in.readOptionalString();\n this.unassignedInfo = in.readOptionalWriteable(UnassignedInfo::new);\n+ this.allocationDelayMillis = in.readVLong();\n this.remainingDelayMillis = in.readVLong();\n \n int mapSize = in.readVInt();\n@@ -84,6 +87,7 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeBoolean(this.isStillFetchingShardData());\n out.writeOptionalString(this.getAssignedNodeId());\n out.writeOptionalWriteable(this.getUnassignedInfo());\n+ out.writeVLong(allocationDelayMillis);\n out.writeVLong(remainingDelayMillis);\n \n out.writeVInt(this.nodeExplanations.size());\n@@ -124,7 +128,12 @@ public UnassignedInfo getUnassignedInfo() {\n return this.unassignedInfo;\n }\n \n- /** Return the remaining allocation delay for this shard in millisocends */\n+ /** Return the configured delay before the shard can be allocated in milliseconds */\n+ public long getAllocationDelayMillis() {\n+ return this.allocationDelayMillis;\n+ }\n+\n+ /** Return the remaining allocation delay for this shard in milliseconds */\n public long getRemainingDelayMillis() {\n return this.remainingDelayMillis;\n }\n@@ -152,8 +161,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n // If we have unassigned info, show that\n if (unassignedInfo != null) {\n unassignedInfo.toXContent(builder, params);\n- long delay = unassignedInfo.getLastComputedLeftDelayNanos();\n- builder.timeValueField(\"allocation_delay_in_millis\", \"allocation_delay\", TimeValue.timeValueNanos(delay));\n+ builder.timeValueField(\"allocation_delay_in_millis\", \"allocation_delay\", TimeValue.timeValueMillis(allocationDelayMillis));\n builder.timeValueField(\"remaining_delay_in_millis\", \"remaining_delay\", TimeValue.timeValueMillis(remainingDelayMillis));\n }\n builder.startObject(\"nodes\");", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplanation.java", "status": "modified" }, { "diff": "@@ -59,6 +59,8 @@\n import java.util.Map;\n import java.util.Set;\n \n+import static 
org.elasticsearch.cluster.routing.UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING;\n+\n /**\n * The {@code TransportClusterAllocationExplainAction} is responsible for actually executing the explanation of a shard's allocation on the\n * master node in the cluster.\n@@ -237,9 +239,9 @@ public static ClusterAllocationExplanation explainShard(ShardRouting shard, Rout\n long remainingDelayMillis = 0;\n final MetaData metadata = allocation.metaData();\n final IndexMetaData indexMetaData = metadata.index(shard.index());\n- if (ui != null) {\n- final Settings indexSettings = indexMetaData.getSettings();\n- long remainingDelayNanos = ui.getRemainingDelay(System.nanoTime(), metadata.settings(), indexSettings);\n+ long allocationDelayMillis = INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.get(indexMetaData.getSettings()).getMillis();\n+ if (ui != null && ui.isDelayed()) {\n+ long remainingDelayNanos = ui.getRemainingDelay(System.nanoTime(), indexMetaData.getSettings());\n remainingDelayMillis = TimeValue.timeValueNanos(remainingDelayNanos).millis();\n }\n \n@@ -262,8 +264,9 @@ public static ClusterAllocationExplanation explainShard(ShardRouting shard, Rout\n allocation.hasPendingAsyncFetch());\n explanations.put(node, nodeExplanation);\n }\n- return new ClusterAllocationExplanation(shard.shardId(), shard.primary(), shard.currentNodeId(),\n- remainingDelayMillis, ui, gatewayAllocator.hasFetchPending(shard.shardId(), shard.primary()), explanations);\n+ return new ClusterAllocationExplanation(shard.shardId(), shard.primary(),\n+ shard.currentNodeId(), allocationDelayMillis, remainingDelayMillis, ui,\n+ gatewayAllocator.hasFetchPending(shard.shardId(), shard.primary()), explanations);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/TransportClusterAllocationExplainAction.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.cluster.metadata.MetaDataMappingService;\n import org.elasticsearch.cluster.metadata.MetaDataUpdateSettingsService;\n import org.elasticsearch.cluster.node.DiscoveryNodeService;\n+import org.elasticsearch.cluster.routing.DelayedAllocationService;\n import org.elasticsearch.cluster.routing.OperationRouting;\n import org.elasticsearch.cluster.routing.RoutingService;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n@@ -151,6 +152,7 @@ protected void configure() {\n bind(MetaDataIndexTemplateService.class).asEagerSingleton();\n bind(IndexNameExpressionResolver.class).asEagerSingleton();\n bind(RoutingService.class).asEagerSingleton();\n+ bind(DelayedAllocationService.class).asEagerSingleton();\n bind(ShardStateAction.class).asEagerSingleton();\n bind(NodeIndexDeletedAction.class).asEagerSingleton();\n bind(NodeMappingRefreshAction.class).asEagerSingleton();", "filename": "core/src/main/java/org/elasticsearch/cluster/ClusterModule.java", "status": "modified" }, { "diff": "@@ -0,0 +1,225 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster.routing;\n+\n+import org.elasticsearch.cluster.ClusterChangedEvent;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.ClusterStateListener;\n+import org.elasticsearch.cluster.ClusterStateUpdateTask;\n+import org.elasticsearch.cluster.routing.allocation.AllocationService;\n+import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n+import org.elasticsearch.cluster.service.ClusterService;\n+import org.elasticsearch.common.component.AbstractLifecycleComponent;\n+import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n+import org.elasticsearch.common.util.concurrent.FutureUtils;\n+import org.elasticsearch.threadpool.ThreadPool;\n+\n+import java.util.concurrent.ScheduledFuture;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.atomic.AtomicReference;\n+\n+/**\n+ * The {@link DelayedAllocationService} listens to cluster state changes and checks\n+ * if there are unassigned shards with delayed allocation (unassigned shards that have\n+ * the delay marker). 
These are shards that have become unassigned due to a node leaving\n+ * and which were assigned the delay marker based on the index delay setting\n+ * {@link UnassignedInfo#INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING}\n+ * (see {@link AllocationService#deassociateDeadNodes(RoutingAllocation)}).\n+ * This class is responsible for choosing the next (closest) delay expiration of a\n+ * delayed shard to schedule a reroute to remove the delay marker.\n+ * The actual removal of the delay marker happens in\n+ * {@link AllocationService#removeDelayMarkers(RoutingAllocation)}, triggering yet\n+ * another cluster change event.\n+ */\n+public class DelayedAllocationService extends AbstractLifecycleComponent<DelayedAllocationService> implements ClusterStateListener {\n+\n+ static final String CLUSTER_UPDATE_TASK_SOURCE = \"delayed_allocation_reroute\";\n+\n+ final ThreadPool threadPool;\n+ private final ClusterService clusterService;\n+ private final AllocationService allocationService;\n+\n+ AtomicReference<DelayedRerouteTask> delayedRerouteTask = new AtomicReference<>(); // package private to access from tests\n+\n+ /**\n+ * represents a delayed scheduling of the reroute action that can be cancelled.\n+ */\n+ class DelayedRerouteTask extends ClusterStateUpdateTask {\n+ final TimeValue nextDelay; // delay until submitting the reroute command\n+ final long baseTimestampNanos; // timestamp (in nanos) upon which delay was calculated\n+ volatile ScheduledFuture future;\n+ final AtomicBoolean cancelScheduling = new AtomicBoolean();\n+\n+ DelayedRerouteTask(TimeValue nextDelay, long baseTimestampNanos) {\n+ this.nextDelay = nextDelay;\n+ this.baseTimestampNanos = baseTimestampNanos;\n+ }\n+\n+ public long scheduledTimeToRunInNanos() {\n+ return baseTimestampNanos + nextDelay.nanos();\n+ }\n+\n+ public void cancelScheduling() {\n+ cancelScheduling.set(true);\n+ FutureUtils.cancel(future);\n+ removeIfSameTask(this);\n+ }\n+\n+ public void schedule() {\n+ future = threadPool.schedule(nextDelay, ThreadPool.Names.SAME, new AbstractRunnable() {\n+ @Override\n+ protected void doRun() throws Exception {\n+ if (cancelScheduling.get()) {\n+ return;\n+ }\n+ clusterService.submitStateUpdateTask(CLUSTER_UPDATE_TASK_SOURCE, DelayedRerouteTask.this);\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ logger.warn(\"failed to submit schedule/execute reroute post unassigned shard\", t);\n+ removeIfSameTask(DelayedRerouteTask.this);\n+ }\n+ });\n+ }\n+\n+ @Override\n+ public ClusterState execute(ClusterState currentState) throws Exception {\n+ removeIfSameTask(this);\n+ RoutingAllocation.Result routingResult = allocationService.reroute(currentState, \"assign delayed unassigned shards\");\n+ if (routingResult.changed()) {\n+ return ClusterState.builder(currentState).routingResult(routingResult).build();\n+ } else {\n+ return currentState;\n+ }\n+ }\n+\n+ @Override\n+ public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {\n+ if (oldState == newState) {\n+ // no state changed, check when we should remove the delay flag from the shards the next time.\n+ // if cluster state changed, we can leave the scheduling of the next delay up to the clusterChangedEvent\n+ // this should not be needed, but we want to be extra safe here\n+ scheduleIfNeeded(currentNanoTime(), newState);\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(String source, Throwable t) {\n+ removeIfSameTask(this);\n+ logger.warn(\"failed to schedule/execute reroute post unassigned shard\", t);\n+ }\n+ }\n+\n+ 
@Inject\n+ public DelayedAllocationService(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n+ AllocationService allocationService) {\n+ super(settings);\n+ this.threadPool = threadPool;\n+ this.clusterService = clusterService;\n+ this.allocationService = allocationService;\n+ clusterService.addFirst(this);\n+ }\n+\n+ @Override\n+ protected void doStart() {\n+ }\n+\n+ @Override\n+ protected void doStop() {\n+ }\n+\n+ @Override\n+ protected void doClose() {\n+ clusterService.remove(this);\n+ removeTaskAndCancel();\n+ }\n+\n+ /** override this to control time based decisions during delayed allocation */\n+ protected long currentNanoTime() {\n+ return System.nanoTime();\n+ }\n+\n+ @Override\n+ public void clusterChanged(ClusterChangedEvent event) {\n+ long currentNanoTime = currentNanoTime();\n+ if (event.state().nodes().isLocalNodeElectedMaster()) {\n+ scheduleIfNeeded(currentNanoTime, event.state());\n+ }\n+ }\n+\n+ private void removeTaskAndCancel() {\n+ DelayedRerouteTask existingTask = delayedRerouteTask.getAndSet(null);\n+ if (existingTask != null) {\n+ logger.trace(\"cancelling existing delayed reroute task\");\n+ existingTask.cancelScheduling();\n+ }\n+ }\n+\n+ private void removeIfSameTask(DelayedRerouteTask expectedTask) {\n+ delayedRerouteTask.compareAndSet(expectedTask, null);\n+ }\n+\n+ /**\n+ * Figure out if an existing scheduled reroute is good enough or whether we need to cancel and reschedule.\n+ */\n+ private void scheduleIfNeeded(long currentNanoTime, ClusterState state) {\n+ assertClusterStateThread();\n+ long nextDelayNanos = UnassignedInfo.findNextDelayedAllocation(currentNanoTime, state);\n+ if (nextDelayNanos < 0) {\n+ logger.trace(\"no need to schedule reroute - no delayed unassigned shards\");\n+ removeTaskAndCancel();\n+ } else {\n+ TimeValue nextDelay = TimeValue.timeValueNanos(nextDelayNanos);\n+ final boolean earlierRerouteNeeded;\n+ DelayedRerouteTask existingTask = delayedRerouteTask.get();\n+ DelayedRerouteTask newTask = new DelayedRerouteTask(nextDelay, currentNanoTime);\n+ if (existingTask == null) {\n+ earlierRerouteNeeded = true;\n+ } else if (newTask.scheduledTimeToRunInNanos() < existingTask.scheduledTimeToRunInNanos()) {\n+ // we need an earlier delayed reroute\n+ logger.trace(\"cancelling existing delayed reroute task as delayed reroute has to happen [{}] earlier\",\n+ TimeValue.timeValueNanos(existingTask.scheduledTimeToRunInNanos() - newTask.scheduledTimeToRunInNanos()));\n+ existingTask.cancelScheduling();\n+ earlierRerouteNeeded = true;\n+ } else {\n+ earlierRerouteNeeded = false;\n+ }\n+\n+ if (earlierRerouteNeeded) {\n+ logger.info(\"scheduling reroute for delayed shards in [{}] ({} delayed shards)\", nextDelay,\n+ UnassignedInfo.getNumberOfDelayedUnassigned(state));\n+ DelayedRerouteTask currentTask = delayedRerouteTask.getAndSet(newTask);\n+ assert existingTask == currentTask || currentTask == null;\n+ newTask.schedule();\n+ } else {\n+ logger.trace(\"no need to reschedule delayed reroute - currently scheduled delayed reroute in [{}] is enough\", nextDelay);\n+ }\n+ }\n+ }\n+\n+ // protected so that it can be overridden (and disabled) by unit tests\n+ protected void assertClusterStateThread() {\n+ ClusterService.assertClusterStateThread();\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/DelayedAllocationService.java", "status": "added" }, { "diff": "@@ -21,7 +21,6 @@\n \n import org.elasticsearch.cluster.ClusterChangedEvent;\n import org.elasticsearch.cluster.ClusterState;\n-import 
org.elasticsearch.cluster.ClusterStateListener;\n import org.elasticsearch.cluster.ClusterStateUpdateTask;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n@@ -30,12 +29,7 @@\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n-import org.elasticsearch.common.util.concurrent.FutureUtils;\n-import org.elasticsearch.threadpool.ThreadPool;\n \n-import java.util.concurrent.ScheduledFuture;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n /**\n@@ -50,27 +44,20 @@\n * actions.\n * </p>\n */\n-public class RoutingService extends AbstractLifecycleComponent<RoutingService> implements ClusterStateListener {\n+public class RoutingService extends AbstractLifecycleComponent<RoutingService> {\n \n private static final String CLUSTER_UPDATE_TASK_SOURCE = \"cluster_reroute\";\n \n- final ThreadPool threadPool;\n private final ClusterService clusterService;\n private final AllocationService allocationService;\n \n private AtomicBoolean rerouting = new AtomicBoolean();\n- private volatile long minDelaySettingAtLastSchedulingNanos = Long.MAX_VALUE;\n- private volatile ScheduledFuture registeredNextDelayFuture;\n \n @Inject\n- public RoutingService(Settings settings, ThreadPool threadPool, ClusterService clusterService, AllocationService allocationService) {\n+ public RoutingService(Settings settings, ClusterService clusterService, AllocationService allocationService) {\n super(settings);\n- this.threadPool = threadPool;\n this.clusterService = clusterService;\n this.allocationService = allocationService;\n- if (clusterService != null) {\n- clusterService.addFirst(this);\n- }\n }\n \n @Override\n@@ -83,8 +70,6 @@ protected void doStop() {\n \n @Override\n protected void doClose() {\n- FutureUtils.cancel(registeredNextDelayFuture);\n- clusterService.remove(this);\n }\n \n public AllocationService getAllocationService() {\n@@ -98,48 +83,6 @@ public final void reroute(String reason) {\n performReroute(reason);\n }\n \n- @Override\n- public void clusterChanged(ClusterChangedEvent event) {\n- if (event.state().nodes().isLocalNodeElectedMaster()) {\n- // Figure out if an existing scheduled reroute is good enough or whether we need to cancel and reschedule.\n- // If the minimum of the currently relevant delay settings is larger than something we scheduled in the past,\n- // we are guaranteed that the planned schedule will happen before any of the current shard delays are expired.\n- long minDelaySetting = UnassignedInfo.findSmallestDelayedAllocationSettingNanos(settings, event.state());\n- if (minDelaySetting <= 0) {\n- logger.trace(\"no need to schedule reroute - no delayed unassigned shards, minDelaySetting [{}], scheduled [{}]\", minDelaySetting, minDelaySettingAtLastSchedulingNanos);\n- minDelaySettingAtLastSchedulingNanos = Long.MAX_VALUE;\n- FutureUtils.cancel(registeredNextDelayFuture);\n- } else if (minDelaySetting < minDelaySettingAtLastSchedulingNanos) {\n- FutureUtils.cancel(registeredNextDelayFuture);\n- minDelaySettingAtLastSchedulingNanos = minDelaySetting;\n- TimeValue nextDelay = TimeValue.timeValueNanos(UnassignedInfo.findNextDelayedAllocationIn(event.state()));\n- assert nextDelay.nanos() > 0 : \"next delay must be non 0 as minDelaySetting is [\" + 
minDelaySetting + \"]\";\n- logger.info(\"delaying allocation for [{}] unassigned shards, next check in [{}]\",\n- UnassignedInfo.getNumberOfDelayedUnassigned(event.state()), nextDelay);\n- registeredNextDelayFuture = threadPool.schedule(nextDelay, ThreadPool.Names.SAME, new AbstractRunnable() {\n- @Override\n- protected void doRun() throws Exception {\n- minDelaySettingAtLastSchedulingNanos = Long.MAX_VALUE;\n- reroute(\"assign delayed unassigned shards\");\n- }\n-\n- @Override\n- public void onFailure(Throwable t) {\n- logger.warn(\"failed to schedule/execute reroute post unassigned shard\", t);\n- minDelaySettingAtLastSchedulingNanos = Long.MAX_VALUE;\n- }\n- });\n- } else {\n- logger.trace(\"no need to schedule reroute - current schedule reroute is enough. minDelaySetting [{}], scheduled [{}]\", minDelaySetting, minDelaySettingAtLastSchedulingNanos);\n- }\n- }\n- }\n-\n- // visible for testing\n- long getMinDelaySettingAtLastSchedulingNanos() {\n- return this.minDelaySettingAtLastSchedulingNanos;\n- }\n-\n // visible for testing\n protected void performReroute(String reason) {\n try {", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/RoutingService.java", "status": "modified" }, { "diff": "@@ -316,6 +316,7 @@ public void writeTo(StreamOutput out) throws IOException {\n \n public ShardRouting updateUnassignedInfo(UnassignedInfo unassignedInfo) {\n assert this.unassignedInfo != null : \"can only update unassign info if they are already set\";\n+ assert this.unassignedInfo.isDelayed() || (unassignedInfo.isDelayed() == false) : \"cannot transition from non-delayed to delayed\";\n return new ShardRouting(shardId, currentNodeId, relocatingNodeId, restoreSource, primary, state,\n unassignedInfo, allocationId, expectedShardSize);\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/ShardRouting.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n@@ -43,10 +44,9 @@\n public final class UnassignedInfo implements ToXContent, Writeable {\n \n public static final FormatDateTimeFormatter DATE_TIME_FORMATTER = Joda.forPattern(\"dateOptionalTime\");\n- private static final TimeValue DEFAULT_DELAYED_NODE_LEFT_TIMEOUT = TimeValue.timeValueMinutes(1);\n \n public static final Setting<TimeValue> INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING =\n- Setting.timeSetting(\"index.unassigned.node_left.delayed_timeout\", DEFAULT_DELAYED_NODE_LEFT_TIMEOUT, Property.Dynamic,\n+ Setting.timeSetting(\"index.unassigned.node_left.delayed_timeout\", TimeValue.timeValueMinutes(1), Property.Dynamic,\n Property.IndexScope);\n /**\n * Reason why the shard is in unassigned state.\n@@ -112,19 +112,19 @@ public enum Reason {\n private final Reason reason;\n private final long unassignedTimeMillis; // used for display and log messages, in milliseconds\n private final long unassignedTimeNanos; // in nanoseconds, used to calculate delay for delayed shard allocation\n- private final long lastComputedLeftDelayNanos; // how long to delay shard allocation, not serialized (always positive, 0 means no delay)\n+ private final boolean delayed; // if allocation of this shard is delayed due to INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING\n private 
final String message;\n private final Throwable failure;\n private final int failedAllocations;\n \n /**\n- * creates an UnassingedInfo object based **current** time\n+ * creates an UnassignedInfo object based on **current** time\n *\n * @param reason the cause for making this shard unassigned. See {@link Reason} for more information.\n * @param message more information about cause.\n **/\n public UnassignedInfo(Reason reason, String message) {\n- this(reason, message, null, reason == Reason.ALLOCATION_FAILED ? 1 : 0, System.nanoTime(), System.currentTimeMillis());\n+ this(reason, message, null, reason == Reason.ALLOCATION_FAILED ? 1 : 0, System.nanoTime(), System.currentTimeMillis(), false);\n }\n \n /**\n@@ -133,28 +133,21 @@ public UnassignedInfo(Reason reason, String message) {\n * @param failure the shard level failure that caused this shard to be unassigned, if exists.\n * @param unassignedTimeNanos the time to use as the base for any delayed re-assignment calculation\n * @param unassignedTimeMillis the time of unassignment used to display to in our reporting.\n+ * @param delayed if allocation of this shard is delayed due to INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.\n */\n- public UnassignedInfo(Reason reason, @Nullable String message, @Nullable Throwable failure, int failedAllocations, long unassignedTimeNanos, long unassignedTimeMillis) {\n+ public UnassignedInfo(Reason reason, @Nullable String message, @Nullable Throwable failure, int failedAllocations,\n+ long unassignedTimeNanos, long unassignedTimeMillis, boolean delayed) {\n this.reason = reason;\n this.unassignedTimeMillis = unassignedTimeMillis;\n this.unassignedTimeNanos = unassignedTimeNanos;\n- this.lastComputedLeftDelayNanos = 0L;\n+ this.delayed = delayed;\n this.message = message;\n this.failure = failure;\n this.failedAllocations = failedAllocations;\n- assert (failedAllocations > 0) == (reason == Reason.ALLOCATION_FAILED):\n+ assert (failedAllocations > 0) == (reason == Reason.ALLOCATION_FAILED) :\n \"failedAllocations: \" + failedAllocations + \" for reason \" + reason;\n assert !(message == null && failure != null) : \"provide a message if a failure exception is provided\";\n- }\n-\n- public UnassignedInfo(UnassignedInfo unassignedInfo, long newComputedLeftDelayNanos) {\n- this.reason = unassignedInfo.reason;\n- this.unassignedTimeMillis = unassignedInfo.unassignedTimeMillis;\n- this.unassignedTimeNanos = unassignedInfo.unassignedTimeNanos;\n- this.lastComputedLeftDelayNanos = newComputedLeftDelayNanos;\n- this.message = unassignedInfo.message;\n- this.failure = unassignedInfo.failure;\n- this.failedAllocations = unassignedInfo.failedAllocations;\n+ assert !(delayed && reason != Reason.NODE_LEFT) : \"shard can only be delayed if it is unassigned due to a node leaving\";\n }\n \n public UnassignedInfo(StreamInput in) throws IOException {\n@@ -163,7 +156,7 @@ public UnassignedInfo(StreamInput in) throws IOException {\n // As System.nanoTime() cannot be compared across different JVMs, reset it to now.\n // This means that in master fail-over situations, elapsed delay time is forgotten.\n this.unassignedTimeNanos = System.nanoTime();\n- this.lastComputedLeftDelayNanos = 0L;\n+ this.delayed = in.readBoolean();\n this.message = in.readOptionalString();\n this.failure = in.readThrowable();\n this.failedAllocations = in.readVInt();\n@@ -173,6 +166,7 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeByte((byte) reason.ordinal());\n out.writeLong(unassignedTimeMillis);\n // Do not serialize 
unassignedTimeNanos as System.nanoTime() cannot be compared across different JVMs\n+ out.writeBoolean(delayed);\n out.writeOptionalString(message);\n out.writeThrowable(failure);\n out.writeVInt(failedAllocations);\n@@ -185,7 +179,16 @@ public UnassignedInfo readFrom(StreamInput in) throws IOException {\n /**\n * Returns the number of previously failed allocations of this shard.\n */\n- public int getNumFailedAllocations() { return failedAllocations; }\n+ public int getNumFailedAllocations() {\n+ return failedAllocations;\n+ }\n+\n+ /**\n+ * Returns true if allocation of this shard is delayed due to {@link #INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING}\n+ */\n+ public boolean isDelayed() {\n+ return delayed;\n+ }\n \n /**\n * The reason why the shard is unassigned.\n@@ -239,50 +242,16 @@ public String getDetails() {\n }\n \n /**\n- * The allocation delay value in nano seconds associated with the index (defaulting to node settings if not set).\n- */\n- public long getAllocationDelayTimeoutSettingNanos(Settings settings, Settings indexSettings) {\n- if (reason != Reason.NODE_LEFT) {\n- return 0;\n- }\n- TimeValue delayTimeout = INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.get(indexSettings, settings);\n- return Math.max(0L, delayTimeout.nanos());\n- }\n-\n- /**\n- * The delay in nanoseconds until this unassigned shard can be reassigned. This value is cached and might be slightly out-of-date.\n- * See also the {@link #updateDelay(long, Settings, Settings)} method.\n- */\n- public long getLastComputedLeftDelayNanos() {\n- return lastComputedLeftDelayNanos;\n- }\n-\n- /**\n- * Calculates the delay left based on current time (in nanoseconds) and index/node settings.\n+ * Calculates the delay left based on current time (in nanoseconds) and the delay defined by the index settings.\n+ * Only relevant if shard is effectively delayed (see {@link #isDelayed()})\n+ * Returns 0 if delay is negative\n *\n * @return calculated delay in nanoseconds\n */\n- public long getRemainingDelay(final long nanoTimeNow, final Settings settings, final Settings indexSettings) {\n- final long delayTimeoutNanos = getAllocationDelayTimeoutSettingNanos(settings, indexSettings);\n- if (delayTimeoutNanos == 0L) {\n- return 0L;\n- } else {\n- assert nanoTimeNow >= unassignedTimeNanos;\n- return Math.max(0L, delayTimeoutNanos - (nanoTimeNow - unassignedTimeNanos));\n- }\n- }\n-\n- /**\n- * Creates new UnassignedInfo object if delay needs updating.\n- *\n- * @return new Unassigned with updated delay, or this if no change in delay\n- */\n- public UnassignedInfo updateDelay(final long nanoTimeNow, final Settings settings, final Settings indexSettings) {\n- final long newComputedLeftDelayNanos = getRemainingDelay(nanoTimeNow, settings, indexSettings);\n- if (lastComputedLeftDelayNanos == newComputedLeftDelayNanos) {\n- return this;\n- }\n- return new UnassignedInfo(this, newComputedLeftDelayNanos);\n+ public long getRemainingDelay(final long nanoTimeNow, final Settings indexSettings) {\n+ long delayTimeoutNanos = INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.get(indexSettings).nanos();\n+ assert nanoTimeNow >= unassignedTimeNanos;\n+ return Math.max(0L, delayTimeoutNanos - (nanoTimeNow - unassignedTimeNanos));\n }\n \n /**\n@@ -291,49 +260,34 @@ public UnassignedInfo updateDelay(final long nanoTimeNow, final Settings setting\n public static int getNumberOfDelayedUnassigned(ClusterState state) {\n int count = 0;\n for (ShardRouting shard : state.routingTable().shardsWithState(ShardRoutingState.UNASSIGNED)) {\n- if (shard.primary() == false) {\n- 
long delay = shard.unassignedInfo().getLastComputedLeftDelayNanos();\n- if (delay > 0) {\n- count++;\n- }\n+ if (shard.unassignedInfo().isDelayed()) {\n+ count++;\n }\n }\n return count;\n }\n \n /**\n- * Finds the smallest delay expiration setting in nanos of all unassigned shards that are still delayed. Returns 0 if there are none.\n+ * Finds the next (closest) delay expiration of an delayed shard in nanoseconds based on current time.\n+ * Returns 0 if delay is negative.\n+ * Returns -1 if no delayed shard is found.\n */\n- public static long findSmallestDelayedAllocationSettingNanos(Settings settings, ClusterState state) {\n- long minDelaySetting = Long.MAX_VALUE;\n- for (ShardRouting shard : state.routingTable().shardsWithState(ShardRoutingState.UNASSIGNED)) {\n- if (shard.primary() == false) {\n- IndexMetaData indexMetaData = state.metaData().index(shard.getIndexName());\n- boolean delayed = shard.unassignedInfo().getLastComputedLeftDelayNanos() > 0;\n- long delayTimeoutSetting = shard.unassignedInfo().getAllocationDelayTimeoutSettingNanos(settings, indexMetaData.getSettings());\n- if (delayed && delayTimeoutSetting > 0 && delayTimeoutSetting < minDelaySetting) {\n- minDelaySetting = delayTimeoutSetting;\n+ public static long findNextDelayedAllocation(long currentNanoTime, ClusterState state) {\n+ MetaData metaData = state.metaData();\n+ RoutingTable routingTable = state.routingTable();\n+ long nextDelayNanos = Long.MAX_VALUE;\n+ for (ShardRouting shard : routingTable.shardsWithState(ShardRoutingState.UNASSIGNED)) {\n+ UnassignedInfo unassignedInfo = shard.unassignedInfo();\n+ if (unassignedInfo.isDelayed()) {\n+ Settings indexSettings = metaData.index(shard.index()).getSettings();\n+ // calculate next time to schedule\n+ final long newComputedLeftDelayNanos = unassignedInfo.getRemainingDelay(currentNanoTime, indexSettings);\n+ if (newComputedLeftDelayNanos < nextDelayNanos) {\n+ nextDelayNanos = newComputedLeftDelayNanos;\n }\n }\n }\n- return minDelaySetting == Long.MAX_VALUE ? 0L : minDelaySetting;\n- }\n-\n-\n- /**\n- * Finds the next (closest) delay expiration of an unassigned shard in nanoseconds. Returns 0 if there are none.\n- */\n- public static long findNextDelayedAllocationIn(ClusterState state) {\n- long nextDelay = Long.MAX_VALUE;\n- for (ShardRouting shard : state.routingTable().shardsWithState(ShardRoutingState.UNASSIGNED)) {\n- if (shard.primary() == false) {\n- long nextShardDelay = shard.unassignedInfo().getLastComputedLeftDelayNanos();\n- if (nextShardDelay > 0 && nextShardDelay < nextDelay) {\n- nextDelay = nextShardDelay;\n- }\n- }\n- }\n- return nextDelay == Long.MAX_VALUE ? 0L : nextDelay;\n+ return nextDelayNanos == Long.MAX_VALUE ? 
-1L : nextDelayNanos;\n }\n \n public String shortSummary() {\n@@ -343,6 +297,7 @@ public String shortSummary() {\n if (failedAllocations > 0) {\n sb.append(\", failed_attempts[\").append(failedAllocations).append(\"]\");\n }\n+ sb.append(\", delayed=\").append(delayed);\n String details = getDetails();\n \n if (details != null) {\n@@ -364,6 +319,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n if (failedAllocations > 0) {\n builder.field(\"failed_attempts\", failedAllocations);\n }\n+ builder.field(\"delayed\", delayed);\n String details = getDetails();\n if (details != null) {\n builder.field(\"details\", details);\n@@ -386,19 +342,26 @@ public boolean equals(Object o) {\n if (unassignedTimeMillis != that.unassignedTimeMillis) {\n return false;\n }\n+ if (delayed != that.delayed) {\n+ return false;\n+ }\n+ if (failedAllocations != that.failedAllocations) {\n+ return false;\n+ }\n if (reason != that.reason) {\n return false;\n }\n if (message != null ? !message.equals(that.message) : that.message != null) {\n return false;\n }\n return !(failure != null ? !failure.equals(that.failure) : that.failure != null);\n-\n }\n \n @Override\n public int hashCode() {\n int result = reason != null ? reason.hashCode() : 0;\n+ result = 31 * result + Boolean.hashCode(delayed);\n+ result = 31 * result + Integer.hashCode(failedAllocations);\n result = 31 * result + Long.hashCode(unassignedTimeMillis);\n result = 31 * result + (message != null ? message.hashCode() : 0);\n result = 31 * result + (failure != null ? failure.hashCode() : 0);", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java", "status": "modified" }, { "diff": "@@ -53,6 +53,8 @@\n import java.util.function.Function;\n import java.util.stream.Collectors;\n \n+import static org.elasticsearch.cluster.routing.UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING;\n+\n \n /**\n * This service manages the node allocation of a cluster. 
For this reason the\n@@ -90,7 +92,7 @@ public RoutingAllocation.Result applyStartedShards(ClusterState clusterState, Li\n RoutingNodes routingNodes = getMutableRoutingNodes(clusterState);\n // shuffle the unassigned nodes, just so we won't have things like poison failed shards\n routingNodes.unassigned().shuffle();\n- StartedRerouteAllocation allocation = new StartedRerouteAllocation(allocationDeciders, routingNodes, clusterState, startedShards, clusterInfoService.getClusterInfo());\n+ StartedRerouteAllocation allocation = new StartedRerouteAllocation(allocationDeciders, routingNodes, clusterState, startedShards, clusterInfoService.getClusterInfo(), currentNanoTime());\n boolean changed = applyStartedShards(routingNodes, startedShards);\n if (!changed) {\n return new RoutingAllocation.Result(false, clusterState.routingTable(), clusterState.metaData());\n@@ -99,28 +101,27 @@ public RoutingAllocation.Result applyStartedShards(ClusterState clusterState, Li\n if (withReroute) {\n reroute(allocation);\n }\n- final RoutingAllocation.Result result = buildChangedResult(clusterState.metaData(), clusterState.routingTable(), routingNodes);\n-\n String startedShardsAsString = firstListElementsToCommaDelimitedString(startedShards, s -> s.shardId().toString());\n- logClusterHealthStateChange(\n- new ClusterStateHealth(clusterState),\n- new ClusterStateHealth(clusterState.metaData(), result.routingTable()),\n- \"shards started [\" + startedShardsAsString + \"] ...\"\n- );\n- return result;\n-\n+ return buildResultAndLogHealthChange(allocation, \"shards started [\" + startedShardsAsString + \"] ...\");\n }\n \n- protected RoutingAllocation.Result buildChangedResult(MetaData oldMetaData, RoutingTable oldRoutingTable, RoutingNodes newRoutingNodes) {\n- return buildChangedResult(oldMetaData, oldRoutingTable, newRoutingNodes, new RoutingExplanations());\n+ protected RoutingAllocation.Result buildResultAndLogHealthChange(RoutingAllocation allocation, String reason) {\n+ return buildResultAndLogHealthChange(allocation, reason, new RoutingExplanations());\n \n }\n \n- protected RoutingAllocation.Result buildChangedResult(MetaData oldMetaData, RoutingTable oldRoutingTable, RoutingNodes newRoutingNodes,\n- RoutingExplanations explanations) {\n+ protected RoutingAllocation.Result buildResultAndLogHealthChange(RoutingAllocation allocation, String reason, RoutingExplanations explanations) {\n+ MetaData oldMetaData = allocation.metaData();\n+ RoutingTable oldRoutingTable = allocation.routingTable();\n+ RoutingNodes newRoutingNodes = allocation.routingNodes();\n final RoutingTable newRoutingTable = new RoutingTable.Builder().updateNodes(newRoutingNodes).build();\n MetaData newMetaData = updateMetaDataWithRoutingTable(oldMetaData, oldRoutingTable, newRoutingTable);\n assert newRoutingTable.validate(newMetaData); // validates the routing table is coherent with the cluster state metadata\n+ logClusterHealthStateChange(\n+ new ClusterStateHealth(allocation.metaData(), allocation.routingTable()),\n+ new ClusterStateHealth(newMetaData, newRoutingTable),\n+ reason\n+ );\n return new RoutingAllocation.Result(true, newRoutingTable, newMetaData, explanations);\n }\n \n@@ -216,7 +217,8 @@ public RoutingAllocation.Result applyFailedShards(ClusterState clusterState, Lis\n RoutingNodes routingNodes = getMutableRoutingNodes(clusterState);\n // shuffle the unassigned nodes, just so we won't have things like poison failed shards\n routingNodes.unassigned().shuffle();\n- FailedRerouteAllocation allocation = new 
FailedRerouteAllocation(allocationDeciders, routingNodes, clusterState, failedShards, clusterInfoService.getClusterInfo());\n+ long currentNanoTime = currentNanoTime();\n+ FailedRerouteAllocation allocation = new FailedRerouteAllocation(allocationDeciders, routingNodes, clusterState, failedShards, clusterInfoService.getClusterInfo(), currentNanoTime);\n boolean changed = false;\n // as failing primaries also fail associated replicas, we fail replicas first here so that their nodes are added to ignore list\n List<FailedRerouteAllocation.FailedShard> orderedFailedShards = new ArrayList<>(failedShards);\n@@ -225,21 +227,38 @@ public RoutingAllocation.Result applyFailedShards(ClusterState clusterState, Lis\n UnassignedInfo unassignedInfo = failedShard.shard.unassignedInfo();\n final int failedAllocations = unassignedInfo != null ? unassignedInfo.getNumFailedAllocations() : 0;\n changed |= applyFailedShard(allocation, failedShard.shard, true, new UnassignedInfo(UnassignedInfo.Reason.ALLOCATION_FAILED, failedShard.message, failedShard.failure,\n- failedAllocations + 1, System.nanoTime(), System.currentTimeMillis()));\n+ failedAllocations + 1, currentNanoTime, System.currentTimeMillis(), false));\n }\n if (!changed) {\n return new RoutingAllocation.Result(false, clusterState.routingTable(), clusterState.metaData());\n }\n gatewayAllocator.applyFailedShards(allocation);\n reroute(allocation);\n- final RoutingAllocation.Result result = buildChangedResult(clusterState.metaData(), clusterState.routingTable(), routingNodes);\n String failedShardsAsString = firstListElementsToCommaDelimitedString(failedShards, s -> s.shard.shardId().toString());\n- logClusterHealthStateChange(\n- new ClusterStateHealth(clusterState),\n- new ClusterStateHealth(clusterState.getMetaData(), result.routingTable()),\n- \"shards failed [\" + failedShardsAsString + \"] ...\"\n- );\n- return result;\n+ return buildResultAndLogHealthChange(allocation, \"shards failed [\" + failedShardsAsString + \"] ...\");\n+ }\n+\n+ /**\n+ * Removes delay markers from unassigned shards based on current time stamp. 
Returns true if markers were removed.\n+ */\n+ private boolean removeDelayMarkers(RoutingAllocation allocation) {\n+ final RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = allocation.routingNodes().unassigned().iterator();\n+ final MetaData metaData = allocation.metaData();\n+ boolean changed = false;\n+ while (unassignedIterator.hasNext()) {\n+ ShardRouting shardRouting = unassignedIterator.next();\n+ UnassignedInfo unassignedInfo = shardRouting.unassignedInfo();\n+ if (unassignedInfo.isDelayed()) {\n+ final long newComputedLeftDelayNanos = unassignedInfo.getRemainingDelay(allocation.getCurrentNanoTime(),\n+ metaData.getIndexSafe(shardRouting.index()).getSettings());\n+ if (newComputedLeftDelayNanos == 0) {\n+ changed = true;\n+ unassignedIterator.updateUnassignedInfo(new UnassignedInfo(unassignedInfo.getReason(), unassignedInfo.getMessage(), unassignedInfo.getFailure(),\n+ unassignedInfo.getNumFailedAllocations(), unassignedInfo.getUnassignedTimeInNanos(), unassignedInfo.getUnassignedTimeInMillis(), false));\n+ }\n+ }\n+ }\n+ return changed;\n }\n \n /**\n@@ -276,13 +295,7 @@ public RoutingAllocation.Result reroute(ClusterState clusterState, AllocationCom\n // the assumption is that commands will move / act on shards (or fail through exceptions)\n // so, there will always be shard \"movements\", so no need to check on reroute\n reroute(allocation);\n- RoutingAllocation.Result result = buildChangedResult(clusterState.metaData(), clusterState.routingTable(), routingNodes, explanations);\n- logClusterHealthStateChange(\n- new ClusterStateHealth(clusterState),\n- new ClusterStateHealth(clusterState.getMetaData(), result.routingTable()),\n- \"reroute commands\"\n- );\n- return result;\n+ return buildResultAndLogHealthChange(allocation, \"reroute commands\", explanations);\n }\n \n \n@@ -310,13 +323,7 @@ protected RoutingAllocation.Result reroute(ClusterState clusterState, String rea\n if (!reroute(allocation)) {\n return new RoutingAllocation.Result(false, clusterState.routingTable(), clusterState.metaData());\n }\n- RoutingAllocation.Result result = buildChangedResult(clusterState.metaData(), clusterState.routingTable(), routingNodes);\n- logClusterHealthStateChange(\n- new ClusterStateHealth(clusterState),\n- new ClusterStateHealth(clusterState.getMetaData(), result.routingTable()),\n- reason\n- );\n- return result;\n+ return buildResultAndLogHealthChange(allocation, reason);\n }\n \n private void logClusterHealthStateChange(ClusterStateHealth previousStateHealth, ClusterStateHealth newStateHealth, String reason) {\n@@ -341,8 +348,7 @@ private boolean reroute(RoutingAllocation allocation) {\n \n // now allocate all the unassigned to available nodes\n if (allocation.routingNodes().unassigned().size() > 0) {\n- updateLeftDelayOfUnassignedShards(allocation, settings);\n-\n+ changed |= removeDelayMarkers(allocation);\n changed |= gatewayAllocator.allocateUnassigned(allocation);\n }\n \n@@ -351,22 +357,6 @@ private boolean reroute(RoutingAllocation allocation) {\n return changed;\n }\n \n- // public for testing\n- public static void updateLeftDelayOfUnassignedShards(RoutingAllocation allocation, Settings settings) {\n- final RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = allocation.routingNodes().unassigned().iterator();\n- final MetaData metaData = allocation.metaData();\n- while (unassignedIterator.hasNext()) {\n- ShardRouting shardRouting = unassignedIterator.next();\n- final IndexMetaData indexMetaData = 
metaData.getIndexSafe(shardRouting.index());\n- UnassignedInfo previousUnassignedInfo = shardRouting.unassignedInfo();\n- UnassignedInfo updatedUnassignedInfo = previousUnassignedInfo.updateDelay(allocation.getCurrentNanoTime(), settings,\n- indexMetaData.getSettings());\n- if (updatedUnassignedInfo != previousUnassignedInfo) { // reference equality!\n- unassignedIterator.updateUnassignedInfo(updatedUnassignedInfo);\n- }\n- }\n- }\n-\n private boolean electPrimariesAndUnassignedDanglingReplicas(RoutingAllocation allocation) {\n boolean changed = false;\n final RoutingNodes routingNodes = allocation.routingNodes();\n@@ -436,8 +426,10 @@ private boolean deassociateDeadNodes(RoutingAllocation allocation) {\n changed = true;\n // now, go over all the shards routing on the node, and fail them\n for (ShardRouting shardRouting : node.copyShards()) {\n- UnassignedInfo unassignedInfo = new UnassignedInfo(UnassignedInfo.Reason.NODE_LEFT, \"node_left[\" + node.nodeId() + \"]\", null,\n- 0, allocation.getCurrentNanoTime(), System.currentTimeMillis());\n+ final IndexMetaData indexMetaData = allocation.metaData().getIndexSafe(shardRouting.index());\n+ boolean delayed = INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.get(indexMetaData.getSettings()).nanos() > 0;\n+ UnassignedInfo unassignedInfo = new UnassignedInfo(UnassignedInfo.Reason.NODE_LEFT, \"node_left[\" + node.nodeId() + \"]\",\n+ null, 0, allocation.getCurrentNanoTime(), System.currentTimeMillis(), delayed);\n applyFailedShard(allocation, shardRouting, false, unassignedInfo);\n }\n // its a dead node, remove it, note, its important to remove it *after* we apply failed shard\n@@ -458,7 +450,7 @@ private boolean failReplicasForUnassignedPrimary(RoutingAllocation allocation, S\n for (ShardRouting routing : replicas) {\n changed |= applyFailedShard(allocation, routing, false,\n new UnassignedInfo(UnassignedInfo.Reason.PRIMARY_FAILED, \"primary failed while replica initializing\",\n- null, 0, allocation.getCurrentNanoTime(), System.currentTimeMillis()));\n+ null, 0, allocation.getCurrentNanoTime(), System.currentTimeMillis(), false));\n }\n return changed;\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java", "status": "modified" }, { "diff": "@@ -57,8 +57,8 @@ public String toString() {\n \n private final List<FailedShard> failedShards;\n \n- public FailedRerouteAllocation(AllocationDeciders deciders, RoutingNodes routingNodes, ClusterState clusterState, List<FailedShard> failedShards, ClusterInfo clusterInfo) {\n- super(deciders, routingNodes, clusterState, clusterInfo, System.nanoTime(), false);\n+ public FailedRerouteAllocation(AllocationDeciders deciders, RoutingNodes routingNodes, ClusterState clusterState, List<FailedShard> failedShards, ClusterInfo clusterInfo, long currentNanoTime) {\n+ super(deciders, routingNodes, clusterState, clusterInfo, currentNanoTime, false);\n this.failedShards = failedShards;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/FailedRerouteAllocation.java", "status": "modified" }, { "diff": "@@ -56,7 +56,7 @@ public static class Result {\n \n private final MetaData metaData;\n \n- private RoutingExplanations explanations = new RoutingExplanations();\n+ private final RoutingExplanations explanations;\n \n /**\n * Creates a new {@link RoutingAllocation.Result}\n@@ -65,9 +65,7 @@ public static class Result {\n * @param metaData the {@link MetaData} this Result references\n */\n public Result(boolean changed, RoutingTable routingTable, 
MetaData metaData) {\n- this.changed = changed;\n- this.routingTable = routingTable;\n- this.metaData = metaData;\n+ this(changed, routingTable, metaData, new RoutingExplanations());\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingAllocation.java", "status": "modified" }, { "diff": "@@ -35,8 +35,8 @@ public class StartedRerouteAllocation extends RoutingAllocation {\n \n private final List<? extends ShardRouting> startedShards;\n \n- public StartedRerouteAllocation(AllocationDeciders deciders, RoutingNodes routingNodes, ClusterState clusterState, List<? extends ShardRouting> startedShards, ClusterInfo clusterInfo) {\n- super(deciders, routingNodes, clusterState, clusterInfo, System.nanoTime(), false);\n+ public StartedRerouteAllocation(AllocationDeciders deciders, RoutingNodes routingNodes, ClusterState clusterState, List<? extends ShardRouting> startedShards, ClusterInfo clusterInfo, long currentNanoTime) {\n+ super(deciders, routingNodes, clusterState, clusterInfo, currentNanoTime, false);\n this.startedShards = startedShards;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/StartedRerouteAllocation.java", "status": "modified" }, { "diff": "@@ -125,7 +125,7 @@ public RerouteExplanation execute(RoutingAllocation allocation, boolean explain)\n // we need to move the unassigned info back to treat it as if it was index creation\n unassignedInfoToUpdate = new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED,\n \"force empty allocation from previous reason \" + shardRouting.unassignedInfo().getReason() + \", \" + shardRouting.unassignedInfo().getMessage(),\n- shardRouting.unassignedInfo().getFailure(), 0, System.nanoTime(), System.currentTimeMillis());\n+ shardRouting.unassignedInfo().getFailure(), 0, System.nanoTime(), System.currentTimeMillis(), false);\n }\n \n initializeUnassignedShard(allocation, routingNodes, routingNode, shardRouting, unassignedInfoToUpdate);", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateEmptyPrimaryAllocationCommand.java", "status": "modified" }, { "diff": "@@ -455,7 +455,7 @@ public TimeValue getMaxTaskWaitTime() {\n }\n \n /** asserts that the current thread is the cluster state update thread */\n- public boolean assertClusterStateThread() {\n+ public static boolean assertClusterStateThread() {\n assert Thread.currentThread().getName().contains(ClusterService.UPDATE_THREAD_NAME) :\n \"not called from the cluster state update thread\";\n return true;", "filename": "core/src/main/java/org/elasticsearch/cluster/service/ClusterService.java", "status": "modified" }, { "diff": "@@ -89,7 +89,7 @@ public void waitToBeElectedAsMaster(int requiredMasterJoins, TimeValue timeValue\n assert accumulateJoins.get() : \"waitToBeElectedAsMaster is called we are not accumulating joins\";\n \n final CountDownLatch done = new CountDownLatch(1);\n- final ElectionContext newContext = new ElectionContext(callback, requiredMasterJoins, clusterService) {\n+ final ElectionContext newContext = new ElectionContext(callback, requiredMasterJoins) {\n @Override\n void onClose() {\n if (electionContext.compareAndSet(this, null)) {\n@@ -307,24 +307,22 @@ public interface ElectionCallback {\n static abstract class ElectionContext implements ElectionCallback {\n private final ElectionCallback callback;\n private final int requiredMasterJoins;\n- private final ClusterService clusterService;\n \n /** set to true after enough joins have been seen and a cluster update 
task is submitted to become master */\n final AtomicBoolean pendingSetAsMasterTask = new AtomicBoolean();\n final AtomicBoolean closed = new AtomicBoolean();\n \n- ElectionContext(ElectionCallback callback, int requiredMasterJoins, ClusterService clusterService) {\n+ ElectionContext(ElectionCallback callback, int requiredMasterJoins) {\n this.callback = callback;\n this.requiredMasterJoins = requiredMasterJoins;\n- this.clusterService = clusterService;\n }\n \n abstract void onClose();\n \n @Override\n public void onElectedAsMaster(ClusterState state) {\n assert pendingSetAsMasterTask.get() : \"onElectedAsMaster called but pendingSetAsMasterTask is not set\";\n- assertClusterStateThread();\n+ ClusterService.assertClusterStateThread();\n assert state.nodes().isLocalNodeElectedMaster() : \"onElectedAsMaster called but local node is not master\";\n if (closed.compareAndSet(false, true)) {\n try {\n@@ -337,7 +335,7 @@ public void onElectedAsMaster(ClusterState state) {\n \n @Override\n public void onFailure(Throwable t) {\n- assertClusterStateThread();\n+ ClusterService.assertClusterStateThread();\n if (closed.compareAndSet(false, true)) {\n try {\n onClose();\n@@ -346,10 +344,6 @@ public void onFailure(Throwable t) {\n }\n }\n }\n-\n- private void assertClusterStateThread() {\n- assert clusterService instanceof ClusterService == false || ((ClusterService) clusterService).assertClusterStateThread();\n- }\n }\n \n ", "filename": "core/src/main/java/org/elasticsearch/discovery/zen/NodeJoinController.java", "status": "modified" }, { "diff": "@@ -1145,14 +1145,14 @@ public boolean joinThreadActive(Thread joinThread) {\n \n /** cleans any running joining thread and calls {@link #rejoin} */\n public ClusterState stopRunningThreadAndRejoin(ClusterState clusterState, String reason) {\n- assertClusterStateThread();\n+ ClusterService.assertClusterStateThread();\n currentJoinThread.set(null);\n return rejoin(clusterState, reason);\n }\n \n /** starts a new joining thread if there is no currently active one and join thread controlling is started */\n public void startNewThreadIfNotRunning() {\n- assertClusterStateThread();\n+ ClusterService.assertClusterStateThread();\n if (joinThreadActive()) {\n return;\n }\n@@ -1185,7 +1185,7 @@ public void run() {\n * If the given thread is not the currently running join thread, the command is ignored.\n */\n public void markThreadAsDoneAndStartNew(Thread joinThread) {\n- assertClusterStateThread();\n+ ClusterService.assertClusterStateThread();\n if (!markThreadAsDone(joinThread)) {\n return;\n }\n@@ -1194,7 +1194,7 @@ public void markThreadAsDoneAndStartNew(Thread joinThread) {\n \n /** marks the given joinThread as completed. 
Returns false if the supplied thread is not the currently active join thread */\n public boolean markThreadAsDone(Thread joinThread) {\n- assertClusterStateThread();\n+ ClusterService.assertClusterStateThread();\n return currentJoinThread.compareAndSet(joinThread, null);\n }\n \n@@ -1210,9 +1210,5 @@ public void start() {\n running.set(true);\n }\n \n- private void assertClusterStateThread() {\n- assert clusterService instanceof ClusterService == false || ((ClusterService) clusterService).assertClusterStateThread();\n- }\n-\n }\n }", "filename": "core/src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java", "status": "modified" }, { "diff": "@@ -108,7 +108,7 @@ public boolean processExistingRecoveries(RoutingAllocation allocation) {\n currentNode, nodeWithHighestMatch);\n it.moveToUnassigned(new UnassignedInfo(UnassignedInfo.Reason.REALLOCATED_REPLICA,\n \"existing allocation of replica to [\" + currentNode + \"] cancelled, sync id match found on node [\" + nodeWithHighestMatch + \"]\",\n- null, 0, allocation.getCurrentNanoTime(), System.currentTimeMillis()));\n+ null, 0, allocation.getCurrentNanoTime(), System.currentTimeMillis(), false));\n changed = true;\n }\n }\n@@ -179,7 +179,7 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n }\n } else if (matchingNodes.hasAnyData() == false) {\n // if we didn't manage to find *any* data (regardless of matching sizes), check if the allocation of the replica shard needs to be delayed\n- changed |= ignoreUnassignedIfDelayed(unassignedIterator, shard);\n+ ignoreUnassignedIfDelayed(unassignedIterator, shard);\n }\n }\n return changed;\n@@ -195,21 +195,16 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n *\n * @param unassignedIterator iterator over unassigned shards\n * @param shard the shard which might be delayed\n- * @return true iff allocation is delayed for this shard\n */\n- public boolean ignoreUnassignedIfDelayed(RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator, ShardRouting shard) {\n- // calculate delay and store it in UnassignedInfo to be used by RoutingService\n- long delay = shard.unassignedInfo().getLastComputedLeftDelayNanos();\n- if (delay > 0) {\n- logger.debug(\"[{}][{}]: delaying allocation of [{}] for [{}]\", shard.index(), shard.id(), shard, TimeValue.timeValueNanos(delay));\n+ public void ignoreUnassignedIfDelayed(RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator, ShardRouting shard) {\n+ if (shard.unassignedInfo().isDelayed()) {\n+ logger.debug(\"{}: allocation of [{}] is delayed\", shard.shardId(), shard);\n /**\n * mark it as changed, since we want to kick a publishing to schedule future allocation,\n * see {@link org.elasticsearch.cluster.routing.RoutingService#clusterChanged(ClusterChangedEvent)}).\n */\n unassignedIterator.removeAndIgnore();\n- return true;\n }\n- return false;\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/gateway/ReplicaShardAllocator.java", "status": "modified" }, { "diff": "@@ -21,19 +21,18 @@\n \n import org.apache.lucene.index.CorruptIndexException;\n import org.elasticsearch.ElasticsearchException;\n-import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresResponse;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.routing.ShardRouting;\n-import org.elasticsearch.cluster.routing.ShardRoutingHelper;\n import 
org.elasticsearch.cluster.routing.UnassignedInfo;\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.DummyTransportAddress;\n+import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -43,10 +42,8 @@\n \n import java.io.IOException;\n import java.util.Arrays;\n-import java.util.Collections;\n import java.util.HashMap;\n import java.util.HashSet;\n-import java.util.List;\n import java.util.Map;\n import java.util.Set;\n \n@@ -61,13 +58,11 @@ public final class ClusterAllocationExplanationTests extends ESTestCase {\n private Index i = new Index(\"foo\", \"uuid\");\n private ShardRouting primaryShard = ShardRouting.newUnassigned(new ShardId(i, 0), null, true,\n new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, \"foo\"));\n- private ShardRouting replicaShard = ShardRouting.newUnassigned(new ShardId(i, 0), null, false,\n- new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, \"foo\"));\n private IndexMetaData indexMetaData = IndexMetaData.builder(\"foo\")\n .settings(Settings.builder()\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n .put(IndexMetaData.SETTING_INDEX_UUID, \"uuid\"))\n- .putActiveAllocationIds(0, new HashSet<String>(Arrays.asList(\"aid1\", \"aid2\")))\n+ .putActiveAllocationIds(0, Sets.newHashSet(\"aid1\", \"aid2\"))\n .numberOfShards(1)\n .numberOfReplicas(1)\n .build();\n@@ -80,7 +75,6 @@ public final class ClusterAllocationExplanationTests extends ESTestCase {\n noDecision.add(Decision.single(Decision.Type.NO, \"no label\", \"no thanks\"));\n }\n \n-\n private void assertExplanations(NodeExplanation ne, String finalExplanation, ClusterAllocationExplanation.FinalDecision finalDecision,\n ClusterAllocationExplanation.StoreCopy storeCopy) {\n assertEquals(finalExplanation, ne.getFinalExplanation());\n@@ -195,6 +189,7 @@ public void testDecisionEquality() {\n \n public void testExplanationSerialization() throws Exception {\n ShardId shard = new ShardId(\"test\", \"uuid\", 0);\n+ long allocationDelay = randomIntBetween(0, 500);\n long remainingDelay = randomIntBetween(0, 500);\n Map<DiscoveryNode, NodeExplanation> nodeExplanations = new HashMap<>(1);\n Float nodeWeight = randomFloat();\n@@ -207,7 +202,7 @@ public void testExplanationSerialization() throws Exception {\n yesDecision, nodeWeight, storeStatus, \"\", activeAllocationIds, false);\n nodeExplanations.put(ne.getNode(), ne);\n ClusterAllocationExplanation cae = new ClusterAllocationExplanation(shard, true,\n- \"assignedNode\", remainingDelay, null, false, nodeExplanations);\n+ \"assignedNode\", allocationDelay, remainingDelay, null, false, nodeExplanations);\n BytesStreamOutput out = new BytesStreamOutput();\n cae.writeTo(out);\n StreamInput in = StreamInput.wrap(out.bytes());\n@@ -217,6 +212,7 @@ public void testExplanationSerialization() throws Exception {\n assertTrue(cae2.isAssigned());\n assertEquals(\"assignedNode\", cae2.getAssignedNodeId());\n assertNull(cae2.getUnassignedInfo());\n+ assertEquals(allocationDelay, cae2.getAllocationDelayMillis());\n assertEquals(remainingDelay, cae2.getRemainingDelayMillis());\n for (Map.Entry<DiscoveryNode, NodeExplanation> entry : 
cae2.getNodeExplanations().entrySet()) {\n DiscoveryNode node = entry.getKey();\n@@ -230,7 +226,6 @@ public void testExplanationSerialization() throws Exception {\n \n public void testExplanationToXContent() throws Exception {\n ShardId shardId = new ShardId(\"foo\", \"uuid\", 0);\n- long remainingDelay = 42;\n Decision.Multi d = new Decision.Multi();\n d.add(Decision.single(Decision.Type.NO, \"no label\", \"because I said no\"));\n d.add(Decision.single(Decision.Type.YES, \"yes label\", \"yes please\"));\n@@ -245,7 +240,7 @@ public void testExplanationToXContent() throws Exception {\n Map<DiscoveryNode, NodeExplanation> nodeExplanations = new HashMap<>(1);\n nodeExplanations.put(ne.getNode(), ne);\n ClusterAllocationExplanation cae = new ClusterAllocationExplanation(shardId, true,\n- \"assignedNode\", remainingDelay, null, false, nodeExplanations);\n+ \"assignedNode\", 42, 42, null, false, nodeExplanations);\n XContentBuilder builder = XContentFactory.jsonBuilder();\n cae.toXContent(builder, ToXContent.EMPTY_PARAMS);\n assertEquals(\"{\\\"shard\\\":{\\\"index\\\":\\\"foo\\\",\\\"index_uuid\\\":\\\"uuid\\\",\\\"id\\\":0,\\\"primary\\\":true},\\\"assigned\\\":true,\" +", "filename": "core/src/test/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplanationTests.java", "status": "modified" }, { "diff": "@@ -108,8 +108,6 @@ public void testDelayedAllocationTimesOut() throws Exception {\n * allocation to a very small value, it kicks the allocation of the unassigned shard\n * even though the node it was hosted on will not come back.\n */\n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/18293\")\n- @TestLogging(\"_root:DEBUG,cluster.routing:TRACE\")\n public void testDelayedAllocationChangeWithSettingTo100ms() throws Exception {\n internalCluster().startNodesAsync(3).get();\n prepareCreate(\"test\").setSettings(Settings.builder()\n@@ -145,12 +143,7 @@ public void testDelayedAllocationChangeWithSettingTo0() throws Exception {\n ensureGreen(\"test\");\n indexRandomData();\n internalCluster().stopRandomNode(InternalTestCluster.nameFilter(findNodeWithShard()));\n- assertBusy(new Runnable() {\n- @Override\n- public void run() {\n- assertThat(client().admin().cluster().prepareState().all().get().getState().getRoutingNodes().unassigned().size() > 0, equalTo(true));\n- }\n- });\n+ assertBusy(() -> assertThat(client().admin().cluster().prepareState().all().get().getState().getRoutingNodes().unassigned().size() > 0, equalTo(true)));\n assertThat(client().admin().cluster().prepareHealth().get().getDelayedUnassignedShards(), equalTo(1));\n assertAcked(client().admin().indices().prepareUpdateSettings(\"test\").setSettings(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), TimeValue.timeValueMillis(0))).get());\n ensureGreen(\"test\");", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/DelayedAllocationIT.java", "status": "modified" }, { "diff": "@@ -0,0 +1,510 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster.routing;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterChangedEvent;\n+import org.elasticsearch.cluster.ClusterInfoService;\n+import org.elasticsearch.cluster.ClusterName;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.ClusterStateUpdateTask;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.cluster.routing.allocation.AllocationService;\n+import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocator;\n+import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders;\n+import org.elasticsearch.cluster.service.ClusterService;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.gateway.GatewayAllocator;\n+import org.elasticsearch.test.ESAllocationTestCase;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.junit.After;\n+import org.junit.Before;\n+\n+import java.util.List;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.concurrent.atomic.AtomicLong;\n+import java.util.concurrent.atomic.AtomicReference;\n+\n+import static java.util.Collections.singleton;\n+import static org.elasticsearch.cluster.routing.DelayedAllocationService.CLUSTER_UPDATE_TASK_SOURCE;\n+import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING;\n+import static org.elasticsearch.cluster.routing.ShardRoutingState.STARTED;\n+import static org.elasticsearch.common.unit.TimeValue.timeValueMillis;\n+import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.mockito.Matchers.any;\n+import static org.mockito.Matchers.eq;\n+import static org.mockito.Mockito.doAnswer;\n+import static org.mockito.Mockito.mock;\n+import static org.mockito.Mockito.verify;\n+import static org.mockito.Mockito.verifyNoMoreInteractions;\n+\n+/**\n+ */\n+public class DelayedAllocationServiceTests extends ESAllocationTestCase {\n+\n+ private TestDelayAllocationService delayedAllocationService;\n+ private MockAllocationService allocationService;\n+ private ClusterService clusterService;\n+ private ThreadPool threadPool;\n+\n+ @Before\n+ public void createDelayedAllocationService() {\n+ threadPool = new ThreadPool(getTestName());\n+ clusterService = mock(ClusterService.class);\n+ allocationService = createAllocationService(Settings.EMPTY, new DelayedShardsMockGatewayAllocator());\n+ delayedAllocationService = new TestDelayAllocationService(Settings.EMPTY, threadPool, clusterService, allocationService);\n+ 
verify(clusterService).addFirst(delayedAllocationService);\n+ }\n+\n+ @After\n+ public void shutdownThreadPool() throws Exception {\n+ terminate(threadPool);\n+ }\n+\n+ public void testNoDelayedUnassigned() throws Exception {\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").settings(settings(Version.CURRENT)\n+ .put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), \"0\"))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .build();\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT)\n+ .metaData(metaData)\n+ .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test\")).build()).build();\n+ clusterState = ClusterState.builder(clusterState)\n+ .nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\")).localNodeId(\"node1\").masterNodeId(\"node1\"))\n+ .build();\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocationService.reroute(clusterState, \"reroute\")).build();\n+ // starting primaries\n+ clusterState = ClusterState.builder(clusterState)\n+ .routingResult(allocationService.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)))\n+ .build();\n+ // starting replicas\n+ clusterState = ClusterState.builder(clusterState)\n+ .routingResult(allocationService.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)))\n+ .build();\n+ assertThat(clusterState.getRoutingNodes().unassigned().size() > 0, equalTo(false));\n+ ClusterState prevState = clusterState;\n+ // remove node2 and reroute\n+ DiscoveryNodes.Builder nodes = DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\");\n+ boolean nodeAvailableForAllocation = randomBoolean();\n+ if (nodeAvailableForAllocation) {\n+ nodes.put(newNode(\"node3\"));\n+ }\n+ clusterState = ClusterState.builder(clusterState).nodes(nodes).build();\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocationService.reroute(clusterState, \"reroute\")).build();\n+ ClusterState newState = clusterState;\n+ List<ShardRouting> unassignedShards = newState.getRoutingTable().shardsWithState(ShardRoutingState.UNASSIGNED);\n+ if (nodeAvailableForAllocation) {\n+ assertThat(unassignedShards.size(), equalTo(0));\n+ } else {\n+ assertThat(unassignedShards.size(), equalTo(1));\n+ assertThat(unassignedShards.get(0).unassignedInfo().isDelayed(), equalTo(false));\n+ }\n+\n+ delayedAllocationService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n+ verifyNoMoreInteractions(clusterService);\n+ assertNull(delayedAllocationService.delayedRerouteTask.get());\n+ }\n+\n+ public void testDelayedUnassignedScheduleReroute() throws Exception {\n+ TimeValue delaySetting = timeValueMillis(100);\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").settings(settings(Version.CURRENT)\n+ .put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), delaySetting))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .build();\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT)\n+ .metaData(metaData)\n+ .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test\")).build()).build();\n+ clusterState = ClusterState.builder(clusterState)\n+ .nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\")).localNodeId(\"node1\").masterNodeId(\"node1\"))\n+ .build();\n+ final long baseTimestampNanos = System.nanoTime();\n+ allocationService.setNanoTimeOverride(baseTimestampNanos);\n+ 
clusterState = ClusterState.builder(clusterState).routingResult(allocationService.reroute(clusterState, \"reroute\")).build();\n+ // starting primaries\n+ clusterState = ClusterState.builder(clusterState)\n+ .routingResult(allocationService.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)))\n+ .build();\n+ // starting replicas\n+ clusterState = ClusterState.builder(clusterState)\n+ .routingResult(allocationService.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)))\n+ .build();\n+ assertFalse(\"no shards should be unassigned\", clusterState.getRoutingNodes().unassigned().size() > 0);\n+ String nodeId = null;\n+ final List<ShardRouting> allShards = clusterState.getRoutingNodes().routingTable().allShards(\"test\");\n+ // we need to find the node with the replica otherwise we will not reroute\n+ for (ShardRouting shardRouting : allShards) {\n+ if (shardRouting.primary() == false) {\n+ nodeId = shardRouting.currentNodeId();\n+ break;\n+ }\n+ }\n+ assertNotNull(nodeId);\n+\n+ // remove node that has replica and reroute\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(nodeId)).build();\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocationService.reroute(clusterState, \"reroute\")).build();\n+ ClusterState stateWithDelayedShard = clusterState;\n+ // make sure the replica is marked as delayed (i.e. not reallocated)\n+ assertEquals(1, UnassignedInfo.getNumberOfDelayedUnassigned(stateWithDelayedShard));\n+ ShardRouting delayedShard = stateWithDelayedShard.getRoutingNodes().unassigned().iterator().next();\n+ assertEquals(baseTimestampNanos, delayedShard.unassignedInfo().getUnassignedTimeInNanos());\n+\n+ // mock ClusterService.submitStateUpdateTask() method\n+ CountDownLatch latch = new CountDownLatch(1);\n+ AtomicReference<ClusterStateUpdateTask> clusterStateUpdateTask = new AtomicReference<>();\n+ doAnswer(invocationOnMock -> {\n+ clusterStateUpdateTask.set((ClusterStateUpdateTask)invocationOnMock.getArguments()[1]);\n+ latch.countDown();\n+ return null;\n+ }).when(clusterService).submitStateUpdateTask(eq(CLUSTER_UPDATE_TASK_SOURCE), any(ClusterStateUpdateTask.class));\n+ assertNull(delayedAllocationService.delayedRerouteTask.get());\n+ long delayUntilClusterChangeEvent = TimeValue.timeValueNanos(randomInt((int)delaySetting.nanos() - 1)).nanos();\n+ long clusterChangeEventTimestampNanos = baseTimestampNanos + delayUntilClusterChangeEvent;\n+ delayedAllocationService.setNanoTimeOverride(clusterChangeEventTimestampNanos);\n+ delayedAllocationService.clusterChanged(new ClusterChangedEvent(\"fake node left\", stateWithDelayedShard, clusterState));\n+\n+ // check that delayed reroute task was created and registered with the proper settings\n+ DelayedAllocationService.DelayedRerouteTask delayedRerouteTask = delayedAllocationService.delayedRerouteTask.get();\n+ assertNotNull(delayedRerouteTask);\n+ assertFalse(delayedRerouteTask.cancelScheduling.get());\n+ assertThat(delayedRerouteTask.baseTimestampNanos, equalTo(clusterChangeEventTimestampNanos));\n+ assertThat(delayedRerouteTask.nextDelay.nanos(),\n+ equalTo(delaySetting.nanos() - (clusterChangeEventTimestampNanos - baseTimestampNanos)));\n+\n+ // check that submitStateUpdateTask() was invoked on the cluster service mock\n+ assertTrue(latch.await(30, TimeUnit.SECONDS));\n+ verify(clusterService).submitStateUpdateTask(eq(CLUSTER_UPDATE_TASK_SOURCE), 
eq(clusterStateUpdateTask.get()));\n+\n+ // advance the time on the allocation service to a timestamp that happened after the delayed scheduling\n+ long nanoTimeForReroute = clusterChangeEventTimestampNanos + delaySetting.nanos() + timeValueMillis(randomInt(200)).nanos();\n+ allocationService.setNanoTimeOverride(nanoTimeForReroute);\n+ // apply cluster state\n+ ClusterState stateWithRemovedDelay = clusterStateUpdateTask.get().execute(stateWithDelayedShard);\n+ // check that shard is not delayed anymore\n+ assertEquals(0, UnassignedInfo.getNumberOfDelayedUnassigned(stateWithRemovedDelay));\n+ // check that task is now removed\n+ assertNull(delayedAllocationService.delayedRerouteTask.get());\n+\n+ // simulate calling listener (cluster change event)\n+ delayedAllocationService.setNanoTimeOverride(nanoTimeForReroute + timeValueMillis(randomInt(200)).nanos());\n+ delayedAllocationService.clusterChanged(\n+ new ClusterChangedEvent(CLUSTER_UPDATE_TASK_SOURCE, stateWithRemovedDelay, stateWithDelayedShard));\n+ // check that no new task is scheduled\n+ assertNull(delayedAllocationService.delayedRerouteTask.get());\n+ // check that no further cluster state update was submitted\n+ verifyNoMoreInteractions(clusterService);\n+ }\n+\n+ /**\n+ * This tests that a new delayed reroute is scheduled right after a delayed reroute was run\n+ */\n+ public void testDelayedUnassignedScheduleRerouteAfterDelayedReroute() throws Exception {\n+ TimeValue shortDelaySetting = timeValueMillis(100);\n+ TimeValue longDelaySetting = TimeValue.timeValueSeconds(1);\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"short_delay\")\n+ .settings(settings(Version.CURRENT).put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), shortDelaySetting))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .put(IndexMetaData.builder(\"long_delay\")\n+ .settings(settings(Version.CURRENT).put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), longDelaySetting))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .build();\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT).metaData(metaData)\n+ .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"short_delay\")).addAsNew(metaData.index(\"long_delay\")).build())\n+ .nodes(DiscoveryNodes.builder()\n+ .put(newNode(\"node0\", singleton(DiscoveryNode.Role.MASTER))).localNodeId(\"node0\").masterNodeId(\"node0\")\n+ .put(newNode(\"node1\")).put(newNode(\"node2\")).put(newNode(\"node3\")).put(newNode(\"node4\"))).build();\n+ // allocate shards\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocationService.reroute(clusterState, \"reroute\")).build();\n+ // start primaries\n+ clusterState = ClusterState.builder(clusterState)\n+ .routingResult(allocationService.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)))\n+ .build();\n+ // start replicas\n+ clusterState = ClusterState.builder(clusterState)\n+ .routingResult(allocationService.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)))\n+ .build();\n+ assertThat(\"all shards should be started\", clusterState.getRoutingNodes().shardsWithState(STARTED).size(), equalTo(4));\n+\n+ // find replica of short_delay\n+ ShardRouting shortDelayReplica = null;\n+ for (ShardRouting shardRouting : clusterState.getRoutingNodes().routingTable().allShards(\"short_delay\")) {\n+ if (shardRouting.primary() == false) {\n+ shortDelayReplica = shardRouting;\n+ break;\n+ }\n+ }\n+ 
assertNotNull(shortDelayReplica);\n+\n+ // find replica of long_delay\n+ ShardRouting longDelayReplica = null;\n+ for (ShardRouting shardRouting : clusterState.getRoutingNodes().routingTable().allShards(\"long_delay\")) {\n+ if (shardRouting.primary() == false) {\n+ longDelayReplica = shardRouting;\n+ break;\n+ }\n+ }\n+ assertNotNull(longDelayReplica);\n+\n+ final long baseTimestampNanos = System.nanoTime();\n+\n+ // remove node of shortDelayReplica and node of longDelayReplica and reroute\n+ ClusterState clusterStateBeforeNodeLeft = clusterState;\n+ clusterState = ClusterState.builder(clusterState)\n+ .nodes(DiscoveryNodes.builder(clusterState.nodes())\n+ .remove(shortDelayReplica.currentNodeId())\n+ .remove(longDelayReplica.currentNodeId()))\n+ .build();\n+ // make sure both replicas are marked as delayed (i.e. not reallocated)\n+ allocationService.setNanoTimeOverride(baseTimestampNanos);\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocationService.reroute(clusterState, \"reroute\")).build();\n+ final ClusterState stateWithDelayedShards = clusterState;\n+ assertEquals(2, UnassignedInfo.getNumberOfDelayedUnassigned(stateWithDelayedShards));\n+ RoutingNodes.UnassignedShards.UnassignedIterator iter = stateWithDelayedShards.getRoutingNodes().unassigned().iterator();\n+ assertEquals(baseTimestampNanos, iter.next().unassignedInfo().getUnassignedTimeInNanos());\n+ assertEquals(baseTimestampNanos, iter.next().unassignedInfo().getUnassignedTimeInNanos());\n+\n+ // mock ClusterService.submitStateUpdateTask() method\n+ CountDownLatch latch1 = new CountDownLatch(1);\n+ AtomicReference<ClusterStateUpdateTask> clusterStateUpdateTask1 = new AtomicReference<>();\n+ doAnswer(invocationOnMock -> {\n+ clusterStateUpdateTask1.set((ClusterStateUpdateTask)invocationOnMock.getArguments()[1]);\n+ latch1.countDown();\n+ return null;\n+ }).when(clusterService).submitStateUpdateTask(eq(CLUSTER_UPDATE_TASK_SOURCE), any(ClusterStateUpdateTask.class));\n+ assertNull(delayedAllocationService.delayedRerouteTask.get());\n+ long delayUntilClusterChangeEvent = TimeValue.timeValueNanos(randomInt((int)shortDelaySetting.nanos() - 1)).nanos();\n+ long clusterChangeEventTimestampNanos = baseTimestampNanos + delayUntilClusterChangeEvent;\n+ delayedAllocationService.setNanoTimeOverride(clusterChangeEventTimestampNanos);\n+ delayedAllocationService.clusterChanged(\n+ new ClusterChangedEvent(\"fake node left\", stateWithDelayedShards, clusterStateBeforeNodeLeft));\n+\n+ // check that delayed reroute task was created and registered with the proper settings\n+ DelayedAllocationService.DelayedRerouteTask firstDelayedRerouteTask = delayedAllocationService.delayedRerouteTask.get();\n+ assertNotNull(firstDelayedRerouteTask);\n+ assertFalse(firstDelayedRerouteTask.cancelScheduling.get());\n+ assertThat(firstDelayedRerouteTask.baseTimestampNanos, equalTo(clusterChangeEventTimestampNanos));\n+ assertThat(firstDelayedRerouteTask.nextDelay.nanos(),\n+ equalTo(UnassignedInfo.findNextDelayedAllocation(clusterChangeEventTimestampNanos, stateWithDelayedShards)));\n+ assertThat(firstDelayedRerouteTask.nextDelay.nanos(),\n+ equalTo(shortDelaySetting.nanos() - (clusterChangeEventTimestampNanos - baseTimestampNanos)));\n+\n+ // check that submitStateUpdateTask() was invoked on the cluster service mock\n+ assertTrue(latch1.await(30, TimeUnit.SECONDS));\n+ verify(clusterService).submitStateUpdateTask(eq(CLUSTER_UPDATE_TASK_SOURCE), eq(clusterStateUpdateTask1.get()));\n+\n+ // advance the time on the allocation service to a 
timestamp that happened after the delayed scheduling\n+ long nanoTimeForReroute = clusterChangeEventTimestampNanos + shortDelaySetting.nanos() + timeValueMillis(randomInt(50)).nanos();\n+ allocationService.setNanoTimeOverride(nanoTimeForReroute);\n+ // apply cluster state\n+ ClusterState stateWithOnlyOneDelayedShard = clusterStateUpdateTask1.get().execute(stateWithDelayedShards);\n+ // check that shard is not delayed anymore\n+ assertEquals(1, UnassignedInfo.getNumberOfDelayedUnassigned(stateWithOnlyOneDelayedShard));\n+ // check that task is now removed\n+ assertNull(delayedAllocationService.delayedRerouteTask.get());\n+\n+ // mock ClusterService.submitStateUpdateTask() method again\n+ CountDownLatch latch2 = new CountDownLatch(1);\n+ AtomicReference<ClusterStateUpdateTask> clusterStateUpdateTask2 = new AtomicReference<>();\n+ doAnswer(invocationOnMock -> {\n+ clusterStateUpdateTask2.set((ClusterStateUpdateTask)invocationOnMock.getArguments()[1]);\n+ latch2.countDown();\n+ return null;\n+ }).when(clusterService).submitStateUpdateTask(eq(CLUSTER_UPDATE_TASK_SOURCE), any(ClusterStateUpdateTask.class));\n+ // simulate calling listener (cluster change event)\n+ delayUntilClusterChangeEvent = timeValueMillis(randomInt(50)).nanos();\n+ clusterChangeEventTimestampNanos = nanoTimeForReroute + delayUntilClusterChangeEvent;\n+ delayedAllocationService.setNanoTimeOverride(clusterChangeEventTimestampNanos);\n+ delayedAllocationService.clusterChanged(\n+ new ClusterChangedEvent(CLUSTER_UPDATE_TASK_SOURCE, stateWithOnlyOneDelayedShard, stateWithDelayedShards));\n+\n+ // check that new delayed reroute task was created and registered with the proper settings\n+ DelayedAllocationService.DelayedRerouteTask secondDelayedRerouteTask = delayedAllocationService.delayedRerouteTask.get();\n+ assertNotNull(secondDelayedRerouteTask);\n+ assertFalse(secondDelayedRerouteTask.cancelScheduling.get());\n+ assertThat(secondDelayedRerouteTask.baseTimestampNanos, equalTo(clusterChangeEventTimestampNanos));\n+ assertThat(secondDelayedRerouteTask.nextDelay.nanos(),\n+ equalTo(UnassignedInfo.findNextDelayedAllocation(clusterChangeEventTimestampNanos, stateWithOnlyOneDelayedShard)));\n+ assertThat(secondDelayedRerouteTask.nextDelay.nanos(),\n+ equalTo(longDelaySetting.nanos() - (clusterChangeEventTimestampNanos - baseTimestampNanos)));\n+\n+ // check that submitStateUpdateTask() was invoked on the cluster service mock\n+ assertTrue(latch2.await(30, TimeUnit.SECONDS));\n+ verify(clusterService).submitStateUpdateTask(eq(CLUSTER_UPDATE_TASK_SOURCE), eq(clusterStateUpdateTask2.get()));\n+\n+ // advance the time on the allocation service to a timestamp that happened after the delayed scheduling\n+ nanoTimeForReroute = clusterChangeEventTimestampNanos + longDelaySetting.nanos() + timeValueMillis(randomInt(50)).nanos();\n+ allocationService.setNanoTimeOverride(nanoTimeForReroute);\n+ // apply cluster state\n+ ClusterState stateWithNoDelayedShards = clusterStateUpdateTask2.get().execute(stateWithOnlyOneDelayedShard);\n+ // check that shard is not delayed anymore\n+ assertEquals(0, UnassignedInfo.getNumberOfDelayedUnassigned(stateWithNoDelayedShards));\n+ // check that task is now removed\n+ assertNull(delayedAllocationService.delayedRerouteTask.get());\n+\n+ // simulate calling listener (cluster change event)\n+ delayedAllocationService.setNanoTimeOverride(nanoTimeForReroute + timeValueMillis(randomInt(50)).nanos());\n+ delayedAllocationService.clusterChanged(\n+ new ClusterChangedEvent(CLUSTER_UPDATE_TASK_SOURCE, 
stateWithNoDelayedShards, stateWithOnlyOneDelayedShard));\n+ // check that no new task is scheduled\n+ assertNull(delayedAllocationService.delayedRerouteTask.get());\n+ // check that no further cluster state update was submitted\n+ verifyNoMoreInteractions(clusterService);\n+ }\n+\n+ public void testDelayedUnassignedScheduleRerouteRescheduledOnShorterDelay() throws Exception {\n+ TimeValue delaySetting = timeValueSeconds(30);\n+ TimeValue shorterDelaySetting = timeValueMillis(100);\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"foo\").settings(settings(Version.CURRENT)\n+ .put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), delaySetting))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .put(IndexMetaData.builder(\"bar\").settings(settings(Version.CURRENT)\n+ .put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), shorterDelaySetting))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .build();\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT)\n+ .metaData(metaData)\n+ .routingTable(RoutingTable.builder()\n+ .addAsNew(metaData.index(\"foo\"))\n+ .addAsNew(metaData.index(\"bar\"))\n+ .build()).build();\n+ clusterState = ClusterState.builder(clusterState)\n+ .nodes(DiscoveryNodes.builder()\n+ .put(newNode(\"node1\")).put(newNode(\"node2\")).put(newNode(\"node3\")).put(newNode(\"node4\"))\n+ .localNodeId(\"node1\").masterNodeId(\"node1\"))\n+ .build();\n+ final long nodeLeftTimestampNanos = System.nanoTime();\n+ allocationService.setNanoTimeOverride(nodeLeftTimestampNanos);\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocationService.reroute(clusterState, \"reroute\")).build();\n+ // starting primaries\n+ clusterState = ClusterState.builder(clusterState)\n+ .routingResult(allocationService.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)))\n+ .build();\n+ // starting replicas\n+ clusterState = ClusterState.builder(clusterState)\n+ .routingResult(allocationService.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)))\n+ .build();\n+ assertFalse(\"no shards should be unassigned\", clusterState.getRoutingNodes().unassigned().size() > 0);\n+ String nodeIdOfFooReplica = null;\n+ for (ShardRouting shardRouting : clusterState.getRoutingNodes().routingTable().allShards(\"foo\")) {\n+ if (shardRouting.primary() == false) {\n+ nodeIdOfFooReplica = shardRouting.currentNodeId();\n+ break;\n+ }\n+ }\n+ assertNotNull(nodeIdOfFooReplica);\n+\n+ // remove node that has replica and reroute\n+ clusterState = ClusterState.builder(clusterState).nodes(\n+ DiscoveryNodes.builder(clusterState.nodes()).remove(nodeIdOfFooReplica)).build();\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocationService.reroute(clusterState, \"fake node left\")).build();\n+ ClusterState stateWithDelayedShard = clusterState;\n+ // make sure the replica is marked as delayed (i.e. 
not reallocated)\n+ assertEquals(1, UnassignedInfo.getNumberOfDelayedUnassigned(stateWithDelayedShard));\n+ ShardRouting delayedShard = stateWithDelayedShard.getRoutingNodes().unassigned().iterator().next();\n+ assertEquals(nodeLeftTimestampNanos, delayedShard.unassignedInfo().getUnassignedTimeInNanos());\n+\n+ assertNull(delayedAllocationService.delayedRerouteTask.get());\n+ long delayUntilClusterChangeEvent = TimeValue.timeValueNanos(randomInt((int)shorterDelaySetting.nanos() - 1)).nanos();\n+ long clusterChangeEventTimestampNanos = nodeLeftTimestampNanos + delayUntilClusterChangeEvent;\n+ delayedAllocationService.setNanoTimeOverride(clusterChangeEventTimestampNanos);\n+ delayedAllocationService.clusterChanged(new ClusterChangedEvent(\"fake node left\", stateWithDelayedShard, clusterState));\n+\n+ // check that delayed reroute task was created and registered with the proper settings\n+ DelayedAllocationService.DelayedRerouteTask delayedRerouteTask = delayedAllocationService.delayedRerouteTask.get();\n+ assertNotNull(delayedRerouteTask);\n+ assertFalse(delayedRerouteTask.cancelScheduling.get());\n+ assertThat(delayedRerouteTask.baseTimestampNanos, equalTo(clusterChangeEventTimestampNanos));\n+ assertThat(delayedRerouteTask.nextDelay.nanos(),\n+ equalTo(delaySetting.nanos() - (clusterChangeEventTimestampNanos - nodeLeftTimestampNanos)));\n+\n+ if (randomBoolean()) {\n+ // update settings with shorter delay\n+ ClusterState stateWithShorterDelay = ClusterState.builder(stateWithDelayedShard).metaData(MetaData.builder(\n+ stateWithDelayedShard.metaData()).updateSettings(Settings.builder().put(\n+ UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), shorterDelaySetting).build(), \"foo\")).build();\n+ delayedAllocationService.setNanoTimeOverride(clusterChangeEventTimestampNanos);\n+ delayedAllocationService.clusterChanged(\n+ new ClusterChangedEvent(\"apply shorter delay\", stateWithShorterDelay, stateWithDelayedShard));\n+ } else {\n+ // node leaves with replica shard of index bar that has shorter delay\n+ String nodeIdOfBarReplica = null;\n+ for (ShardRouting shardRouting : stateWithDelayedShard.getRoutingNodes().routingTable().allShards(\"bar\")) {\n+ if (shardRouting.primary() == false) {\n+ nodeIdOfBarReplica = shardRouting.currentNodeId();\n+ break;\n+ }\n+ }\n+ assertNotNull(nodeIdOfBarReplica);\n+\n+ // remove node that has replica and reroute\n+ clusterState = ClusterState.builder(stateWithDelayedShard).nodes(\n+ DiscoveryNodes.builder(stateWithDelayedShard.nodes()).remove(nodeIdOfBarReplica)).build();\n+ ClusterState stateWithShorterDelay = ClusterState.builder(clusterState).routingResult(\n+ allocationService.reroute(clusterState, \"fake node left\")).build();\n+ delayedAllocationService.setNanoTimeOverride(clusterChangeEventTimestampNanos);\n+ delayedAllocationService.clusterChanged(\n+ new ClusterChangedEvent(\"fake node left\", stateWithShorterDelay, stateWithDelayedShard));\n+ }\n+\n+ // check that delayed reroute task was replaced by shorter reroute task\n+ DelayedAllocationService.DelayedRerouteTask shorterDelayedRerouteTask = delayedAllocationService.delayedRerouteTask.get();\n+ assertNotNull(shorterDelayedRerouteTask);\n+ assertNotEquals(shorterDelayedRerouteTask, delayedRerouteTask);\n+ assertTrue(delayedRerouteTask.cancelScheduling.get()); // existing task was cancelled\n+ assertFalse(shorterDelayedRerouteTask.cancelScheduling.get());\n+ assertThat(delayedRerouteTask.baseTimestampNanos, equalTo(clusterChangeEventTimestampNanos));\n+ 
assertThat(shorterDelayedRerouteTask.nextDelay.nanos(),\n+ equalTo(shorterDelaySetting.nanos() - (clusterChangeEventTimestampNanos - nodeLeftTimestampNanos)));\n+ }\n+\n+ private static class TestDelayAllocationService extends DelayedAllocationService {\n+ private volatile long nanoTimeOverride = -1L;\n+\n+ public TestDelayAllocationService(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n+ AllocationService allocationService) {\n+ super(settings, threadPool, clusterService, allocationService);\n+ }\n+\n+ @Override\n+ protected void assertClusterStateThread() {\n+ // do not check this in the unit tests\n+ }\n+\n+ public void setNanoTimeOverride(long nanoTime) {\n+ this.nanoTimeOverride = nanoTime;\n+ }\n+\n+ @Override\n+ protected long currentNanoTime() {\n+ return nanoTimeOverride == -1L ? super.currentNanoTime() : nanoTimeOverride;\n+ }\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/DelayedAllocationServiceTests.java", "status": "added" }, { "diff": "@@ -19,32 +19,12 @@\n \n package org.elasticsearch.cluster.routing;\n \n-import org.elasticsearch.Version;\n-import org.elasticsearch.cluster.ClusterChangedEvent;\n-import org.elasticsearch.cluster.ClusterName;\n-import org.elasticsearch.cluster.ClusterState;\n-import org.elasticsearch.cluster.metadata.IndexMetaData;\n-import org.elasticsearch.cluster.metadata.MetaData;\n-import org.elasticsearch.cluster.node.DiscoveryNode;\n-import org.elasticsearch.cluster.node.DiscoveryNodes;\n-import org.elasticsearch.cluster.routing.allocation.AllocationService;\n-import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.test.ESAllocationTestCase;\n-import org.elasticsearch.threadpool.ThreadPool;\n-import org.junit.After;\n import org.junit.Before;\n \n-import java.util.List;\n-import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n-import static java.util.Collections.singleton;\n-import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING;\n-import static org.elasticsearch.cluster.routing.ShardRoutingState.STARTED;\n-import static org.elasticsearch.test.ClusterServiceUtils.createClusterService;\n-import static org.elasticsearch.test.ClusterServiceUtils.setState;\n import static org.hamcrest.Matchers.equalTo;\n \n /**\n@@ -58,191 +38,18 @@ public void createRoutingService() {\n routingService = new TestRoutingService();\n }\n \n- @After\n- public void shutdownRoutingService() throws Exception {\n- routingService.shutdown();\n- }\n-\n public void testReroute() {\n assertThat(routingService.hasReroutedAndClear(), equalTo(false));\n routingService.reroute(\"test\");\n assertThat(routingService.hasReroutedAndClear(), equalTo(true));\n }\n \n- public void testNoDelayedUnassigned() throws Exception {\n- AllocationService allocation = createAllocationService(Settings.EMPTY, new DelayedShardsMockGatewayAllocator());\n- MetaData metaData = MetaData.builder()\n- .put(IndexMetaData.builder(\"test\").settings(settings(Version.CURRENT).put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), \"0\"))\n- .numberOfShards(1).numberOfReplicas(1))\n- .build();\n- ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT)\n- .metaData(metaData)\n- .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test\")).build()).build();\n- clusterState = 
ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\")).localNodeId(\"node1\").masterNodeId(\"node1\")).build();\n- clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState, \"reroute\")).build();\n- // starting primaries\n- clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n- // starting replicas\n- clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.getRoutingNodes().unassigned().size() > 0, equalTo(false));\n- // remove node2 and reroute\n- ClusterState prevState = clusterState;\n- clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n- clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState, \"reroute\")).build();\n- ClusterState newState = clusterState;\n-\n- assertThat(routingService.getMinDelaySettingAtLastSchedulingNanos(), equalTo(Long.MAX_VALUE));\n- routingService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n- assertThat(routingService.getMinDelaySettingAtLastSchedulingNanos(), equalTo(Long.MAX_VALUE));\n- assertThat(routingService.hasReroutedAndClear(), equalTo(false));\n- }\n-\n- public void testDelayedUnassignedScheduleReroute() throws Exception {\n- MockAllocationService allocation = createAllocationService(Settings.EMPTY, new DelayedShardsMockGatewayAllocator());\n- MetaData metaData = MetaData.builder()\n- .put(IndexMetaData.builder(\"test\").settings(settings(Version.CURRENT).put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), \"100ms\"))\n- .numberOfShards(1).numberOfReplicas(1))\n- .build();\n- ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT)\n- .metaData(metaData)\n- .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test\")).build()).build();\n- clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\")).localNodeId(\"node1\").masterNodeId(\"node1\")).build();\n- clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState, \"reroute\")).build();\n- // starting primaries\n- clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n- // starting replicas\n- clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n- assertFalse(\"no shards should be unassigned\", clusterState.getRoutingNodes().unassigned().size() > 0);\n- String nodeId = null;\n- final List<ShardRouting> allShards = clusterState.getRoutingNodes().routingTable().allShards(\"test\");\n- // we need to find the node with the replica otherwise we will not reroute\n- for (ShardRouting shardRouting : allShards) {\n- if (shardRouting.primary() == false) {\n- nodeId = shardRouting.currentNodeId();\n- break;\n- }\n- }\n- assertNotNull(nodeId);\n-\n- // remove nodeId and reroute\n- ClusterState prevState = clusterState;\n- clusterState = 
ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(nodeId)).build();\n- // make sure the replica is marked as delayed (i.e. not reallocated)\n- clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState, \"reroute\")).build();\n- assertEquals(1, clusterState.getRoutingNodes().unassigned().size());\n-\n- ClusterState newState = clusterState;\n- routingService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n- assertBusy(() -> assertTrue(\"routing service should have run a reroute\", routingService.hasReroutedAndClear()));\n- // verify the registration has been reset\n- assertThat(routingService.getMinDelaySettingAtLastSchedulingNanos(), equalTo(Long.MAX_VALUE));\n- }\n-\n- /**\n- * This tests that a new delayed reroute is scheduled right after a delayed reroute was run\n- */\n- public void testDelayedUnassignedScheduleRerouteAfterDelayedReroute() throws Exception {\n- final ThreadPool testThreadPool = new ThreadPool(getTestName());\n- ClusterService clusterService = createClusterService(testThreadPool);\n- try {\n- MockAllocationService allocation = createAllocationService(Settings.EMPTY, new DelayedShardsMockGatewayAllocator());\n- MetaData metaData = MetaData.builder()\n- .put(IndexMetaData.builder(\"short_delay\").settings(settings(Version.CURRENT).put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), \"100ms\"))\n- .numberOfShards(1).numberOfReplicas(1))\n- .put(IndexMetaData.builder(\"long_delay\").settings(settings(Version.CURRENT).put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), \"10s\"))\n- .numberOfShards(1).numberOfReplicas(1))\n- .build();\n- ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT).metaData(metaData)\n- .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"short_delay\")).addAsNew(metaData.index(\"long_delay\")).build())\n- .nodes(DiscoveryNodes.builder()\n- .put(newNode(\"node0\", singleton(DiscoveryNode.Role.MASTER))).localNodeId(\"node0\").masterNodeId(\"node0\")\n- .put(newNode(\"node1\")).put(newNode(\"node2\")).put(newNode(\"node3\")).put(newNode(\"node4\"))).build();\n- // allocate shards\n- clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState, \"reroute\")).build();\n- // start primaries\n- clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n- // start replicas\n- clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(\"all shards should be started\", clusterState.getRoutingNodes().shardsWithState(STARTED).size(), equalTo(4));\n-\n- // find replica of short_delay\n- ShardRouting shortDelayReplica = null;\n- for (ShardRouting shardRouting : clusterState.getRoutingNodes().routingTable().allShards(\"short_delay\")) {\n- if (shardRouting.primary() == false) {\n- shortDelayReplica = shardRouting;\n- break;\n- }\n- }\n- assertNotNull(shortDelayReplica);\n-\n- // find replica of long_delay\n- ShardRouting longDelayReplica = null;\n- for (ShardRouting shardRouting : clusterState.getRoutingNodes().routingTable().allShards(\"long_delay\")) {\n- if (shardRouting.primary() == false) {\n- longDelayReplica = shardRouting;\n- break;\n- }\n- }\n- assertNotNull(longDelayReplica);\n-\n- final long baseTime = 
System.nanoTime();\n-\n- // remove node of shortDelayReplica and node of longDelayReplica and reroute\n- ClusterState prevState = clusterState;\n- clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(shortDelayReplica.currentNodeId()).remove(longDelayReplica.currentNodeId())).build();\n- // make sure both replicas are marked as delayed (i.e. not reallocated)\n- allocation.setNanoTimeOverride(baseTime);\n- clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState, \"reroute\")).build();\n-\n- // check that shortDelayReplica and longDelayReplica have been marked unassigned\n- RoutingNodes.UnassignedShards unassigned = clusterState.getRoutingNodes().unassigned();\n- assertEquals(2, unassigned.size());\n- // update shortDelayReplica and longDelayReplica variables with new shard routing\n- ShardRouting shortDelayUnassignedReplica = null;\n- ShardRouting longDelayUnassignedReplica = null;\n- for (ShardRouting shr : unassigned) {\n- if (shr.getIndexName().equals(\"short_delay\")) {\n- shortDelayUnassignedReplica = shr;\n- } else {\n- longDelayUnassignedReplica = shr;\n- }\n- }\n- assertTrue(shortDelayReplica.isSameShard(shortDelayUnassignedReplica));\n- assertTrue(longDelayReplica.isSameShard(longDelayUnassignedReplica));\n-\n- // manually trigger a clusterChanged event on routingService\n- ClusterState newState = clusterState;\n- setState(clusterService, newState);\n- // create routing service, also registers listener on cluster service\n- RoutingService routingService = new RoutingService(Settings.EMPTY, testThreadPool, clusterService, allocation);\n- routingService.start(); // just so performReroute does not prematurely return\n- // next (delayed) reroute should only delay longDelayReplica/longDelayUnassignedReplica, simulate that we are now 1 second after shards became unassigned\n- allocation.setNanoTimeOverride(baseTime + TimeValue.timeValueSeconds(1).nanos());\n- // register listener on cluster state so we know when cluster state has been changed\n- CountDownLatch latch = new CountDownLatch(1);\n- clusterService.addLast(event -> latch.countDown());\n- // instead of clusterService calling clusterChanged, we call it directly here\n- routingService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n- // cluster service should have updated state and called routingService with clusterChanged\n- latch.await();\n- // verify the registration has been set to the delay of longDelayReplica/longDelayUnassignedReplica\n- assertThat(routingService.getMinDelaySettingAtLastSchedulingNanos(), equalTo(TimeValue.timeValueSeconds(10).nanos()));\n- } finally {\n- clusterService.stop();\n- terminate(testThreadPool);\n- }\n- }\n-\n private class TestRoutingService extends RoutingService {\n \n private AtomicBoolean rerouted = new AtomicBoolean();\n \n public TestRoutingService() {\n- super(Settings.EMPTY, new ThreadPool(getTestName()), null, null);\n- }\n-\n- void shutdown() throws Exception {\n- terminate(threadPool);\n+ super(Settings.EMPTY, null, null);\n }\n \n public boolean hasReroutedAndClear() {", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/RoutingServiceTests.java", "status": "modified" }, { "diff": "@@ -38,7 +38,6 @@\n import org.elasticsearch.test.ESAllocationTestCase;\n \n import java.util.Collections;\n-import java.util.EnumSet;\n \n import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING;\n import static 
org.elasticsearch.cluster.routing.ShardRoutingState.STARTED;\n@@ -75,7 +74,7 @@ public void testReasonOrdinalOrder() {\n public void testSerialization() throws Exception {\n UnassignedInfo.Reason reason = RandomPicks.randomFrom(random(), UnassignedInfo.Reason.values());\n UnassignedInfo meta = reason == UnassignedInfo.Reason.ALLOCATION_FAILED ?\n- new UnassignedInfo(reason, randomBoolean() ? randomAsciiOfLength(4) : null, null, randomIntBetween(1, 100), System.nanoTime(), System.currentTimeMillis()):\n+ new UnassignedInfo(reason, randomBoolean() ? randomAsciiOfLength(4) : null, null, randomIntBetween(1, 100), System.nanoTime(), System.currentTimeMillis(), false):\n new UnassignedInfo(reason, randomBoolean() ? randomAsciiOfLength(4) : null);\n BytesStreamOutput out = new BytesStreamOutput();\n meta.writeTo(out);\n@@ -262,59 +261,20 @@ public void testFailedShard() {\n /**\n * Verifies that delayed allocation calculation are correct.\n */\n- public void testUnassignedDelayedOnlyOnNodeLeft() throws Exception {\n- UnassignedInfo unassignedInfo = new UnassignedInfo(UnassignedInfo.Reason.NODE_LEFT, null);\n- unassignedInfo = unassignedInfo.updateDelay(unassignedInfo.getUnassignedTimeInNanos() + 1, // add 1 tick delay\n- Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), \"10h\").build(), Settings.EMPTY);\n- long delay = unassignedInfo.getLastComputedLeftDelayNanos();\n- long cachedDelay = unassignedInfo.getLastComputedLeftDelayNanos();\n- assertThat(delay, equalTo(cachedDelay));\n- assertThat(delay, equalTo(TimeValue.timeValueHours(10).nanos() - 1));\n- }\n-\n- /**\n- * Verifies that delayed allocation is only computed when the reason is NODE_LEFT.\n- */\n- public void testUnassignedDelayOnlyNodeLeftNonNodeLeftReason() throws Exception {\n- EnumSet<UnassignedInfo.Reason> reasons = EnumSet.allOf(UnassignedInfo.Reason.class);\n- reasons.remove(UnassignedInfo.Reason.NODE_LEFT);\n- UnassignedInfo.Reason reason = RandomPicks.randomFrom(random(), reasons);\n- UnassignedInfo unassignedInfo = reason == UnassignedInfo.Reason.ALLOCATION_FAILED ?\n- new UnassignedInfo(reason, null, null, 1, System.nanoTime(), System.currentTimeMillis()):\n- new UnassignedInfo(reason, null);\n- unassignedInfo = unassignedInfo.updateDelay(unassignedInfo.getUnassignedTimeInNanos() + 1, // add 1 tick delay\n- Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), \"10h\").build(), Settings.EMPTY);\n- long delay = unassignedInfo.getLastComputedLeftDelayNanos();\n- assertThat(delay, equalTo(0L));\n- delay = unassignedInfo.getLastComputedLeftDelayNanos();\n- assertThat(delay, equalTo(0L));\n- }\n-\n- /**\n- * Verifies that delayed allocation calculation are correct.\n- */\n- public void testLeftDelayCalculation() throws Exception {\n+ public void testRemainingDelayCalculation() throws Exception {\n final long baseTime = System.nanoTime();\n- UnassignedInfo unassignedInfo = new UnassignedInfo(UnassignedInfo.Reason.NODE_LEFT, \"test\", null, 0, baseTime, System.currentTimeMillis());\n+ UnassignedInfo unassignedInfo = new UnassignedInfo(UnassignedInfo.Reason.NODE_LEFT, \"test\", null, 0, baseTime, System.currentTimeMillis(), randomBoolean());\n final long totalDelayNanos = TimeValue.timeValueMillis(10).nanos();\n- final Settings settings = Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), TimeValue.timeValueNanos(totalDelayNanos)).build();\n- unassignedInfo = unassignedInfo.updateDelay(baseTime, settings, Settings.EMPTY);\n- 
long delay = unassignedInfo.getLastComputedLeftDelayNanos();\n+ final Settings indexSettings = Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), TimeValue.timeValueNanos(totalDelayNanos)).build();\n+ long delay = unassignedInfo.getRemainingDelay(baseTime, indexSettings);\n assertThat(delay, equalTo(totalDelayNanos));\n- assertThat(delay, equalTo(unassignedInfo.getLastComputedLeftDelayNanos()));\n long delta1 = randomIntBetween(1, (int) (totalDelayNanos - 1));\n- unassignedInfo = unassignedInfo.updateDelay(baseTime + delta1, settings, Settings.EMPTY);\n- delay = unassignedInfo.getLastComputedLeftDelayNanos();\n+ delay = unassignedInfo.getRemainingDelay(baseTime + delta1, indexSettings);\n assertThat(delay, equalTo(totalDelayNanos - delta1));\n- assertThat(delay, equalTo(unassignedInfo.getLastComputedLeftDelayNanos()));\n- unassignedInfo = unassignedInfo.updateDelay(baseTime + totalDelayNanos, settings, Settings.EMPTY);\n- delay = unassignedInfo.getLastComputedLeftDelayNanos();\n+ delay = unassignedInfo.getRemainingDelay(baseTime + totalDelayNanos, indexSettings);\n assertThat(delay, equalTo(0L));\n- assertThat(delay, equalTo(unassignedInfo.getLastComputedLeftDelayNanos()));\n- unassignedInfo = unassignedInfo.updateDelay(baseTime + totalDelayNanos + randomIntBetween(1, 20), settings, Settings.EMPTY);\n- delay = unassignedInfo.getLastComputedLeftDelayNanos();\n+ delay = unassignedInfo.getRemainingDelay(baseTime + totalDelayNanos + randomIntBetween(1, 20), indexSettings);\n assertThat(delay, equalTo(0L));\n- assertThat(delay, equalTo(unassignedInfo.getLastComputedLeftDelayNanos()));\n }\n \n \n@@ -344,8 +304,6 @@ public void testNumberOfDelayedUnassigned() throws Exception {\n \n public void testFindNextDelayedAllocation() {\n MockAllocationService allocation = createAllocationService(Settings.EMPTY, new DelayedShardsMockGatewayAllocator());\n- final long baseTime = System.nanoTime();\n- allocation.setNanoTimeOverride(baseTime);\n final TimeValue delayTest1 = TimeValue.timeValueMillis(randomIntBetween(1, 200));\n final TimeValue delayTest2 = TimeValue.timeValueMillis(randomIntBetween(1, 200));\n final long expectMinDelaySettingsNanos = Math.min(delayTest1.nanos(), delayTest2.nanos());\n@@ -366,20 +324,18 @@ public void testFindNextDelayedAllocation() {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n assertThat(clusterState.getRoutingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n+ final long baseTime = System.nanoTime();\n+ allocation.setNanoTimeOverride(baseTime);\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState, \"reroute\")).build();\n \n- final long delta = randomBoolean() ? 0 : randomInt((int) expectMinDelaySettingsNanos);\n+ final long delta = randomBoolean() ? 
0 : randomInt((int) expectMinDelaySettingsNanos - 1);\n \n if (delta > 0) {\n allocation.setNanoTimeOverride(baseTime + delta);\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState, \"time moved\")).build();\n }\n \n- long minDelaySetting = UnassignedInfo.findSmallestDelayedAllocationSettingNanos(Settings.EMPTY, clusterState);\n- assertThat(minDelaySetting, equalTo(expectMinDelaySettingsNanos));\n-\n- long nextDelay = UnassignedInfo.findNextDelayedAllocationIn(clusterState);\n- assertThat(nextDelay, equalTo(expectMinDelaySettingsNanos - delta));\n+ assertThat(UnassignedInfo.findNextDelayedAllocation(baseTime + delta, clusterState), equalTo(expectMinDelaySettingsNanos - delta));\n }\n }", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/UnassignedInfoTests.java", "status": "modified" }, { "diff": "@@ -521,7 +521,7 @@ public void onFailure(Throwable t) {\n static class NoopRoutingService extends RoutingService {\n \n public NoopRoutingService(Settings settings) {\n- super(settings, null, null, new NoopAllocationService(settings));\n+ super(settings, null, new NoopAllocationService(settings));\n }\n \n @Override", "filename": "core/src/test/java/org/elasticsearch/discovery/zen/NodeJoinControllerTests.java", "status": "modified" }, { "diff": "@@ -233,16 +233,14 @@ public void testDelayedAllocation() {\n // we sometime return empty list of files, make sure we test this as well\n testAllocator.addData(node2, false, null);\n }\n- AllocationService.updateLeftDelayOfUnassignedShards(allocation, Settings.EMPTY);\n boolean changed = testAllocator.allocateUnassigned(allocation);\n- assertThat(changed, equalTo(true));\n+ assertThat(changed, equalTo(false));\n assertThat(allocation.routingNodes().unassigned().ignored().size(), equalTo(1));\n assertThat(allocation.routingNodes().unassigned().ignored().get(0).shardId(), equalTo(shardId));\n \n allocation = onePrimaryOnNode1And1Replica(yesAllocationDeciders(),\n Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), TimeValue.timeValueHours(1)).build(), UnassignedInfo.Reason.NODE_LEFT);\n testAllocator.addData(node2, false, \"MATCH\", new StoreFileMetaData(\"file1\", 10, \"MATCH_CHECKSUM\"));\n- AllocationService.updateLeftDelayOfUnassignedShards(allocation, Settings.EMPTY);\n changed = testAllocator.allocateUnassigned(allocation);\n assertThat(changed, equalTo(true));\n assertThat(allocation.routingNodes().shardsWithState(ShardRoutingState.INITIALIZING).size(), equalTo(1));\n@@ -290,11 +288,15 @@ private RoutingAllocation onePrimaryOnNode1And1Replica(AllocationDeciders decide\n .numberOfShards(1).numberOfReplicas(1)\n .putActiveAllocationIds(0, Sets.newHashSet(primaryShard.allocationId().getId())))\n .build();\n+ // mark shard as delayed if reason is NODE_LEFT\n+ boolean delayed = reason == UnassignedInfo.Reason.NODE_LEFT &&\n+ UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.get(settings).nanos() > 0;\n RoutingTable routingTable = RoutingTable.builder()\n .add(IndexRoutingTable.builder(shardId.getIndex())\n .addIndexShard(new IndexShardRoutingTable.Builder(shardId)\n .addShard(primaryShard)\n- .addShard(ShardRouting.newUnassigned(shardId, null, false, new UnassignedInfo(reason, null)))\n+ .addShard(ShardRouting.newUnassigned(shardId, null, false,\n+ new UnassignedInfo(reason, null, null, 0, System.nanoTime(), System.currentTimeMillis(), delayed)))\n .build())\n )\n .build();", "filename": 
"core/src/test/java/org/elasticsearch/gateway/ReplicaShardAllocatorTests.java", "status": "modified" }, { "diff": "@@ -196,7 +196,7 @@ public Decision canAllocate(RoutingNode node, RoutingAllocation allocation) {\n /** A lock {@link AllocationService} allowing tests to override time */\n protected static class MockAllocationService extends AllocationService {\n \n- private Long nanoTimeOverride = null;\n+ private volatile long nanoTimeOverride = -1L;\n \n public MockAllocationService(Settings settings, AllocationDeciders allocationDeciders, GatewayAllocator gatewayAllocator,\n ShardsAllocator shardsAllocator, ClusterInfoService clusterInfoService) {\n@@ -209,7 +209,7 @@ public void setNanoTimeOverride(long nanoTime) {\n \n @Override\n protected long currentNanoTime() {\n- return nanoTimeOverride == null ? super.currentNanoTime() : nanoTimeOverride;\n+ return nanoTimeOverride == -1L ? super.currentNanoTime() : nanoTimeOverride;\n }\n }\n \n@@ -238,16 +238,15 @@ public void applyFailedShards(FailedRerouteAllocation allocation) {}\n @Override\n public boolean allocateUnassigned(RoutingAllocation allocation) {\n final RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = allocation.routingNodes().unassigned().iterator();\n- boolean changed = false;\n while (unassignedIterator.hasNext()) {\n ShardRouting shard = unassignedIterator.next();\n IndexMetaData indexMetaData = allocation.metaData().index(shard.getIndexName());\n if (shard.primary() || shard.allocatedPostIndexCreate(indexMetaData) == false) {\n continue;\n }\n- changed |= replicaShardAllocator.ignoreUnassignedIfDelayed(unassignedIterator, shard);\n+ replicaShardAllocator.ignoreUnassignedIfDelayed(unassignedIterator, shard);\n }\n- return changed;\n+ return false;\n }\n }\n }", "filename": "test/framework/src/main/java/org/elasticsearch/test/ESAllocationTestCase.java", "status": "modified" } ] }
{ "body": "The compound processor is used internally in ingest when handling failures. As I understand though it should be an internal concept that users don't know about. If you define a processor that fails, which has an `on_failure` block defined, which fails again, and the pipeline has its own `on_failure`, the metadata about the failure itself will be referring to the compound processor rather than the one that really failed.\n\n```\nGET _ingest/pipeline/_simulate\n{\n \"pipeline\" : {\n \"processors\" : [\n {\n \"convert\" : {\n \"field\": \"doesnotexist\",\n \"type\" : \"integer\",\n \"on_failure\" : [\n {\n \"fail\" : {\n \"message\" : \"custom error message\"\n } \n }\n ]\n }\n }\n ],\n \"on_failure\" : [\n {\n \"set\" : {\n \"field\" : \"failure\",\n \"value\" : {\n \"message\" : \"{{_ingest.on_failure_message}}\",\n \"processor\" : \"{{_ingest.on_failure_processor_type}}\",\n \"tag\" : \"{{_ingest.on_failure_processor_tag}}\"\n }\n }\n }\n ]\n },\n \"docs\" : [\n {\n \"_source\" : {\n \"message\" : \"test\"\n }\n }\n ]\n}\n```\n\n```\n{\n \"docs\": [\n {\n \"doc\": {\n \"_type\": \"_type\",\n \"_index\": \"_index\",\n \"_id\": \"_id\",\n \"_source\": {\n \"message\": \"test\",\n \"failure\": {\n \"tag\": \"CompoundProcessor-null-null\",\n \"message\": \"custom error message\",\n \"processor\": \"compound\"\n }\n },\n \"_ingest\": {\n \"timestamp\": \"2016-04-18T13:20:28.981+0000\"\n }\n }\n }\n ]\n}\n```\n\nThe `tag` and the `processor` that goes in the document as part of the failure object are a bit off, they should be coming from the fail processor, which caused the failure. That information gets lost with the double wrapping.\n\nHere is also a failing unit test to add to `CompoundProcessorTests` that reproduces the problem:\n\n```\npublic void testFail() throws Exception {\n TestProcessor firstProcessor = new TestProcessor(\"id1\", \"first\", ingestDocument -> {throw new RuntimeException(\"error\");});\n FailProcessor failProcessor = new FailProcessor(\"tag_fail\", new TestTemplateService.MockTemplate(\"custom error message\"));\n TestProcessor secondProcessor = new TestProcessor(\"id2\", \"second\", ingestDocument -> {\n Map<String, String> ingestMetadata = ingestDocument.getIngestMetadata();\n assertThat(ingestMetadata.size(), equalTo(3));\n assertThat(ingestMetadata.get(CompoundProcessor.ON_FAILURE_MESSAGE_FIELD), equalTo(\"custom error message\"));\n assertThat(ingestMetadata.get(CompoundProcessor.ON_FAILURE_PROCESSOR_TYPE_FIELD), equalTo(\"fail\"));\n assertThat(ingestMetadata.get(CompoundProcessor.ON_FAILURE_PROCESSOR_TAG_FIELD), equalTo(\"tag_fail\"));\n });\n\n CompoundProcessor failCompoundProcessor = new CompoundProcessor(Collections.singletonList(firstProcessor),\n Collections.singletonList(failProcessor));\n\n CompoundProcessor compoundProcessor = new CompoundProcessor(Collections.singletonList(failCompoundProcessor),\n Collections.singletonList(secondProcessor));\n compoundProcessor.execute(ingestDocument);\n\n assertThat(firstProcessor.getInvokedCounter(), equalTo(1));\n assertThat(secondProcessor.getInvokedCounter(), equalTo(1));\n}\n```\n", "comments": [ { "body": "@martijnvg @talevy I bumped into this while preparing a demo, I looked into it but I didn't manage to find a quick solution, tricky.\n", "created_at": "2016-04-18T14:32:01Z" }, { "body": "Right, any exception that gets bubbled up to a higher level (in this case the root processor level) we fetch the ingest `tag` and `processor` attributes.\n\nI think we need to solve this differently. 
Just like in the processor factories, we always throw an error with the processor tag, processor type and property name present. I think we should do the same for exceptions being thrown at ingest preprocess time and then, instead of fetching tag and type from the current processor in `CompoundProcessor`, we extract that information from the exception itself.\n", "created_at": "2016-04-18T15:40:11Z" }, { "body": "I am aware of this issue. I had a fork of this feature that hid the `CompoundProcessor` away, but it made for really ugly non-reusable code. I will try at it again.\n", "created_at": "2016-04-19T14:57:41Z" } ], "number": 17823, "title": "Ingest failure metadata fields refer to compound processor" }
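Below is a simplified, standalone sketch of the approach proposed in the last comment: let the exception itself carry the failing processor's type and tag, so that outer CompoundProcessors can rethrow it without losing attribution. The `ProcessorException` class, its fields, and the `main` demo are invented for illustration; the actual fix in the PR that follows attaches this information as headers on ElasticsearchException.

```java
import java.util.Objects;

// Stand-in exception type: carries the type/tag of the processor that originally failed.
final class ProcessorException extends RuntimeException {
    final String processorType;
    final String processorTag;

    ProcessorException(Exception cause, String processorType, String processorTag) {
        super(cause);
        this.processorType = processorType;
        this.processorTag = processorTag;
    }

    /** Wrap once: if the failure already names a processor, keep the original attribution. */
    static ProcessorException wrap(Exception e, String type, String tag) {
        if (e instanceof ProcessorException) {
            return (ProcessorException) e;
        }
        return new ProcessorException(e, type, tag);
    }

    /** Message of the innermost cause, i.e. what the failing processor actually reported. */
    String rootMessage() {
        Throwable t = this;
        while (t.getCause() != null) {
            t = t.getCause();
        }
        return Objects.toString(t.getMessage(), "");
    }

    public static void main(String[] args) {
        Exception original = wrap(new RuntimeException("custom error message"), "fail", "tag_fail");
        // an outer compound processor rethrows; the original attribution is preserved
        ProcessorException rewrapped = wrap(original, "compound", null);
        System.out.println(rewrapped.processorType + " / " + rewrapped.processorTag + " / " + rewrapped.rootMessage());
        // prints: fail / tag_fail / custom error message
    }
}
```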
{ "body": "Fixes #17823\n", "number": 18342, "review_comments": [ { "body": "This test originally exposed the bug, and let it pass. no more!\n", "created_at": "2016-05-13T22:06:11Z" }, { "body": "actually, this needs to handle `CompoundProcessorException` separately as well\n", "created_at": "2016-05-13T22:17:59Z" }, { "body": "I'm not sure this method name is right. Maybe I just don't know this code that well though.\n", "created_at": "2016-05-16T16:38:51Z" }, { "body": "Yup. I just don't know it well enough. Disregard.\n", "created_at": "2016-05-16T16:39:48Z" }, { "body": "I like `hasSize` better for this because it gives a nicer error message on failure.\n", "created_at": "2016-05-16T16:40:52Z" }, { "body": "haha, I don't blame you. I had a really awkward time trying to name all of these subtly different scenarios\n", "created_at": "2016-05-16T18:24:50Z" }, { "body": "noted. will update\n", "created_at": "2016-05-16T18:27:53Z" }, { "body": "This was updated so that we do not expose `CompoundProcessorException` outside of the Ingest Pipeline.\n", "created_at": "2016-05-17T16:56:10Z" } ], "title": "Expose underlying processor to blame for thrown exception within CompoundProcessor" }
{ "commits": [ { "message": "Expose underlying processor to blame for thrown exception within CompoundProcessor\n\nFixes #17823" } ], "files": [ { "diff": "@@ -20,14 +20,13 @@\n \n package org.elasticsearch.ingest.core;\n \n-import org.elasticsearch.common.util.iterable.Iterables;\n+import org.elasticsearch.ElasticsearchException;\n \n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collections;\n import java.util.List;\n import java.util.Map;\n-import java.util.Objects;\n import java.util.stream.Collectors;\n \n /**\n@@ -94,30 +93,38 @@ public void execute(IngestDocument ingestDocument) throws Exception {\n try {\n processor.execute(ingestDocument);\n } catch (Exception e) {\n+ ElasticsearchException compoundProcessorException = newCompoundProcessorException(e, processor.getType(), processor.getTag());\n if (onFailureProcessors.isEmpty()) {\n- throw e;\n+ throw compoundProcessorException;\n } else {\n- executeOnFailure(ingestDocument, e, processor.getType(), processor.getTag());\n+ executeOnFailure(ingestDocument, compoundProcessorException);\n }\n- break;\n }\n }\n }\n \n- void executeOnFailure(IngestDocument ingestDocument, Exception cause, String failedProcessorType, String failedProcessorTag) throws Exception {\n+ void executeOnFailure(IngestDocument ingestDocument, ElasticsearchException exception) throws Exception {\n try {\n- putFailureMetadata(ingestDocument, cause, failedProcessorType, failedProcessorTag);\n+ putFailureMetadata(ingestDocument, exception);\n for (Processor processor : onFailureProcessors) {\n- processor.execute(ingestDocument);\n+ try {\n+ processor.execute(ingestDocument);\n+ } catch (Exception e) {\n+ throw newCompoundProcessorException(e, processor.getType(), processor.getTag());\n+ }\n }\n } finally {\n removeFailureMetadata(ingestDocument);\n }\n }\n \n- private void putFailureMetadata(IngestDocument ingestDocument, Exception cause, String failedProcessorType, String failedProcessorTag) {\n+ private void putFailureMetadata(IngestDocument ingestDocument, ElasticsearchException cause) {\n+ List<String> processorTypeHeader = cause.getHeader(\"processor_type\");\n+ List<String> processorTagHeader = cause.getHeader(\"processor_tag\");\n+ String failedProcessorType = (processorTypeHeader != null) ? processorTypeHeader.get(0) : null;\n+ String failedProcessorTag = (processorTagHeader != null) ? 
processorTagHeader.get(0) : null;\n Map<String, String> ingestMetadata = ingestDocument.getIngestMetadata();\n- ingestMetadata.put(ON_FAILURE_MESSAGE_FIELD, cause.getMessage());\n+ ingestMetadata.put(ON_FAILURE_MESSAGE_FIELD, cause.getRootCause().getMessage());\n ingestMetadata.put(ON_FAILURE_PROCESSOR_TYPE_FIELD, failedProcessorType);\n ingestMetadata.put(ON_FAILURE_PROCESSOR_TAG_FIELD, failedProcessorTag);\n }\n@@ -128,4 +135,21 @@ private void removeFailureMetadata(IngestDocument ingestDocument) {\n ingestMetadata.remove(ON_FAILURE_PROCESSOR_TYPE_FIELD);\n ingestMetadata.remove(ON_FAILURE_PROCESSOR_TAG_FIELD);\n }\n+\n+ private ElasticsearchException newCompoundProcessorException(Exception e, String processorType, String processorTag) {\n+ if (e instanceof ElasticsearchException && ((ElasticsearchException)e).getHeader(\"processor_type\") != null) {\n+ return (ElasticsearchException) e;\n+ }\n+\n+ ElasticsearchException exception = new ElasticsearchException(new IllegalArgumentException(e));\n+\n+ if (processorType != null) {\n+ exception.addHeader(\"processor_type\", processorType);\n+ }\n+ if (processorTag != null) {\n+ exception.addHeader(\"processor_tag\", processorTag);\n+ }\n+\n+ return exception;\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/ingest/core/CompoundProcessor.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.ingest;\n \n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.ingest.RandomDocumentPicks;\n import org.elasticsearch.ingest.TestProcessor;\n@@ -167,7 +168,8 @@ public void testExecuteItemWithFailure() throws Exception {\n SimulateDocumentBaseResult simulateDocumentBaseResult = (SimulateDocumentBaseResult) actualItemResponse;\n assertThat(simulateDocumentBaseResult.getIngestDocument(), nullValue());\n assertThat(simulateDocumentBaseResult.getFailure(), instanceOf(RuntimeException.class));\n- RuntimeException runtimeException = (RuntimeException) simulateDocumentBaseResult.getFailure();\n- assertThat(runtimeException.getMessage(), equalTo(\"processor failed\"));\n+ Exception exception = simulateDocumentBaseResult.getFailure();\n+ assertThat(exception, instanceOf(ElasticsearchException.class));\n+ assertThat(exception.getMessage(), equalTo(\"java.lang.IllegalArgumentException: java.lang.RuntimeException: processor failed\"));\n }\n }", "filename": "core/src/test/java/org/elasticsearch/action/ingest/SimulateExecutionServiceTests.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.ingest;\n \n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.bulk.BulkItemResponse;\n@@ -154,7 +155,8 @@ public void testBulkWithIngestFailures() throws Exception {\n BulkItemResponse itemResponse = response.getItems()[i];\n if (i % 2 == 0) {\n BulkItemResponse.Failure failure = itemResponse.getFailure();\n- assertThat(failure.getMessage(), equalTo(\"java.lang.IllegalArgumentException: test processor failed\"));\n+ ElasticsearchException compoundProcessorException = (ElasticsearchException) failure.getCause();\n+ assertThat(compoundProcessorException.getRootCause().getMessage(), equalTo(\"test processor failed\"));\n } else {\n IndexResponse indexResponse = itemResponse.getResponse();\n assertThat(\"Expected a successful response but found failure [\" + itemResponse.getFailure() + 
\"].\",", "filename": "core/src/test/java/org/elasticsearch/ingest/IngestClientIT.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.ingest;\n \n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.action.ActionRequest;\n import org.elasticsearch.action.bulk.BulkRequest;\n@@ -188,6 +189,8 @@ public void testExecuteFailure() throws Exception {\n \n public void testExecuteSuccessWithOnFailure() throws Exception {\n Processor processor = mock(Processor.class);\n+ when(processor.getType()).thenReturn(\"mock_processor_type\");\n+ when(processor.getTag()).thenReturn(\"mock_processor_tag\");\n Processor onFailureProcessor = mock(Processor.class);\n CompoundProcessor compoundProcessor = new CompoundProcessor(Collections.singletonList(processor), Collections.singletonList(new CompoundProcessor(onFailureProcessor)));\n when(store.get(\"_id\")).thenReturn(new Pipeline(\"_id\", \"_description\", compoundProcessor));\n@@ -198,7 +201,7 @@ public void testExecuteSuccessWithOnFailure() throws Exception {\n @SuppressWarnings(\"unchecked\")\n Consumer<Boolean> completionHandler = mock(Consumer.class);\n executionService.executeIndexRequest(indexRequest, failureHandler, completionHandler);\n- verify(failureHandler, never()).accept(any(RuntimeException.class));\n+ verify(failureHandler, never()).accept(any(ElasticsearchException.class));\n verify(completionHandler, times(1)).accept(true);\n }\n ", "filename": "core/src/test/java/org/elasticsearch/ingest/PipelineExecutionServiceTests.java", "status": "modified" }, { "diff": "@@ -19,21 +19,17 @@\n \n package org.elasticsearch.ingest.core;\n \n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ingest.TestProcessor;\n-import org.elasticsearch.ingest.TestTemplateService;\n-import org.elasticsearch.ingest.processor.AppendProcessor;\n-import org.elasticsearch.ingest.processor.SetProcessor;\n-import org.elasticsearch.ingest.processor.SplitProcessor;\n import org.elasticsearch.test.ESTestCase;\n import org.junit.Before;\n \n-import java.util.Arrays;\n import java.util.Collections;\n import java.util.HashMap;\n-import java.util.List;\n import java.util.Map;\n \n import static org.hamcrest.CoreMatchers.equalTo;\n+import static org.hamcrest.Matchers.hasSize;\n import static org.hamcrest.Matchers.is;\n \n public class CompoundProcessorTests extends ESTestCase {\n@@ -70,8 +66,8 @@ public void testSingleProcessorWithException() throws Exception {\n try {\n compoundProcessor.execute(ingestDocument);\n fail(\"should throw exception\");\n- } catch (Exception e) {\n- assertThat(e.getMessage(), equalTo(\"error\"));\n+ } catch (ElasticsearchException e) {\n+ assertThat(e.getRootCause().getMessage(), equalTo(\"error\"));\n }\n assertThat(processor.getInvokedCounter(), equalTo(1));\n }\n@@ -117,4 +113,68 @@ public void testSingleProcessorWithNestedFailures() throws Exception {\n assertThat(processorToFail.getInvokedCounter(), equalTo(1));\n assertThat(lastProcessor.getInvokedCounter(), equalTo(1));\n }\n+\n+ public void testCompoundProcessorExceptionFailWithoutOnFailure() throws Exception {\n+ TestProcessor firstProcessor = new TestProcessor(\"id1\", \"first\", ingestDocument -> {throw new RuntimeException(\"error\");});\n+ TestProcessor secondProcessor = new TestProcessor(\"id3\", \"second\", ingestDocument -> {\n+ Map<String, String> ingestMetadata = ingestDocument.getIngestMetadata();\n+ assertThat(ingestMetadata.entrySet(), 
hasSize(3));\n+ assertThat(ingestMetadata.get(CompoundProcessor.ON_FAILURE_MESSAGE_FIELD), equalTo(\"error\"));\n+ assertThat(ingestMetadata.get(CompoundProcessor.ON_FAILURE_PROCESSOR_TYPE_FIELD), equalTo(\"first\"));\n+ assertThat(ingestMetadata.get(CompoundProcessor.ON_FAILURE_PROCESSOR_TAG_FIELD), equalTo(\"id1\"));\n+ });\n+\n+ CompoundProcessor failCompoundProcessor = new CompoundProcessor(firstProcessor);\n+\n+ CompoundProcessor compoundProcessor = new CompoundProcessor(Collections.singletonList(failCompoundProcessor),\n+ Collections.singletonList(secondProcessor));\n+ compoundProcessor.execute(ingestDocument);\n+\n+ assertThat(firstProcessor.getInvokedCounter(), equalTo(1));\n+ assertThat(secondProcessor.getInvokedCounter(), equalTo(1));\n+ }\n+\n+ public void testCompoundProcessorExceptionFail() throws Exception {\n+ TestProcessor firstProcessor = new TestProcessor(\"id1\", \"first\", ingestDocument -> {throw new RuntimeException(\"error\");});\n+ TestProcessor failProcessor = new TestProcessor(\"tag_fail\", \"fail\", ingestDocument -> {throw new RuntimeException(\"custom error message\");});\n+ TestProcessor secondProcessor = new TestProcessor(\"id3\", \"second\", ingestDocument -> {\n+ Map<String, String> ingestMetadata = ingestDocument.getIngestMetadata();\n+ assertThat(ingestMetadata.entrySet(), hasSize(3));\n+ assertThat(ingestMetadata.get(CompoundProcessor.ON_FAILURE_MESSAGE_FIELD), equalTo(\"custom error message\"));\n+ assertThat(ingestMetadata.get(CompoundProcessor.ON_FAILURE_PROCESSOR_TYPE_FIELD), equalTo(\"fail\"));\n+ assertThat(ingestMetadata.get(CompoundProcessor.ON_FAILURE_PROCESSOR_TAG_FIELD), equalTo(\"tag_fail\"));\n+ });\n+\n+ CompoundProcessor failCompoundProcessor = new CompoundProcessor(Collections.singletonList(firstProcessor),\n+ Collections.singletonList(failProcessor));\n+\n+ CompoundProcessor compoundProcessor = new CompoundProcessor(Collections.singletonList(failCompoundProcessor),\n+ Collections.singletonList(secondProcessor));\n+ compoundProcessor.execute(ingestDocument);\n+\n+ assertThat(firstProcessor.getInvokedCounter(), equalTo(1));\n+ assertThat(secondProcessor.getInvokedCounter(), equalTo(1));\n+ }\n+\n+ public void testCompoundProcessorExceptionFailInOnFailure() throws Exception {\n+ TestProcessor firstProcessor = new TestProcessor(\"id1\", \"first\", ingestDocument -> {throw new RuntimeException(\"error\");});\n+ TestProcessor failProcessor = new TestProcessor(\"tag_fail\", \"fail\", ingestDocument -> {throw new RuntimeException(\"custom error message\");});\n+ TestProcessor secondProcessor = new TestProcessor(\"id3\", \"second\", ingestDocument -> {\n+ Map<String, String> ingestMetadata = ingestDocument.getIngestMetadata();\n+ assertThat(ingestMetadata.entrySet(), hasSize(3));\n+ assertThat(ingestMetadata.get(CompoundProcessor.ON_FAILURE_MESSAGE_FIELD), equalTo(\"custom error message\"));\n+ assertThat(ingestMetadata.get(CompoundProcessor.ON_FAILURE_PROCESSOR_TYPE_FIELD), equalTo(\"fail\"));\n+ assertThat(ingestMetadata.get(CompoundProcessor.ON_FAILURE_PROCESSOR_TAG_FIELD), equalTo(\"tag_fail\"));\n+ });\n+\n+ CompoundProcessor failCompoundProcessor = new CompoundProcessor(Collections.singletonList(firstProcessor),\n+ Collections.singletonList(new CompoundProcessor(failProcessor)));\n+\n+ CompoundProcessor compoundProcessor = new CompoundProcessor(Collections.singletonList(failCompoundProcessor),\n+ Collections.singletonList(secondProcessor));\n+ compoundProcessor.execute(ingestDocument);\n+\n+ assertThat(firstProcessor.getInvokedCounter(), 
equalTo(1));\n+ assertThat(secondProcessor.getInvokedCounter(), equalTo(1));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/ingest/core/CompoundProcessorTests.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.ingest.processor;\n \n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.ingest.SimulateProcessorResult;\n import org.elasticsearch.ingest.TestProcessor;\n import org.elasticsearch.ingest.core.CompoundProcessor;\n@@ -73,8 +74,9 @@ public void testActualCompoundProcessorWithoutOnFailure() throws Exception {\n \n try {\n trackingProcessor.execute(ingestDocument);\n- } catch (Exception e) {\n- assertThat(e.getMessage(), equalTo(exception.getMessage()));\n+ fail(\"processor should throw exception\");\n+ } catch (ElasticsearchException e) {\n+ assertThat(e.getRootCause().getMessage(), equalTo(exception.getMessage()));\n }\n \n SimulateProcessorResult expectedFirstResult = new SimulateProcessorResult(testProcessor.getTag(), ingestDocument);\n@@ -121,8 +123,8 @@ public void testActualCompoundProcessorWithOnFailure() throws Exception {\n \n metadata = resultList.get(3).getIngestDocument().getIngestMetadata();\n assertThat(metadata.get(ON_FAILURE_MESSAGE_FIELD), equalTo(\"fail\"));\n- assertThat(metadata.get(ON_FAILURE_PROCESSOR_TYPE_FIELD), equalTo(\"compound\"));\n- assertThat(metadata.get(ON_FAILURE_PROCESSOR_TAG_FIELD), equalTo(\"CompoundProcessor-fail-success-success-fail\"));\n+ assertThat(metadata.get(ON_FAILURE_PROCESSOR_TYPE_FIELD), equalTo(\"test\"));\n+ assertThat(metadata.get(ON_FAILURE_PROCESSOR_TAG_FIELD), equalTo(\"fail\"));\n assertThat(resultList.get(3).getFailure(), nullValue());\n assertThat(resultList.get(3).getProcessorTag(), equalTo(expectedSuccessResult.getProcessorTag()));\n }", "filename": "core/src/test/java/org/elasticsearch/ingest/processor/TrackingResultProcessorTests.java", "status": "modified" } ] }
{ "body": "Index level setting is not allowed in elastic search.yml with 5.0.0-alpha2.\n\nAnd with _setting or _template API I got:\n{\n\"error\": {\n\"root_cause\": [\n{\n\"type\": \"illegal_argument_exception\",\n\"reason\": \"unknown setting [index.query.bool.max_clause_count]\"\n}\n],\n\"type\": \"illegal_argument_exception\",\n\"reason\": \"unknown setting [index.query.bool.max_clause_count]\"\n},\n\"status\": 400\n}\n\nIs this a bug? Any work-around?\n", "comments": [ { "body": "this is a bug - this setting has not been registered! good catch thanks for reporting this!!!\n", "created_at": "2016-05-13T19:35:29Z" } ], "number": 18336, "title": "index.query.bool.max_clause_count can not be set" }
{ "body": "This commit registers `indices.query.bool.max_clause_count` as a node\nlevel setting and removes support for its synonym setting\n`index.query.bool.max_clause_count`.\n\nCloses #18336\n", "number": 18341, "review_comments": [], "title": "Register `indices.query.bool.max_clause_count` setting" }
{ "commits": [ { "message": "Register `indices.query.bool.max_clause_count` setting\n\nThis commit registers `indices.query.bool.max_clause_count` as a node\nlevel setting and removes support for its synonym setting\n`index.query.bool.max_clause_count`.\n\nCloses #18336" }, { "message": "Enable settings DYM for unknown setting [index.query.bool.max_clause_count]" }, { "message": "remove unintentional deprecation" }, { "message": "fix line length" } ], "files": [ { "diff": "@@ -87,6 +87,7 @@\n import org.elasticsearch.repositories.uri.URLRepository;\n import org.elasticsearch.rest.BaseRestHandler;\n import org.elasticsearch.script.ScriptService;\n+import org.elasticsearch.search.SearchModule;\n import org.elasticsearch.search.SearchService;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.Transport;\n@@ -417,6 +418,7 @@ public void apply(Settings value, Settings current, Settings previous) {\n ResourceWatcherService.ENABLED,\n ResourceWatcherService.RELOAD_INTERVAL_HIGH,\n ResourceWatcherService.RELOAD_INTERVAL_MEDIUM,\n- ResourceWatcherService.RELOAD_INTERVAL_LOW\n+ ResourceWatcherService.RELOAD_INTERVAL_LOW,\n+ SearchModule.INDICES_MAX_CLAUSE_COUNT_SETTING\n )));\n }", "filename": "core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java", "status": "modified" }, { "diff": "@@ -65,7 +65,12 @@ public SettingsModule(Settings settings) {\n protected void configure() {\n final IndexScopedSettings indexScopedSettings = new IndexScopedSettings(settings, new HashSet<>(this.indexSettings.values()));\n final ClusterSettings clusterSettings = new ClusterSettings(settings, new HashSet<>(this.nodeSettings.values()));\n- Settings indexSettings = settings.filter((s) -> s.startsWith(\"index.\") && clusterSettings.get(s) == null);\n+ Settings indexSettings = settings.filter((s) -> (s.startsWith(\"index.\") &&\n+ // special case - we want to get Did you mean indices.query.bool.max_clause_count\n+ // which means we need to by-pass this check for this setting\n+ // TODO remove in 6.0!!\n+ \"index.query.bool.max_clause_count\".equals(s) == false)\n+ && clusterSettings.get(s) == null);\n if (indexSettings.isEmpty() == false) {\n try {\n String separator = IntStream.range(0, 85).mapToObj(s -> \"*\").collect(Collectors.joining(\"\")).trim();", "filename": "core/src/main/java/org/elasticsearch/common/settings/SettingsModule.java", "status": "modified" }, { "diff": "@@ -29,6 +29,7 @@\n import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n import org.elasticsearch.common.io.stream.Writeable;\n import org.elasticsearch.common.lucene.search.function.ScoreFunction;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.ParseFieldRegistry;\n import org.elasticsearch.index.percolator.PercolatorHighlightSubFetchPhase;\n@@ -290,6 +291,8 @@ public class SearchModule extends AbstractModule {\n \n private final Settings settings;\n private final NamedWriteableRegistry namedWriteableRegistry;\n+ public static final Setting<Integer> INDICES_MAX_CLAUSE_COUNT_SETTING = Setting.intSetting(\"indices.query.bool.max_clause_count\",\n+ 1024, 1, Integer.MAX_VALUE, Setting.Property.NodeScope);\n \n // pkg private so tests can mock\n Class<? 
extends SearchService> searchServiceImpl = SearchService.class;\n@@ -650,8 +653,7 @@ private void registerBuiltinQueryParsers() {\n registerQuery(MatchAllQueryBuilder::new, MatchAllQueryBuilder::fromXContent, MatchAllQueryBuilder.QUERY_NAME_FIELD);\n registerQuery(QueryStringQueryBuilder::new, QueryStringQueryBuilder::fromXContent, QueryStringQueryBuilder.QUERY_NAME_FIELD);\n registerQuery(BoostingQueryBuilder::new, BoostingQueryBuilder::fromXContent, BoostingQueryBuilder.QUERY_NAME_FIELD);\n- BooleanQuery.setMaxClauseCount(settings.getAsInt(\"index.query.bool.max_clause_count\",\n- settings.getAsInt(\"indices.query.bool.max_clause_count\", BooleanQuery.getMaxClauseCount())));\n+ BooleanQuery.setMaxClauseCount(INDICES_MAX_CLAUSE_COUNT_SETTING.get(settings));\n registerQuery(BoolQueryBuilder::new, BoolQueryBuilder::fromXContent, BoolQueryBuilder.QUERY_NAME_FIELD);\n registerQuery(TermQueryBuilder::new, TermQueryBuilder::fromXContent, TermQueryBuilder.QUERY_NAME_FIELD);\n registerQuery(TermsQueryBuilder::new, TermsQueryBuilder::fromXContent, TermsQueryBuilder.QUERY_NAME_FIELD);", "filename": "core/src/main/java/org/elasticsearch/search/SearchModule.java", "status": "modified" }, { "diff": "@@ -208,4 +208,13 @@ public void testMutuallyExclusiveScopes() {\n assertThat(e.getMessage(), containsString(\"Cannot register setting [foo.bar] twice\"));\n }\n }\n+\n+ public void testOldMaxClauseCountSetting() {\n+ Settings settings = Settings.builder().put(\"index.query.bool.max_clause_count\", 1024).build();\n+ SettingsModule module = new SettingsModule(settings);\n+ IllegalArgumentException ex = expectThrows(IllegalArgumentException.class,\n+ () -> assertInstanceBinding(module, Settings.class, (s) -> s == settings));\n+ assertEquals(\"unknown setting [index.query.bool.max_clause_count] did you mean [indices.query.bool.max_clause_count]?\",\n+ ex.getMessage());\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/common/settings/SettingsModuleTests.java", "status": "modified" }, { "diff": "@@ -258,3 +258,9 @@ Previously script mode settings (e.g., \"script.inline: true\",\n Prior to 5.0 a third option could be specified for the `script.inline` and\n `script.stored` settings (\"sandbox\"). This has been removed, You can now only\n set `script.line: true` or `script.stored: true`.\n+\n+==== Search settings\n+\n+The setting `index.query.bool.max_clause_count` has been removed. In order to\n+set the maximum number of boolean clauses `indices.query.bool.max_clause_count`\n+should be used instead.", "filename": "docs/reference/migration/migrate_5_0/settings.asciidoc", "status": "modified" } ] }
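As a usage note for the change above: the setting is now read once at the node level, defaults to 1024 and must be at least 1, so `indices.query.bool.max_clause_count: 4096` in elasticsearch.yml is accepted, while the old `index.query.bool.max_clause_count` key is rejected with a "did you mean" hint. The sketch below only mirrors that bounds check for a raw configuration value; it is not the Elasticsearch Setting API, and the class, method, and message text are made up for illustration.

```java
// Illustrative only: mirrors the default and lower bound of the registered node-level setting.
public final class MaxClauseCountSketch {

    static final String KEY = "indices.query.bool.max_clause_count";
    static final int DEFAULT = 1024;
    static final int MIN = 1;

    static int parseMaxClauseCount(String rawValue) {
        if (rawValue == null || rawValue.isEmpty()) {
            return DEFAULT; // setting not present, fall back to the default
        }
        int value = Integer.parseInt(rawValue.trim());
        if (value < MIN) {
            throw new IllegalArgumentException("setting [" + KEY + "] must be >= " + MIN + ", got " + value);
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(parseMaxClauseCount(null));   // 1024 (default)
        System.out.println(parseMaxClauseCount("4096")); // 4096
        // parseMaxClauseCount("0") would throw IllegalArgumentException
    }
}
```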
{ "body": "**Elasticsearch version**: `2.3.2`\n**JVM version**: `1.8`\n**OS version**: not relevant\n\n**Description of the problem including expected versus actual behavior**:\n- Actual behavior\n\nFollowing [CORS implementation refactoring changes](https://github.com/elastic/elasticsearch/pull/16436) when a request contains an `Origin` header, it is considered as a cross-origin request and `CORS` filtering is applied based on configuration. \n\nThis prevents the [head plugin](https://github.com/mobz/elasticsearch-head), if `CORS` is not enabled or not configured to enable a specific origin, to work with `Chrome` or `Safari` as [they add an origin header for every POST request](http://stackoverflow.com/a/15514049) \n\nMore specifically `POST requests` from those browsers will be returned `403` (`Forbidden`) \n- Expected behavior \n\nAccording to [RFC 6454](https://tools.ietf.org/html/rfc6454#section-7.3):\n\n> The user agent MAY include an Origin header field in any HTTP request.\n\n[CorsHandler](https://github.com/elastic/elasticsearch/blob/2.3/core/src/main/java/org/elasticsearch/http/netty/cors/CorsHandler.java) should check the `Host` header, and if it matches the domain in the `Origin` header, don't treat the request as a cross-origin request, and so allow it to be performed. \n\n**Steps to reproduce**:\n- 1 - Disable `CORS` in elasticsearch configuration\n\n``` yaml\nhttp.cors.enabled: false\n```\n- 2 - Send a `same origin POST` request with an `Origin` header \n\n```\ncurl -H \"Origin: http://localhost:9200\" -X POST http://localhost:9200/_all/_search\n```\n", "comments": [ { "body": "@mchv thank you for reporting! Fixed by commit https://github.com/elastic/elasticsearch/commit/d3f205b4a2c6e61810e818de3be83164cd2b69a0\n\nIt will be part of 2.3.3\n", "created_at": "2016-05-11T19:23:02Z" } ], "number": 18256, "title": "CORS filtering applied for same origin requests" }
{ "body": "When CORS is enabled, permit requests from the same origin as the request host, as the request is not a cross origin.\n\nCloses #18256 \n", "number": 18278, "review_comments": [ { "body": "My first thought was that the regex `^https?://` is probably faster. A simple benchmark showed it to be around 40% faster. My second thought was it's shame that we have to create so much garbage here (compiling a regex, and creating the substring). But we do not need to compile the regex every time, we can just create a static regex:\n\n``` diff\ndiff --git a/core/src/main/java/org/elasticsearch/http/netty/cors/CorsHandler.java b/core/src/main/java/org/elasticsearch/http/netty/cors/CorsHandler.java\nindex 4eb6323..2c8a575 100644\n--- a/core/src/main/java/org/elasticsearch/http/netty/cors/CorsHandler.java\n+++ b/core/src/main/java/org/elasticsearch/http/netty/cors/CorsHandler.java\n@@ -33,6 +33,7 @@ import org.jboss.netty.handler.codec.http.HttpResponse;\n\n import java.util.HashSet;\n import java.util.Set;\n+import java.util.regex.Pattern;\n\n import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.ACCESS_CONTROL_ALLOW_CREDENTIALS;\n import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.ACCESS_CONTROL_ALLOW_HEADERS;\n@@ -131,10 +132,12 @@ public class CorsHandler extends SimpleChannelUpstreamHandler {\n .addListener(ChannelFutureListener.CLOSE);\n }\n\n+\n+ private static Pattern PATTERN = Pattern.compile(\"^https?://\");\n private static boolean isSameOrigin(final String origin, final String host) {\n if (Strings.isNullOrEmpty(host) == false) {\n // strip protocol from origin\n- final String originDomain = origin.replaceFirst(\"(http|https)://\", \"\");\n+ final String originDomain = PATTERN.matcher(origin).replaceFirst(\"\");\n if (host.equals(originDomain)) {\n return true;\n }\n```\n\nThe same benchmark showed this to be around 250% faster and creates one less object.\n", "created_at": "2016-05-11T17:54:12Z" }, { "body": "@jasontedor this is great, i will incorporate and push up a new commit\n", "created_at": "2016-05-11T17:57:09Z" } ], "title": "CORS should permit same origin requests" }
{ "commits": [ { "message": "When CORS is enabled, permit requests from the same origin as the\nrequest host, as the request is not a cross origin.\n\nCloses #18256" }, { "message": "Improve regex efficiency" } ], "files": [ { "diff": "@@ -32,14 +32,15 @@\n import org.jboss.netty.handler.codec.http.HttpResponse;\n \n import java.util.HashSet;\n-import java.util.Iterator;\n import java.util.Set;\n+import java.util.regex.Pattern;\n \n import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.ACCESS_CONTROL_ALLOW_CREDENTIALS;\n import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.ACCESS_CONTROL_ALLOW_HEADERS;\n import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.ACCESS_CONTROL_ALLOW_METHODS;\n import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN;\n import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.ACCESS_CONTROL_MAX_AGE;\n+import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.HOST;\n import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.ORIGIN;\n import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.USER_AGENT;\n import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.VARY;\n@@ -57,6 +58,7 @@\n public class CorsHandler extends SimpleChannelUpstreamHandler {\n \n public static final String ANY_ORIGIN = \"*\";\n+ private static Pattern PATTERN = Pattern.compile(\"^https?://\");\n private final CorsConfig config;\n \n private HttpRequest request;\n@@ -98,7 +100,7 @@ public static void setCorsResponseHeaders(HttpRequest request, HttpResponse resp\n final String originHeaderVal;\n if (config.isAnyOriginSupported()) {\n originHeaderVal = ANY_ORIGIN;\n- } else if (config.isOriginAllowed(originHeader)) {\n+ } else if (config.isOriginAllowed(originHeader) || isSameOrigin(originHeader, request.headers().get(HOST))) {\n originHeaderVal = originHeader;\n } else {\n originHeaderVal = null;\n@@ -131,6 +133,17 @@ private static void forbidden(final ChannelHandlerContext ctx, final HttpRequest\n .addListener(ChannelFutureListener.CLOSE);\n }\n \n+ private static boolean isSameOrigin(final String origin, final String host) {\n+ if (Strings.isNullOrEmpty(host) == false) {\n+ // strip protocol from origin\n+ final String originDomain = PATTERN.matcher(origin).replaceFirst(\"\");\n+ if (host.equals(originDomain)) {\n+ return true;\n+ }\n+ }\n+ return false;\n+ }\n+\n /**\n * This is a non CORS specification feature which enables the setting of preflight\n * response headers that might be required by intermediaries.\n@@ -181,6 +194,11 @@ private boolean validateOrigin() {\n return true;\n }\n \n+ // if the origin is the same as the host of the request, then allow\n+ if (isSameOrigin(origin, request.headers().get(HOST))) {\n+ return true;\n+ }\n+\n return config.isOriginAllowed(origin);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/http/netty/cors/CorsHandler.java", "status": "modified" }, { "diff": "@@ -55,8 +55,6 @@\n \n public class NettyHttpChannelTests extends ESTestCase {\n \n- private static final String ORIGIN = \"remote-host\";\n-\n private NetworkService networkService;\n private ThreadPool threadPool;\n private MockBigArrays bigArrays;\n@@ -86,55 +84,93 @@ public void testCorsEnabledWithoutAllowOrigins() {\n Settings settings = Settings.builder()\n .put(NettyHttpServerTransport.SETTING_CORS_ENABLED, true)\n .build();\n- HttpResponse response = execRequestWithCors(settings, ORIGIN);\n+ HttpResponse response = execRequestWithCors(settings, 
\"remote-host\", \"request-host\");\n // inspect response and validate\n assertThat(response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN), nullValue());\n }\n \n+ @Test\n public void testCorsEnabledWithAllowOrigins() {\n- final String originValue = ORIGIN;\n+ final String originValue = \"remote-host\";\n // create a http transport with CORS enabled and allow origin configured\n Settings settings = Settings.builder()\n .put(SETTING_CORS_ENABLED, true)\n .put(SETTING_CORS_ALLOW_ORIGIN, originValue)\n .build();\n- HttpResponse response = execRequestWithCors(settings, originValue);\n+ HttpResponse response = execRequestWithCors(settings, originValue, \"request-host\");\n // inspect response and validate\n assertThat(response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN), notNullValue());\n String allowedOrigins = response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN);\n assertThat(allowedOrigins, is(originValue));\n }\n \n+ @Test\n+ public void testCorsAllowOriginWithSameHost() {\n+ String originValue = \"remote-host\";\n+ String host = \"remote-host\";\n+ // create a http transport with CORS enabled\n+ Settings settings = Settings.builder()\n+ .put(NettyHttpServerTransport.SETTING_CORS_ENABLED, true)\n+ .build();\n+ HttpResponse response = execRequestWithCors(settings, originValue, host);\n+ // inspect response and validate\n+ assertThat(response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN), notNullValue());\n+ String allowedOrigins = response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN);\n+ assertThat(allowedOrigins, is(originValue));\n+\n+ originValue = \"http://\" + originValue;\n+ response = execRequestWithCors(settings, originValue, host);\n+ assertThat(response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN), notNullValue());\n+ allowedOrigins = response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN);\n+ assertThat(allowedOrigins, is(originValue));\n+\n+ originValue = originValue + \":5555\";\n+ host = host + \":5555\";\n+ response = execRequestWithCors(settings, originValue, host);\n+ assertThat(response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN), notNullValue());\n+ allowedOrigins = response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN);\n+ assertThat(allowedOrigins, is(originValue));\n+\n+ originValue = originValue.replace(\"http\", \"https\");\n+ response = execRequestWithCors(settings, originValue, host);\n+ assertThat(response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN), notNullValue());\n+ allowedOrigins = response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN);\n+ assertThat(allowedOrigins, is(originValue));\n+ }\n+\n+ @Test\n public void testThatStringLiteralWorksOnMatch() {\n- final String originValue = ORIGIN;\n+ final String originValue = \"remote-host\";\n Settings settings = Settings.builder()\n .put(SETTING_CORS_ENABLED, true)\n .put(SETTING_CORS_ALLOW_ORIGIN, originValue)\n .put(SETTING_CORS_ALLOW_METHODS, \"get, options, post\")\n .put(SETTING_CORS_ALLOW_CREDENTIALS, true)\n .build();\n- HttpResponse response = execRequestWithCors(settings, originValue);\n+ HttpResponse response = execRequestWithCors(settings, originValue, \"request-host\");\n // inspect response and validate\n assertThat(response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN), notNullValue());\n String allowedOrigins = response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN);\n assertThat(allowedOrigins, 
is(originValue));\n assertThat(response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_CREDENTIALS), equalTo(\"true\"));\n }\n \n+ @Test\n public void testThatAnyOriginWorks() {\n final String originValue = CorsHandler.ANY_ORIGIN;\n Settings settings = Settings.builder()\n .put(SETTING_CORS_ENABLED, true)\n .put(SETTING_CORS_ALLOW_ORIGIN, originValue)\n .build();\n- HttpResponse response = execRequestWithCors(settings, originValue);\n+ HttpResponse response = execRequestWithCors(settings, originValue, \"request-host\");\n // inspect response and validate\n assertThat(response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN), notNullValue());\n String allowedOrigins = response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN);\n assertThat(allowedOrigins, is(originValue));\n assertThat(response.headers().get(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_CREDENTIALS), nullValue());\n }\n \n+ @Test\n public void testHeadersSet() {\n Settings settings = Settings.builder().build();\n httpServerTransport = new NettyHttpServerTransport(settings, networkService, bigArrays);\n@@ -162,12 +198,13 @@ public void testHeadersSet() {\n assertThat(response.headers().get(HttpHeaders.Names.CONTENT_TYPE), equalTo(resp.contentType()));\n }\n \n- private HttpResponse execRequestWithCors(final Settings settings, final String originValue) {\n+ private HttpResponse execRequestWithCors(final Settings settings, final String originValue, final String host) {\n // construct request and send it over the transport layer\n httpServerTransport = new NettyHttpServerTransport(settings, networkService, bigArrays);\n HttpRequest httpRequest = new TestHttpRequest();\n- httpRequest.headers().add(HttpHeaders.Names.ORIGIN, ORIGIN);\n+ httpRequest.headers().add(HttpHeaders.Names.ORIGIN, originValue);\n httpRequest.headers().add(HttpHeaders.Names.USER_AGENT, \"Mozilla fake\");\n+ httpRequest.headers().add(HttpHeaders.Names.HOST, host);\n WriteCapturingChannel writeCapturingChannel = new WriteCapturingChannel();\n NettyHttpRequest request = new NettyHttpRequest(httpRequest, writeCapturingChannel);\n ", "filename": "core/src/test/java/org/elasticsearch/http/netty/NettyHttpChannelTests.java", "status": "modified" } ] }
{ "body": "A somewhat rare but nevertheless problematic scenario can occur if a data folder containing a deleted index is copied over to a node. Imagine the following steps:\n1. We have a one node in the cluster, `A`\n2. `A` creates an index called `idx` with UUID `xyz`.\n3. The data directory for node `A` is copied to some external location.\n4. `idx` is deleted from the cluster.\n5. Start up a new node `B` in the cluster\n6. After `B` has started up, copy the data directory previous put in an external location to the data directory for `B`\n7. Start a new node `C`, just in order to trigger a cluster state update.\n\nAt this point, the previously deleted `idx` with UUID `xyz` is imported into the index as a dangling index, even though a tombstone exists for it in the cluster state. This leads to a very confusing situation where the index exists in the cluster state and at the same time, there is a tombstone for it. Also, when the node restarts, it will check the tombstones, see the index is deleted, so delete it - that means the index re-appeared in the cluster state for some time but will be deleted again upon node restart.\n\nAlso, note that this problem does not occur if the data directory is copied before `B` is started, because in that case on node startup, the tombstones are checked so the index is not permitted to be imported as dangling.\n", "comments": [], "number": 18249, "title": "Dangling Indices should not be imported if a tombstone exists for the same Index UUID" }
{ "body": "Dangling indices are not imported if a tombstone for the same index\n(same name and UUID) exists in the cluster state. This resolves a\nsituation where if an index data folder was copied into a node's data\ndirectory while the node is running and that index had a tombstone in\nthe cluster state, the index would still get imported.\n\nCloses #18249\n", "number": 18250, "review_comments": [ { "body": "can we just make this a unit test in DanglingIndicesStateTests ?\n", "created_at": "2016-05-11T10:15:29Z" }, { "body": "can we add a unit test for this?\n", "created_at": "2016-05-11T10:15:40Z" } ], "title": "Dangling indices are not imported if a tombstone for the index exists" }
{ "commits": [ { "message": "Dangling indices are not imported if a tombstone for the same index\n(same name and UUID) exists in the cluster state. This resolves a\nsituation where if an index data folder was copied into a node's data\ndirectory while the node is running and that index had a tombstone in\nthe cluster state, the index would still get imported.\n\nCloses #18250\nCloses #18249" } ], "files": [ { "diff": "@@ -364,7 +364,6 @@\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]discovery[/\\\\]zen[/\\\\]publish[/\\\\]PublishClusterStateAction.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]env[/\\\\]ESFileStore.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]gateway[/\\\\]AsyncShardFetch.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]gateway[/\\\\]DanglingIndicesState.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]gateway[/\\\\]GatewayAllocator.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]gateway[/\\\\]GatewayMetaState.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]gateway[/\\\\]GatewayService.java\" checks=\"LineLength\" />", "filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml", "status": "modified" }, { "diff": "@@ -123,6 +123,18 @@ public List<Tombstone> getTombstones() {\n return tombstones;\n }\n \n+ /**\n+ * Returns true if the graveyard contains a tombstone for the given index.\n+ */\n+ public boolean containsIndex(final Index index) {\n+ for (Tombstone tombstone : tombstones) {\n+ if (tombstone.getIndex().equals(index)) {\n+ return true;\n+ }\n+ }\n+ return false;\n+ }\n+\n @Override\n public XContentBuilder toXContent(final XContentBuilder builder, final Params params) throws IOException {\n builder.startArray(TOMBSTONES_FIELD.getPreferredName());", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexGraveyard.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.gateway;\n \n import com.carrotsearch.hppc.cursors.ObjectCursor;\n+import org.elasticsearch.cluster.metadata.IndexGraveyard;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.common.component.AbstractComponent;\n@@ -68,7 +69,7 @@ public DanglingIndicesState(Settings settings, NodeEnvironment nodeEnv, MetaStat\n * Process dangling indices based on the provided meta data, handling cleanup, finding\n * new dangling indices, and allocating outstanding ones.\n */\n- public void processDanglingIndices(MetaData metaData) {\n+ public void processDanglingIndices(final MetaData metaData) {\n if (nodeEnv.hasNodeFile() == false) {\n return;\n }\n@@ -107,7 +108,7 @@ void cleanupAllocatedDangledIndices(MetaData metaData) {\n * Finds (@{link #findNewAndAddDanglingIndices}) and adds the new dangling indices\n * to the currently tracked dangling indices.\n */\n- void findNewAndAddDanglingIndices(MetaData metaData) {\n+ void findNewAndAddDanglingIndices(final MetaData metaData) {\n danglingIndices.putAll(findNewDanglingIndices(metaData));\n }\n \n@@ -116,7 +117,7 @@ void 
findNewAndAddDanglingIndices(MetaData metaData) {\n * that have state on disk, but are not part of the provided meta data, or not detected\n * as dangled already.\n */\n- Map<Index, IndexMetaData> findNewDanglingIndices(MetaData metaData) {\n+ Map<Index, IndexMetaData> findNewDanglingIndices(final MetaData metaData) {\n final Set<String> excludeIndexPathIds = new HashSet<>(metaData.indices().size() + danglingIndices.size());\n for (ObjectCursor<IndexMetaData> cursor : metaData.indices().values()) {\n excludeIndexPathIds.add(cursor.value.getIndex().getUUID());\n@@ -125,13 +126,18 @@ Map<Index, IndexMetaData> findNewDanglingIndices(MetaData metaData) {\n try {\n final List<IndexMetaData> indexMetaDataList = metaStateService.loadIndicesStates(excludeIndexPathIds::contains);\n Map<Index, IndexMetaData> newIndices = new HashMap<>(indexMetaDataList.size());\n+ final IndexGraveyard graveyard = metaData.indexGraveyard();\n for (IndexMetaData indexMetaData : indexMetaDataList) {\n if (metaData.hasIndex(indexMetaData.getIndex().getName())) {\n logger.warn(\"[{}] can not be imported as a dangling index, as index with same name already exists in cluster metadata\",\n indexMetaData.getIndex());\n+ } else if (graveyard.containsIndex(indexMetaData.getIndex())) {\n+ logger.warn(\"[{}] can not be imported as a dangling index, as an index with the same name and UUID exist in the \" +\n+ \"index tombstones. This situation is likely caused by copying over the data directory for an index \" +\n+ \"that was previously deleted.\", indexMetaData.getIndex());\n } else {\n- logger.info(\"[{}] dangling index, exists on local file system, but not in cluster metadata, auto import to cluster state\",\n- indexMetaData.getIndex());\n+ logger.info(\"[{}] dangling index exists on local file system, but not in cluster metadata, \" +\n+ \"auto import to cluster state\", indexMetaData.getIndex());\n newIndices.put(indexMetaData.getIndex(), indexMetaData);\n }\n }\n@@ -151,17 +157,19 @@ private void allocateDanglingIndices() {\n return;\n }\n try {\n- allocateDangledIndices.allocateDangled(Collections.unmodifiableCollection(new ArrayList<>(danglingIndices.values())), new LocalAllocateDangledIndices.Listener() {\n- @Override\n- public void onResponse(LocalAllocateDangledIndices.AllocateDangledResponse response) {\n- logger.trace(\"allocated dangled\");\n- }\n+ allocateDangledIndices.allocateDangled(Collections.unmodifiableCollection(new ArrayList<>(danglingIndices.values())),\n+ new LocalAllocateDangledIndices.Listener() {\n+ @Override\n+ public void onResponse(LocalAllocateDangledIndices.AllocateDangledResponse response) {\n+ logger.trace(\"allocated dangled\");\n+ }\n \n- @Override\n- public void onFailure(Throwable e) {\n- logger.info(\"failed to send allocated dangled\", e);\n+ @Override\n+ public void onFailure(Throwable e) {\n+ logger.info(\"failed to send allocated dangled\", e);\n+ }\n }\n- });\n+ );\n } catch (Throwable e) {\n logger.warn(\"failed to send allocate dangled\", e);\n }", "filename": "core/src/main/java/org/elasticsearch/gateway/DanglingIndicesState.java", "status": "modified" }, { "diff": "@@ -527,7 +527,8 @@ public void deleteUnassignedIndex(String reason, IndexMetaData metaData, Cluster\n try {\n if (clusterState.metaData().hasIndex(indexName)) {\n final IndexMetaData index = clusterState.metaData().index(indexName);\n- throw new IllegalStateException(\"Can't delete unassigned index store for [\" + indexName + \"] - it's still part of the cluster state [\" + index.getIndexUUID() + \"] [\" + 
metaData.getIndexUUID() + \"]\");\n+ throw new IllegalStateException(\"Can't delete unassigned index store for [\" + indexName + \"] - it's still part of \" +\n+ \"the cluster state [\" + index.getIndexUUID() + \"] [\" + metaData.getIndexUUID() + \"]\");\n }\n deleteIndexStore(reason, metaData, clusterState);\n } catch (IOException e) {", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java", "status": "modified" }, { "diff": "@@ -130,6 +130,23 @@ public void testDiffs() {\n assertThat(diff.getRemovedCount(), equalTo(removals.size()));\n }\n \n+ public void testContains() {\n+ List<Index> indices = new ArrayList<>();\n+ final int numIndices = randomIntBetween(1, 5);\n+ for (int i = 0; i < numIndices; i++) {\n+ indices.add(new Index(\"idx-\" + i, UUIDs.randomBase64UUID()));\n+ }\n+ final IndexGraveyard.Builder graveyard = IndexGraveyard.builder();\n+ for (final Index index : indices) {\n+ graveyard.addTombstone(index);\n+ }\n+ final IndexGraveyard indexGraveyard = graveyard.build();\n+ for (final Index index : indices) {\n+ assertTrue(indexGraveyard.containsIndex(index));\n+ }\n+ assertFalse(indexGraveyard.containsIndex(new Index(randomAsciiOfLength(6), UUIDs.randomBase64UUID())));\n+ }\n+\n public static IndexGraveyard createRandom() {\n final IndexGraveyard.Builder graveyard = IndexGraveyard.builder();\n final int numTombstones = randomIntBetween(0, 4);", "filename": "core/src/test/java/org/elasticsearch/cluster/metadata/IndexGraveyardTests.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.gateway;\n \n import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.metadata.IndexGraveyard;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.common.settings.Settings;\n@@ -37,6 +38,7 @@\n /**\n */\n public class DanglingIndicesStateTests extends ESTestCase {\n+\n private static Settings indexSettings = Settings.builder()\n .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n@@ -139,4 +141,20 @@ public void testDanglingProcessing() throws Exception {\n assertTrue(danglingState.getDanglingIndices().isEmpty());\n }\n }\n+\n+ public void testDanglingIndicesNotImportedWhenTombstonePresent() throws Exception {\n+ try (NodeEnvironment env = newNodeEnvironment()) {\n+ MetaStateService metaStateService = new MetaStateService(Settings.EMPTY, env);\n+ DanglingIndicesState danglingState = new DanglingIndicesState(Settings.EMPTY, env, metaStateService, null);\n+\n+ final Settings.Builder settings = Settings.builder().put(indexSettings).put(IndexMetaData.SETTING_INDEX_UUID, \"test1UUID\");\n+ IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(settings).build();\n+ metaStateService.writeIndex(\"test_write\", dangledIndex);\n+\n+ final IndexGraveyard graveyard = IndexGraveyard.builder().addTombstone(dangledIndex.getIndex()).build();\n+ final MetaData metaData = MetaData.builder().indexGraveyard(graveyard).build();\n+ assertThat(danglingState.findNewDanglingIndices(metaData).size(), equalTo(0));\n+\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/gateway/DanglingIndicesStateTests.java", "status": "modified" } ] }
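To make the dangling-index decision above easier to follow, here is a small, self-contained sketch of the rule the diff implements: an index found on disk is only imported when its name is not already in the cluster metadata and no tombstone exists for the same name and UUID. Everything in it is a hypothetical stand-in (the `DiskIndex` record and the plain `Set`s replace `MetaData`, `IndexGraveyard`, and `DanglingIndicesState`), and it uses Java 16+ records purely for brevity; it illustrates the logic only and is not Elasticsearch code.

```java
import java.util.*;

public class DanglingImportSketch {

    // Hypothetical stand-in for an on-disk index identified by name and UUID.
    record DiskIndex(String name, String uuid) {}

    // Mirrors the filtering logic added to findNewDanglingIndices above.
    static Map<String, DiskIndex> findImportable(List<DiskIndex> onDisk,
                                                 Set<String> namesInClusterState,
                                                 Set<DiskIndex> tombstones) {
        Map<String, DiskIndex> importable = new HashMap<>();
        for (DiskIndex idx : onDisk) {
            if (namesInClusterState.contains(idx.name())) {
                System.out.println(idx + " skipped: same name already exists in cluster metadata");
            } else if (tombstones.contains(idx)) {
                System.out.println(idx + " skipped: a tombstone exists for the same name and UUID");
            } else {
                importable.put(idx.uuid(), idx);
            }
        }
        return importable;
    }

    public static void main(String[] args) {
        DiskIndex deleted = new DiskIndex("idx", "xyz");    // copied back after deletion
        DiskIndex genuine = new DiskIndex("orphan", "abc"); // a real dangling index
        Map<String, DiskIndex> result = findImportable(
            List.of(deleted, genuine),
            Set.of(),            // neither index is in the current cluster state
            Set.of(deleted));    // but "idx"/"xyz" has a tombstone
        System.out.println("imported: " + result.keySet()); // prints [abc]
    }
}
```

The key design point is that the tombstone check uses both name and UUID, so a re-created index with the same name but a new UUID is still eligible for import.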
{ "body": "**Elasticsearch version**: 2.2.0\n\n**JVM version**: OpenJDK Runtime Environment (IcedTea 2.6.4) (7u95-2.6.4-0ubuntu0.14.04.1)\nOpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)\n\n**OS version**: Ubuntu 14.04.1\n\n**Description of the problem including expected versus actual behavior**:\nWe had an existing API backed by Solr, which we later switched to an Elasticsearch backend. The search endpoint is called repeatedly by our clients with each character typed into a search box (i.e. queries look like `q=t`, `q=te`, `q=tes`, `q=test`, etc. We know about CompletionSuggester, we just haven't figured out how to work into our solution while maintaining backward compatibility with our existing clients).\n\nUntil recently we have been using `query_string` for simple keyword searches, appending a '*' to each term to give us a prefix search, and using `analyze_wildcard` (in spite of the performance implications) to still give us reasonable stemming and so on. Now we've added a new client to the portfolio with a larger and somewhat different user base, and the queries we've been seeing have been less forgiving on our simple query sanitisation, so we want to switch to `simple_query_string`. For the most part this is working well, but we've been seeing NullPointerExceptions when the query string consists purely of stop words.\n\n**Steps to reproduce**:\n #!/bin/bash\n\n```\nexport ELASTICSEARCH_ENDPOINT=\"http://localhost:9200\"\n\n# Create indexes\n\ncurl -XPUT \"$ELASTICSEARCH_ENDPOINT/play\" -d '{\n \"mappings\": {\n \"thing\": {\n \"properties\": {\n \"prop\": {\n \"type\": \"string\",\n \"analyzer\": \"stop\"\n }\n }\n }\n }\n}'\n\n# Index documents\ncurl -XPOST \"$ELASTICSEARCH_ENDPOINT/_bulk?refresh=true\" -d '\n{\"index\":{\"_index\":\"play\",\"_type\":\"thing\"}}\n{\"prop\":\"Some text\"}\n'\n\n# Do searches\n\ncurl -XPOST \"$ELASTICSEARCH_ENDPOINT/_search?pretty\" -d '\n{\n \"query\": {\n \"simple_query_string\": {\n \"query\": \"the*\",\n \"fields\": [\n \"prop\"\n ],\n \"analyze_wildcard\": true\n }\n }\n}\n'\n```\n\n**Provide logs (if relevant)**:\n\n```\n[2016-05-08 23:11:16,115][DEBUG][action.search.type ] [Aireo] [play][1], node[-bdul1V7QgKIWXKi-HTkgQ], [P], v[2], s[STARTED], a[id=_0myvoG0Q2yfqC6jIFCsgg]: Failed to execute [org.elasticsearch.action.search.SearchRequest@458bdcb7] lastShard [true]\nRemoteTransportException[[Aireo][127.0.0.1:9300][indices:data/read/search[phase/query]]]; nested: SearchParseException[failed to parse search source [\n{\n \"query\": {\n \"simple_query_string\": {\n \"query\": \"the*\",\n \"fields\": [\n \"prop\"\n ],\n \"analyze_wildcard\": true\n }\n }\n}\n]]; nested: NullPointerException;\nCaused by: SearchParseException[failed to parse search source [\n{\n \"query\": {\n \"simple_query_string\": {\n \"query\": \"the*\",\n \"fields\": [\n \"prop\"\n ],\n \"analyze_wildcard\": true\n }\n }\n}\n]]; nested: NullPointerException;\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:853)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:652)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:618)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:369)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)\n at 
org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.index.query.SimpleQueryParser.newPrefixQuery(SimpleQueryParser.java:131)\n at org.apache.lucene.queryparser.simple.SimpleQueryParser.consumeToken(SimpleQueryParser.java:406)\n at org.apache.lucene.queryparser.simple.SimpleQueryParser.parseSubQuery(SimpleQueryParser.java:212)\n at org.apache.lucene.queryparser.simple.SimpleQueryParser.parse(SimpleQueryParser.java:152)\n at org.elasticsearch.index.query.SimpleQueryStringParser.parse(SimpleQueryStringParser.java:212)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:256)\n at org.elasticsearch.index.query.IndexQueryParserService.innerParse(IndexQueryParserService.java:303)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:206)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:201)\n at org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:836)\n ... 10 more\n\n```\n", "comments": [ { "body": "@dakrone could you look at this one please?\n", "created_at": "2016-05-09T07:45:38Z" }, { "body": "Okay, this is because the query after being analyzed becomes `null`, which it then tries to set a boost on and causes an NPE. I'll work on a fix for this, thanks @jamestait!\n", "created_at": "2016-05-09T22:48:19Z" }, { "body": "@jamestait pushed a fix for this that will be released in 2.3.3 and 2.4.0 (master already had a check for this)\n", "created_at": "2016-05-10T14:34:04Z" } ], "number": 18202, "title": "NullPointerException with StopFilter and simple_query_string with analyze_wildcard if the query contains only stop word prefixes" }
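The failure mode described in this issue can be imitated outside Elasticsearch and Lucene: if every token of the wildcard text is removed by the analysis chain (here, a stop filter), there is nothing left to build a prefix query from, and any code that assumes a non-null query fails exactly as in the stack trace. The sketch below is a JDK-only, hypothetical imitation; `analyzePrefix` and its stop-word list are made up and only mimic what a `stop` analyzer does to a query such as `the*`.

```java
import java.util.*;
import java.util.stream.*;

public class StopWordPrefixSketch {

    private static final Set<String> STOP_WORDS = Set.of("the", "a", "an", "of");

    /**
     * Hypothetical stand-in for "analyze the wildcard text": lower-case,
     * drop stop words, and build one prefix clause per surviving token.
     * Returns null when analysis leaves no tokens, which is exactly the
     * case that triggered the NullPointerException.
     */
    static String analyzePrefix(String text) {
        List<String> tokens = Arrays.stream(text.toLowerCase(Locale.ROOT).split("\\s+"))
            .filter(t -> !t.isEmpty() && !STOP_WORDS.contains(t))
            .collect(Collectors.toList());
        if (tokens.isEmpty()) {
            return null; // nothing survived the stop filter
        }
        return tokens.stream().map(t -> t + "*").collect(Collectors.joining(" OR "));
    }

    public static void main(String[] args) {
        System.out.println(analyzePrefix("quick brown")); // quick* OR brown*
        System.out.println(analyzePrefix("the"));         // null -> must be guarded before use
    }
}
```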
{ "body": "Resolves #18202\n\nThis is already fixed on Master/5.x, so this is just for the 2.x branches. I will forward-port the test though, always good to have more tests.\n", "number": 18243, "review_comments": [], "title": "Fix NullPointerException in SimpleQueryParser when analyzing text produces a null query" }
{ "commits": [ { "message": "Fix NullPointerException when analyzing text produces a null query\n\nResolves #18202" } ], "files": [ { "diff": "@@ -128,8 +128,10 @@ public Query newPrefixQuery(String text) {\n try {\n if (settings.analyzeWildcard()) {\n Query analyzedQuery = newPossiblyAnalyzedQuery(entry.getKey(), text);\n- analyzedQuery.setBoost(entry.getValue());\n- bq.add(analyzedQuery, BooleanClause.Occur.SHOULD);\n+ if (analyzedQuery != null) {\n+ analyzedQuery.setBoost(entry.getValue());\n+ bq.add(analyzedQuery, BooleanClause.Occur.SHOULD);\n+ }\n } else {\n PrefixQuery prefix = new PrefixQuery(new Term(entry.getKey(), text));\n prefix.setBoost(entry.getValue());", "filename": "core/src/main/java/org/elasticsearch/index/query/SimpleQueryParser.java", "status": "modified" }, { "diff": "@@ -346,4 +346,31 @@ public void testSimpleQueryStringAnalyzeWildcard() throws ExecutionException, In\n assertSearchHits(searchResponse, \"1\");\n }\n \n+ @Test\n+ public void testEmptySimpleQueryStringWithAnalysis() throws Exception {\n+ // https://github.com/elastic/elasticsearch/issues/18202\n+ String mapping = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type1\")\n+ .startObject(\"properties\")\n+ .startObject(\"body\")\n+ .field(\"type\", \"string\")\n+ .field(\"analyzer\", \"stop\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject().string();\n+\n+ CreateIndexRequestBuilder mappingRequest = client().admin().indices()\n+ .prepareCreate(\"test1\")\n+ .addMapping(\"type1\", mapping);\n+ mappingRequest.execute().actionGet();\n+ indexRandom(true, client().prepareIndex(\"test1\", \"type1\", \"1\").setSource(\"body\", \"Some Text\"));\n+ refresh();\n+\n+ SearchResponse searchResponse = client().prepareSearch()\n+ .setQuery(simpleQueryStringQuery(\"the*\").analyzeWildcard(true).field(\"body\")).get();\n+ assertNoFailures(searchResponse);\n+ assertHitCount(searchResponse, 0l);\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/query/SimpleQueryStringIT.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: alpha2\n\n**JVM version**: build 1.8.0_74-b02\n\n**OS version**: OS X El Capitan 10.11.3 \n\n**Description of the problem including expected versus actual behavior**:\n\n**Steps to reproduce**:\n1. Install and run elasticsearch-alpha2, topbeat-alpha2, and kibana-alpha2 (Topbeat is only monitoring the node process on a 20 second interval.)\n2. I am using Kibana to monitor a node process. Here is the query Kibana is using to generate the visualization:\n \n ``` javascript\n {\n \"size\":0,\n \"aggs\":{\n \"2\":{\n \"date_histogram\":{\n \"field\":\"@timestamp\",\n \"interval\":\"30s\",\n \"time_zone\":\"America/Los_Angeles\",\n \"min_doc_count\":1\n },\n \"aggs\":{\n \"1\":{\n \"max\":{\n \"field\":\"proc.mem.rss\"\n }\n }\n }\n }\n },\n \"highlight\":{\n \"pre_tags\":[\n \"@kibana-highlighted-field@\"\n ],\n \"post_tags\":[\n \"@/kibana-highlighted-field@\"\n ],\n \"fields\":{\n \"*\":{\n \n }\n },\n \"require_field_match\":false,\n \"fragment_size\":2147483647\n },\n \"query\":{\n \"bool\":{\n \"must\":[\n {\n \"query_string\":{\n \"query\":\"*\",\n \"analyze_wildcard\":true\n }\n },\n {\n \"match\":{\n \"proc.cmdline\":{\n \"query\":\"/Users/tyler/.nvm/versions/node/v4.4.3/bin/node ./bin/../src/cli\",\n \"type\":\"phrase\"\n }\n }\n },\n {\n \"range\":{\n \"@timestamp\":{\n \"gte\":1462389453519,\n \"lte\":1462390353519,\n \"format\":\"epoch_millis\"\n }\n }\n }\n ],\n \"must_not\":[\n \n ]\n }\n }\n }\n ```\n3. Run for about 30 minutes.\n\nI have 1027 documents, and the total size is 1.6MB.\n\n**Provide logs (if relevant)**:\n\n``` bash\nCircuitBreakingException[[parent] Data too large, data for [<http_request>] would be larger than limit of [726571417/692.9mb]]\n at org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService.checkParentLimit(HierarchyCircuitBreakerService.java:211)\n at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.addEstimateBytesAndMaybeBreak(ChildMemoryCircuitBreaker.java:128)\n at org.elasticsearch.http.HttpServer.dispatchRequest(HttpServer.java:109)\n at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:489)\n at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:65)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:85)\n at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at 
org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)\n at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)\n at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)\n at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:83)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)\n at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)\n at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)\n at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)\n at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)\n at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)\n at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)\n at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n```\n\nEventually, all requests to ES will fail with this exception including _stats.\n", "comments": [ { "body": "\"fun\"\n\n> data for [<http_request>] would be larger than limit of [726571417/692.9mb]\n\n@danielmitterdorfer this might be you though it is hard to tell.\n\n@tylersmalley is there any chance you can take a thread dump when this happens? Maybe just the hot_threads API (though it might not work because of the breaker)?\n", "created_at": "2016-05-04T19:54:45Z" }, { "body": "@nik9000, I will get that once it returns to a failed state again.\n", "created_at": "2016-05-04T20:00:59Z" }, { "body": "> @nik9000, I will get that once it returns to a failed state again.\n\nThanks!\n", "created_at": "2016-05-04T20:03:16Z" }, { "body": "The _msearch requests will begin failing before the entire ES cluster. 
Nothing ever appeared in `/_nodes/hot_threads` and eventually it would fail with the same exception:\n\n``` json\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"circuit_breaking_exception\",\n \"reason\" : \"[parent] Data too large, data for [<http_request>] would be larger than limit of [726571417/692.9mb]\",\n \"bytes_wanted\" : 726582240,\n \"bytes_limit\" : 726571417\n } ],\n \"type\" : \"circuit_breaking_exception\",\n \"reason\" : \"[parent] Data too large, data for [<http_request>] would be larger than limit of [726571417/692.9mb]\",\n \"bytes_wanted\" : 726582240,\n \"bytes_limit\" : 726571417\n },\n \"status\" : 503\n}\n```\n\nHere is the thread dump: https://gist.githubusercontent.com/tylersmalley/00105a27a0dd7b86016d78dc65e1bfb1/raw/jstack_7647_2.log\n\nI will keep the cluster in a failed state should you need any additional information from it.\n", "created_at": "2016-05-04T20:56:15Z" }, { "body": "> Here is the thread dump\n\nIt says \"I'm not doing anything\". Any chance you can get a [task list](https://www.elastic.co/guide/en/elasticsearch/reference/current/tasks.html)? `curl localhost:9200/_tasks` should do it. That is another (new and shiny!) way for me to figure out what is going on.\n\nThe breaker you are hitting is trying to prevent requests from overwhelming memory. If you had in flight requests I should have seen them doing something in the thread dump. Lots of stuff in Elasticsearch is async so I wouldn't see everything but I expected something. The task list goes the other way - it registers something whenever a request starts and removes it when it stops. If we see something in the task list, especially if it is a lot of somethings, then we have our smoking gun. If we see nothing, well, we go look other places.\n\nThe next place might be getting a heap dump. But I'm not going to put you through that. I should be able to reproduce this on my side.\n\nI believe @danielmitterdorfer, who I pinged, will not be around tomorrow so I might just keep this issue.\n", "created_at": "2016-05-04T21:19:20Z" }, { "body": "> Any chance you can get a task list?\n\n``` bash\ncurl 'http://localhost:9200/_tasks?pretty=true'\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"circuit_breaking_exception\",\n \"reason\" : \"[parent] Data too large, data for [<http_request>] would be larger than limit of [726571417/692.9mb]\",\n \"bytes_wanted\" : 726582240,\n \"bytes_limit\" : 726571417\n } ],\n \"type\" : \"circuit_breaking_exception\",\n \"reason\" : \"[parent] Data too large, data for [<http_request>] would be larger than limit of [726571417/692.9mb]\",\n \"bytes_wanted\" : 726582240,\n \"bytes_limit\" : 726571417\n },\n \"status\" : 503\n}\n```\n\nI will restart the cluster and monitor the `_tasks` endpoint until it begins throwing exceptions and report back.\n\nHere is a heap dump in its current failed state: https://gist.github.com/tylersmalley/00105a27a0dd7b86016d78dc65e1bfb1/raw/jmap_7647.bin\n", "created_at": "2016-05-04T21:31:53Z" }, { "body": "@nik9000 I tried to reproduce the scenario locally by running topbeat and running the query above periodically but so far the circuit breaker did not trip. I am not surprised that the thread dump does not reveal much because the circuit breaker essentially prevents further work from coming into the system. Based on an analysis of the heap dump I guess that the system is not really busy but the bytes are not freed properly and add up over time. 
I had a closer look at how the bytes are freed in `HttpServer.ResourceHandlingHttpChannel`:\n\n``` java\ninFlightRequestsBreaker(circuitBreakerService).addWithoutBreaking(-request().content().length());\n```\n\nConsidering that the content is represented by a `ChannelBufferBytesReference` which returns the readable bytes in the channel buffer we could end up in a situation where we reserve more bytes than we free (as the readable bytes could change over time). But this is only a theory and I was not able to observe this behavior. If this were the case, the fix is to simply provide the number of reserved bytes to `HttpServer.ResourceHandlingHttpChannel` but I am somewhat reluctant to do a fix without being able to reproduce it.\n", "created_at": "2016-05-05T12:09:59Z" }, { "body": "I have also installed Kibana 5.0.0-alpha2, imported the dashboard from topbeat, opened it and set it to auto-refresh every 5 seconds. I could just see that the request breaker (which is used by `BigArrays`) is slowly increasing (a few MB after a few minutes). The inflight requests breaker always resets to zero. I think it just happens to be the victim as it's trying very early during request processing to reserve an amount of bytes (we can also see from the stack trace that actually the parent circuit breaker is tripping, not the inflight requests breaker).\n\nSo I followed the respective `close()` calls that are supposed to free the reserved bytes but there are a lot of places to follow. @dakrone: Is it intended that the number of reserved bytes in the request circuit breaker grow over time as `BigArrays` seems to be intended as some kind of pool or should the reserved number of bytes in `BigArrays` be zero after a request has been processed?\n", "created_at": "2016-05-05T13:29:15Z" }, { "body": "@danielmitterdorfer in my testing the request circuit breaker (backing BigArrays) has always reset to 0 if there are no requests\n\nYou should be able to turn on TRACE logging for the `org.elasticsearch.indices.breaker` package and see _all_ increments and decrements to the breakers (note this is very verbose)\n", "created_at": "2016-05-05T14:28:30Z" }, { "body": "@dakrone Thanks for the hint. I'll check that.\n", "created_at": "2016-05-05T14:38:44Z" }, { "body": "@danielmitterdorfer I believe to have found what was causing this on my end, but unsure if it should have triggered the the CircuitBreaker. While doing other testing I still had a script running which hit the `cluster/_health` endpoint, paused for 100ms, then repeated. I am fairly certain this was not an issue in 2.3, but I can double check.\n", "created_at": "2016-05-05T16:08:42Z" }, { "body": "@tylersmalley Even that should not trip any circuit breaker so we definitely need to investigate. If you can shed more light on how we can reproduce it, that's great.\n", "created_at": "2016-05-05T16:12:27Z" }, { "body": "I was able to reproduce this also on 5.0.0-alpha2 with x-pack installed and kibana hitting the node. Just like @danielmitterdorfer said, the request breaker is increasing very slowly, it looks like there is a leak.\n\nI also tried setting `network.breaker.inflight_requests.overhead: 0.0` and it looks like it is not being taken into account in at least one place (still having bytes added over time instead of all in_flight_request additions being 0)\n", "created_at": "2016-05-05T17:59:42Z" }, { "body": "@danielmitterdorfer here is the node script I have to preform the health requests on ES. 
In it I added a second check to run in parallel for speed up the fault.\n\nhttps://gist.githubusercontent.com/tylersmalley/00105a27a0dd7b86016d78dc65e1bfb1/raw/test.js\n", "created_at": "2016-05-05T20:40:41Z" }, { "body": "This reproduces pretty easily now, building from master (or 5.0.0-alpha2), simple turn on logging and then run Kibana, the periodic health check that kibana does causing it to increase over time.\n", "created_at": "2016-05-05T20:55:56Z" }, { "body": "@dakrone I can reproduce the increase now too but the problem is _not_ the `in_flights_request` breaker but the `request` breaker that keeps increasing. Nevertheless, I'll investigate what's going on.\n", "created_at": "2016-05-09T04:54:03Z" }, { "body": "I have investigated and now have a minimal reproduction scenario: `curl -XHEAD http://localhost:9200/`\n\nThe problem is that a `XContentBuilder` is created which is backed by a `BigArrays` instance but then we use a constructor of `BytesRestResponse` without a builder. After that we lose track of the `BigArrays` instance and don't free it. This affects at least `RestMainAction` and probably other actions too. I am now working on a fix.\n", "created_at": "2016-05-09T08:24:55Z" }, { "body": "I have also checked 2.x. It is not affected as the code is structured differently there.\n", "created_at": "2016-05-09T09:28:29Z" }, { "body": "@tylersmalley The problem is fixed now and the fix will be included in the next release of the 5.0 series. Thanks for reporting and helping on the reproduction. Much appreciated!\n", "created_at": "2016-05-09T13:13:37Z" }, { "body": "Great, thanks @danielmitterdorfer! \n", "created_at": "2016-05-09T18:03:13Z" }, { "body": "@dakrone I also checked why this happens:\n\n> I also tried setting network.breaker.inflight_requests.overhead: 0.0 and it looks like it is not being taken into account in at least one place (still having bytes added over time instead of all in_flight_request additions being 0)\n\nIt's caused by the implementation of [`ChildMemoryCircuitBreaker#limit()](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/common/breaker/ChildMemoryCircuitBreaker.java#L149-L176). As far as I can see the overhead is only taken into account for logging statements but never for actual limiting. To me this does not sound that it's intended that way.\n", "created_at": "2016-05-10T12:26:23Z" }, { "body": "@danielmitterdorfer the overhead is taken into account also when comparing against the limit:\n\n``` java\nif (memoryBytesLimit > 0 && newUsedWithOverhead > memoryBytesLimit) {\n ....\n}\n```\n\nI remember it correctly now (I was misinterpreting what a feature I added did, doh!), the overhead is only for tweaking the estimation of an addition, not to factor into the total at all. This is because the fielddata circuit breaker estimates the amount of memory used but ultimately adjusts with the exact value used, so it should not add the overhead-modified usage, but the actual usage. Only the overhead is used for the _per-addition_ check.\n\nHopefully that clarifies, I was slightly confusing myself there too assuming it was taken into account with the added total amount for the breaker, but the current behavior is correct.\n", "created_at": "2016-05-10T15:19:34Z" }, { "body": "@dakrone Ah, right. I missed this line... . Thanks for the explanation. Maybe we should add a comment in the code so the next time it comes up we don't have to dig to find this in the ticket again. 
:) With that explanation I am not sure whether any circuit breaker except the field data circuit breaker should have a user-defined overhead at all. Wdyt?\n", "created_at": "2016-05-11T07:19:13Z" } ], "number": 18144, "title": "CircuitBreakingException on extremely small dataset" }
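The discussion above comes down to reserve/release bookkeeping: every byte added to a breaker has to be subtracted again when the request finishes, and a response path that skips its release makes the breaker's "used" figure creep upward until the parent limit trips even on an idle node. The toy model below is a hypothetical illustration of that accounting, not the real `CircuitBreaker` API; the names and the 1,000-byte limit are made up.

```java
import java.util.concurrent.atomic.AtomicLong;

public class BreakerLeakSketch {

    static class ToyBreaker {
        private final long limitBytes;
        private final AtomicLong used = new AtomicLong();

        ToyBreaker(long limitBytes) { this.limitBytes = limitBytes; }

        void reserve(long bytes) {
            long newUsed = used.addAndGet(bytes);
            if (newUsed > limitBytes) {
                used.addAndGet(-bytes); // roll back the failed reservation
                throw new IllegalStateException("Data too large: " + newUsed + " > " + limitBytes);
            }
        }

        void release(long bytes) { used.addAndGet(-bytes); }

        long used() { return used.get(); }
    }

    public static void main(String[] args) {
        ToyBreaker breaker = new ToyBreaker(1_000);

        // Well-behaved request: reserve on receipt, release when the response is sent.
        breaker.reserve(100);
        breaker.release(100);

        // Buggy path: reservations that are never paired with a release
        // (analogous to building a response without the builder that owned the bytes).
        for (int i = 0; i < 9; i++) {
            breaker.reserve(100);
        }
        System.out.println("leaked bytes still accounted as used: " + breaker.used()); // 900

        // The next request trips the breaker even though the node is idle.
        try {
            breaker.reserve(200);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The model also shows why the symptom looked so confusing in the issue: the work that leaked the bytes is long finished, so thread dumps and task lists show an idle node while the accounting still reports near-limit usage.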
{ "body": "With this commit we free all bytes reserved on the request circuit breaker.\n\nCloses #18144\n", "number": 18204, "review_comments": [], "title": "Free bytes reserved on request breaker" }
{ "commits": [ { "message": "Free bytes reserved on request breaker\n\nWith this commit we free all bytes reserved on the request\ncircuit breaker.\n\nCloses #18144" } ], "files": [ { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.breaker.CircuitBreaker;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.inject.Inject;\n@@ -125,10 +126,10 @@ void handleFavicon(RestRequest request, RestChannel channel) {\n channel.sendResponse(restResponse);\n }\n } catch (IOException e) {\n- channel.sendResponse(new BytesRestResponse(INTERNAL_SERVER_ERROR));\n+ channel.sendResponse(new BytesRestResponse(INTERNAL_SERVER_ERROR, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY));\n }\n } else {\n- channel.sendResponse(new BytesRestResponse(FORBIDDEN));\n+ channel.sendResponse(new BytesRestResponse(FORBIDDEN, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY));\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/http/HttpServer.java", "status": "modified" }, { "diff": "@@ -40,10 +40,6 @@ public class BytesRestResponse extends RestResponse {\n private final BytesReference content;\n private final String contentType;\n \n- public BytesRestResponse(RestStatus status) {\n- this(status, TEXT_CONTENT_TYPE, BytesArray.EMPTY);\n- }\n-\n /**\n * Creates a new response based on {@link XContentBuilder}.\n */", "filename": "core/src/main/java/org/elasticsearch/rest/BytesRestResponse.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.rest;\n \n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.path.PathTrie;\n@@ -215,9 +216,10 @@ void executeHandler(RestRequest request, RestChannel channel) throws Exception {\n } else {\n if (request.method() == RestRequest.Method.OPTIONS) {\n // when we have OPTIONS request, simply send OK by default (with the Access Control Origin header which gets automatically added)\n- channel.sendResponse(new BytesRestResponse(OK));\n+ channel.sendResponse(new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY));\n } else {\n- channel.sendResponse(new BytesRestResponse(BAD_REQUEST, \"No handler found for uri [\" + request.uri() + \"] and method [\" + request.method() + \"]\"));\n+ final String msg = \"No handler found for uri [\" + request.uri() + \"] and method [\" + request.method() + \"]\";\n+ channel.sendResponse(new BytesRestResponse(BAD_REQUEST, msg));\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/rest/RestController.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.action.admin.cluster.allocation.ClusterAllocationExplainResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.client.Requests;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -67,7 +68,8 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n req = ClusterAllocationExplainRequest.parse(parser);\n } catch (IOException e) {\n logger.debug(\"failed to parse allocation 
explain request\", e);\n- channel.sendResponse(new BytesRestResponse(ExceptionsHelper.status(e)));\n+ channel.sendResponse(\n+ new BytesRestResponse(ExceptionsHelper.status(e), BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY));\n return;\n }\n }\n@@ -83,7 +85,7 @@ public RestResponse buildResponse(ClusterAllocationExplainResponse response, XCo\n });\n } catch (Exception e) {\n logger.error(\"failed to explain allocation\", e);\n- channel.sendResponse(new BytesRestResponse(ExceptionsHelper.status(e)));\n+ channel.sendResponse(new BytesRestResponse(ExceptionsHelper.status(e), BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY));\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/allocation/RestClusterAllocationExplainAction.java", "status": "modified" }, { "diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.rest.BaseRestHandler;\n@@ -65,9 +66,9 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n public void onResponse(AliasesExistResponse response) {\n try {\n if (response.isExists()) {\n- channel.sendResponse(new BytesRestResponse(OK));\n+ channel.sendResponse(new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY));\n } else {\n- channel.sendResponse(new BytesRestResponse(NOT_FOUND));\n+ channel.sendResponse(new BytesRestResponse(NOT_FOUND, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY));\n }\n } catch (Throwable e) {\n onFailure(e);\n@@ -77,7 +78,8 @@ public void onResponse(AliasesExistResponse response) {\n @Override\n public void onFailure(Throwable e) {\n try {\n- channel.sendResponse(new BytesRestResponse(ExceptionsHelper.status(e)));\n+ channel.sendResponse(\n+ new BytesRestResponse(ExceptionsHelper.status(e), BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY));\n } catch (Exception e1) {\n logger.error(\"Failed to send failure response\", e1);\n }", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/indices/alias/head/RestAliasesExistAction.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.rest.BaseRestHandler;\n@@ -58,9 +59,9 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n @Override\n public RestResponse buildResponse(IndicesExistsResponse response) {\n if (response.isExists()) {\n- return new BytesRestResponse(OK);\n+ return new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY);\n } else {\n- return new BytesRestResponse(NOT_FOUND);\n+ return new BytesRestResponse(NOT_FOUND, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY);\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/indices/exists/indices/RestIndicesExistsAction.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.Strings;\n+import 
org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.rest.BaseRestHandler;\n@@ -59,9 +60,9 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n @Override\n public RestResponse buildResponse(TypesExistsResponse response) throws Exception {\n if (response.isExists()) {\n- return new BytesRestResponse(OK);\n+ return new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY);\n } else {\n- return new BytesRestResponse(NOT_FOUND);\n+ return new BytesRestResponse(NOT_FOUND, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY);\n }\n }\n });", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/indices/exists/types/RestTypesExistsAction.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n import org.elasticsearch.action.admin.indices.template.get.GetIndexTemplatesRequest;\n import org.elasticsearch.action.admin.indices.template.get.GetIndexTemplatesResponse;\n import org.elasticsearch.client.Client;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.rest.BaseRestHandler;\n@@ -57,9 +58,9 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n public RestResponse buildResponse(GetIndexTemplatesResponse getIndexTemplatesResponse) {\n boolean templateExists = getIndexTemplatesResponse.getIndexTemplates().size() > 0;\n if (templateExists) {\n- return new BytesRestResponse(OK);\n+ return new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY);\n } else {\n- return new BytesRestResponse(NOT_FOUND);\n+ return new BytesRestResponse(NOT_FOUND, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY);\n }\n }\n });", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/indices/template/head/RestHeadIndexTemplateAction.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.action.get.GetResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.rest.BaseRestHandler;\n@@ -66,9 +67,9 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n @Override\n public RestResponse buildResponse(GetResponse response) {\n if (!response.isExists()) {\n- return new BytesRestResponse(NOT_FOUND);\n+ return new BytesRestResponse(NOT_FOUND, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY);\n } else {\n- return new BytesRestResponse(OK);\n+ return new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY);\n }\n }\n });", "filename": "core/src/main/java/org/elasticsearch/rest/action/get/RestHeadAction.java", "status": "modified" }, { "diff": "@@ -25,7 +25,6 @@\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.rest.BaseRestHandler;\n import org.elasticsearch.rest.BytesRestResponse;\n@@ -66,7 +65,7 @@ public RestResponse buildResponse(MainResponse mainResponse, XContentBuilder bui\n static BytesRestResponse 
convertMainResponse(MainResponse response, RestRequest request, XContentBuilder builder) throws IOException {\n RestStatus status = response.isAvailable() ? RestStatus.OK : RestStatus.SERVICE_UNAVAILABLE;\n if (request.method() == RestRequest.Method.HEAD) {\n- return new BytesRestResponse(status);\n+ return new BytesRestResponse(status, builder);\n }\n \n // Default to pretty printing, but allow ?pretty=false to disable", "filename": "core/src/main/java/org/elasticsearch/rest/action/main/RestMainAction.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.breaker.CircuitBreaker;\n import org.elasticsearch.common.bytes.ByteBufferBytesReference;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.settings.ClusterSettings;\n@@ -66,7 +67,8 @@ public void setup() {\n HttpServerTransport httpServerTransport = new TestHttpServerTransport();\n RestController restController = new RestController(settings);\n restController.registerHandler(RestRequest.Method.GET, \"/\",\n- (request, channel) -> channel.sendResponse(new BytesRestResponse(RestStatus.OK)));\n+ (request, channel) -> channel.sendResponse(\n+ new BytesRestResponse(RestStatus.OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)));\n restController.registerHandler(RestRequest.Method.GET, \"/error\", (request, channel) -> {\n throw new IllegalArgumentException(\"test error\");\n });", "filename": "core/src/test/java/org/elasticsearch/http/HttpServerTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,52 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.rest.action.main;\n+\n+import org.elasticsearch.common.network.NetworkModule;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.test.ESIntegTestCase;\n+import org.elasticsearch.test.rest.client.http.HttpResponse;\n+\n+import java.io.IOException;\n+\n+import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.nullValue;\n+\n+public class RestMainActionIT extends ESIntegTestCase {\n+ @Override\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ return Settings.builder()\n+ .put(super.nodeSettings(nodeOrdinal))\n+ .put(NetworkModule.HTTP_ENABLED.getKey(), true)\n+ .build();\n+ }\n+\n+ public void testHeadRequest() throws IOException {\n+ final HttpResponse response = httpClient().method(\"HEAD\").path(\"/\").execute();\n+ assertThat(response.getStatusCode(), equalTo(200));\n+ assertThat(response.getBody(), nullValue());\n+ }\n+\n+ public void testGetRequest() throws IOException {\n+ final HttpResponse response = httpClient().path(\"/\").execute();\n+ assertThat(response.getStatusCode(), equalTo(200));\n+ assertThat(response.getBody(), containsString(\"cluster_name\"));\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/rest/action/main/RestMainActionIT.java", "status": "added" } ] }
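A practical way to watch for this kind of leak from the outside is to poll the breaker statistics exposed by the nodes stats API and check that the request breaker returns to zero when the node is idle. The snippet below is a minimal JDK-only poller; it assumes a node on `localhost:9200` and simply prints the raw JSON rather than parsing it.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class BreakerWatcher {
    public static void main(String[] args) throws IOException, InterruptedException {
        for (int i = 0; i < 5; i++) {
            // Breaker metrics for all nodes; a steadily growing "request" or
            // "in_flight_requests" estimate on an idle node indicates a leak.
            HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:9200/_nodes/stats/breaker?pretty").openConnection();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                reader.lines().forEach(System.out::println);
            }
            Thread.sleep(5_000); // sample every five seconds
        }
    }
}
```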
{ "body": "**Elasticsearch version**: 5.0.0-alpha2\n**Description of the problem including expected versus actual behavior**: NPE is thrown when calling non existing ingest pipeline. I would expect a better message like \"Pipeline [{}] does not exist\".\n\n**Steps to reproduce**:\n\n``` sh\ncurl -XDELETE \"localhost:9200/_ingest/pipeline/doesnotexist?pretty\"\ncurl -XPOST \"localhost:9200/_ingest/pipeline/doesnotexist/_simulate?pretty\" -d '{\n \"docs\": [ {\n \"_index\": \"index\",\n \"_type\": \"type\",\n \"_id\": \"id\",\n \"_source\": {\n \"foo\" : \"baz\"\n }\n } ]\n}'\n```\n\nGives back:\n\n``` json\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"null_pointer_exception\",\n \"reason\" : null\n } ],\n \"type\" : \"null_pointer_exception\",\n \"reason\" : null\n },\n \"status\" : 500\n}\n```\n\n**Provide logs (if relevant)**:\n\n```\n[2016-05-04 16:55:54,814][WARN ][rest.suppressed ] /_ingest/pipeline/doesnotexist/_simulate Params: {pretty=, id=doesnotexist}\njava.lang.NullPointerException\n at org.elasticsearch.action.ingest.SimulateExecutionService$1.doRun(SimulateExecutionService.java:72)\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:452)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n```\n", "comments": [ { "body": "@talevy could you take a look please?\n", "created_at": "2016-05-04T15:17:28Z" }, { "body": "yup!\n", "created_at": "2016-05-04T16:01:35Z" }, { "body": "this should make for a nicer exception ^^\n", "created_at": "2016-05-06T20:30:46Z" } ], "number": 18139, "title": "NPE when calling non existing ingest pipeline" }
{ "body": "Instead of receiving a\n\n```\n{\n\"type\" : \"null_pointer_exception\",\n\"reason\" : null\n}\n```\n\nyou now receive a more detailed error:\n\n```\n{\n\"type\" : \"illegal_argument_exception\",\n\"reason\" : \"pipeline [<PIPELINE_ID>] does not exist\"\n}\n```\n\nfixes #18139\n", "number": 18190, "review_comments": [], "title": "add check for non-existent pipelines provided to simulate requests" }
{ "commits": [ { "message": "add check for non-existent pipelines provided to simulate requests\n\nfixes #18139" } ], "files": [ { "diff": "@@ -132,6 +132,9 @@ static Parsed parseWithPipelineId(String pipelineId, Map<String, Object> config,\n throw new IllegalArgumentException(\"param [pipeline] is null\");\n }\n Pipeline pipeline = pipelineStore.get(pipelineId);\n+ if (pipeline == null) {\n+ throw new IllegalArgumentException(\"pipeline [\" + pipelineId + \"] does not exist\");\n+ }\n List<IngestDocument> ingestDocumentList = parseDocs(config);\n return new Parsed(pipeline, ingestDocumentList, verbose);\n }", "filename": "core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequest.java", "status": "modified" }, { "diff": "@@ -39,6 +39,7 @@\n import java.util.Map;\n \n import static org.elasticsearch.action.ingest.SimulatePipelineRequest.Fields;\n+import static org.elasticsearch.action.ingest.SimulatePipelineRequest.SIMULATED_PIPELINE_ID;\n import static org.elasticsearch.ingest.core.IngestDocument.MetaData.ID;\n import static org.elasticsearch.ingest.core.IngestDocument.MetaData.INDEX;\n import static org.elasticsearch.ingest.core.IngestDocument.MetaData.TYPE;\n@@ -55,12 +56,12 @@ public class SimulatePipelineRequestParsingTests extends ESTestCase {\n public void init() throws IOException {\n TestProcessor processor = new TestProcessor(ingestDocument -> {});\n CompoundProcessor pipelineCompoundProcessor = new CompoundProcessor(processor);\n- Pipeline pipeline = new Pipeline(SimulatePipelineRequest.SIMULATED_PIPELINE_ID, null, pipelineCompoundProcessor);\n+ Pipeline pipeline = new Pipeline(SIMULATED_PIPELINE_ID, null, pipelineCompoundProcessor);\n ProcessorsRegistry.Builder processorRegistryBuilder = new ProcessorsRegistry.Builder();\n processorRegistryBuilder.registerProcessor(\"mock_processor\", ((templateService, registry) -> mock(Processor.Factory.class)));\n ProcessorsRegistry processorRegistry = processorRegistryBuilder.build(TestTemplateService.instance());\n store = mock(PipelineStore.class);\n- when(store.get(SimulatePipelineRequest.SIMULATED_PIPELINE_ID)).thenReturn(pipeline);\n+ when(store.get(SIMULATED_PIPELINE_ID)).thenReturn(pipeline);\n when(store.getProcessorRegistry()).thenReturn(processorRegistry);\n }\n \n@@ -91,7 +92,7 @@ public void testParseUsingPipelineStore() throws Exception {\n expectedDocs.add(expectedDoc);\n }\n \n- SimulatePipelineRequest.Parsed actualRequest = SimulatePipelineRequest.parseWithPipelineId(SimulatePipelineRequest.SIMULATED_PIPELINE_ID, requestContent, false, store);\n+ SimulatePipelineRequest.Parsed actualRequest = SimulatePipelineRequest.parseWithPipelineId(SIMULATED_PIPELINE_ID, requestContent, false, store);\n assertThat(actualRequest.isVerbose(), equalTo(false));\n assertThat(actualRequest.getDocuments().size(), equalTo(numDocs));\n Iterator<Map<String, Object>> expectedDocsIterator = expectedDocs.iterator();\n@@ -104,7 +105,7 @@ public void testParseUsingPipelineStore() throws Exception {\n assertThat(ingestDocument.getSourceAndMetadata(), equalTo(expectedDocument.get(Fields.SOURCE)));\n }\n \n- assertThat(actualRequest.getPipeline().getId(), equalTo(SimulatePipelineRequest.SIMULATED_PIPELINE_ID));\n+ assertThat(actualRequest.getPipeline().getId(), equalTo(SIMULATED_PIPELINE_ID));\n assertThat(actualRequest.getPipeline().getDescription(), nullValue());\n assertThat(actualRequest.getPipeline().getProcessors().size(), equalTo(1));\n }\n@@ -177,8 +178,27 @@ public void testParseWithProvidedPipeline() throws Exception {\n 
assertThat(ingestDocument.getSourceAndMetadata(), equalTo(expectedDocument.get(Fields.SOURCE)));\n }\n \n- assertThat(actualRequest.getPipeline().getId(), equalTo(SimulatePipelineRequest.SIMULATED_PIPELINE_ID));\n+ assertThat(actualRequest.getPipeline().getId(), equalTo(SIMULATED_PIPELINE_ID));\n assertThat(actualRequest.getPipeline().getDescription(), nullValue());\n assertThat(actualRequest.getPipeline().getProcessors().size(), equalTo(numProcessors));\n }\n+\n+ public void testNullPipelineId() {\n+ Map<String, Object> requestContent = new HashMap<>();\n+ List<Map<String, Object>> docs = new ArrayList<>();\n+ requestContent.put(Fields.DOCS, docs);\n+ Exception e = expectThrows(IllegalArgumentException.class,\n+ () -> SimulatePipelineRequest.parseWithPipelineId(null, requestContent, false, store));\n+ assertThat(e.getMessage(), equalTo(\"param [pipeline] is null\"));\n+ }\n+\n+ public void testNonExistentPipelineId() {\n+ String pipelineId = randomAsciiOfLengthBetween(1, 10);\n+ Map<String, Object> requestContent = new HashMap<>();\n+ List<Map<String, Object>> docs = new ArrayList<>();\n+ requestContent.put(Fields.DOCS, docs);\n+ Exception e = expectThrows(IllegalArgumentException.class,\n+ () -> SimulatePipelineRequest.parseWithPipelineId(pipelineId, requestContent, false, store));\n+ assertThat(e.getMessage(), equalTo(\"pipeline [\" + pipelineId + \"] does not exist\"));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/action/ingest/SimulatePipelineRequestParsingTests.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**:\n5.0 Alpha 2\n**JVM version**:\nopenjdk version \"1.8.0_91\"\n**OS version**:\nCentos 7\n**Description of the problem including expected versus actual behavior**:\ninplace upgrade using 64bit rpm (e.g. rpm -U elasticsearch-5.0.0-alpha2.rpm) overwrites the elasticsearch.yml file . Expect it to create an rpmnew file. \n**Steps to reproduce**:\n 1.install alpha 1\n 2.edit elasticsearch.yml\n 3.upgrade with rpm -U elasticsearch-5.0.0-alpha2.rpm\n\n**Provide logs (if relevant)**:\n", "comments": [ { "body": "This is the expected behavior in your circumstance. When `rpm -U` is invoked, there are different scenarios. The scenarios come from there being three possible config files: the original config file distributed with alpha1, your config file with edits, and the config file that is distributed with alpha2. In this situation, you have alpha1 = x, your config = y and it turns out that alpha2 = z (there was a change to the shipped config file between alpha1 and alpha2). In this case, rpm assumes that z must be used with the new package (it can not safely assume that either x or y are safe to use with the new package). That is why it will not produce an `rpmnew` here.\n", "created_at": "2016-05-05T15:52:06Z" }, { "body": "Would it not be better then to produce an rpmold rather than entirely lose the configuration?\n", "created_at": "2016-05-05T15:56:19Z" }, { "body": "> Would it not be better then to produce an rpmold rather than entirely lose the configuration?\n\nIt should have produced an `rpmsave`. Are you saying that it did not? If not, that is a bug that we should fix.\n", "created_at": "2016-05-05T15:59:05Z" }, { "body": "It indeed did not\n", "created_at": "2016-05-05T16:01:57Z" }, { "body": "> It indeed did not\n\nYes, I just reproduced this as well. Thanks for reporting.\n", "created_at": "2016-05-05T16:04:18Z" }, { "body": "@Martin-Logan I've marked you as eligible for the [Pioneer Program](https://www.elastic.co/blog/elastic-pioneer-program) and opened #18188.\n", "created_at": "2016-05-06T15:40:40Z" } ], "number": 18158, "title": "rpm -U deletes elasticsearch.yml" }
{ "body": "This commit modifies the packaging for the RPM package so that edits to\nconfig files will not get lost during removal and upgrade.\n\nCloses #18158\n", "number": 18188, "review_comments": [], "title": "Preserve config files from RPM install" }
{ "commits": [ { "message": "Preserve config files from RPM install\n\nThis commit modifies the packaging for the RPM package so that edits to\nconfig files will not get lost during removal and upgrade." } ], "files": [ { "diff": "@@ -322,34 +322,39 @@ configure(subprojects.findAll { ['deb', 'rpm'].contains(it.name) }) {\n configurationFile '/etc/elasticsearch/elasticsearch.yml'\n configurationFile '/etc/elasticsearch/jvm.options'\n configurationFile '/etc/elasticsearch/logging.yml'\n- into('/etc') {\n- from \"${packagingFiles}/etc\"\n+ into('/etc/elasticsearch') {\n fileMode 0750\n permissionGroup 'elasticsearch'\n includeEmptyDirs true\n createDirectoryEntry true\n+ fileType CONFIG | NOREPLACE\n+ from \"${packagingFiles}/etc/elasticsearch\"\n }\n \n into('/usr/lib/tmpfiles.d') {\n from \"${packagingFiles}/systemd/elasticsearch.conf\"\n }\n configurationFile '/usr/lib/systemd/system/elasticsearch.service'\n into('/usr/lib/systemd/system') {\n+ fileType CONFIG | NOREPLACE\n from \"${packagingFiles}/systemd/elasticsearch.service\"\n }\n into('/usr/lib/sysctl.d') {\n+ fileType CONFIG | NOREPLACE\n from \"${packagingFiles}/systemd/sysctl/elasticsearch.conf\"\n }\n configurationFile '/etc/init.d/elasticsearch'\n into('/etc/init.d') {\n- from \"${packagingFiles}/init.d/elasticsearch\"\n fileMode 0755\n+ fileType CONFIG | NOREPLACE\n+ from \"${packagingFiles}/init.d/elasticsearch\"\n }\n configurationFile project.expansions['path.env']\n into(new File(project.expansions['path.env']).getParent()) {\n- from \"${project.packagingFiles}/env/elasticsearch\"\n fileMode 0644\n dirMode 0755\n+ fileType CONFIG | NOREPLACE\n+ from \"${project.packagingFiles}/env/elasticsearch\"\n }\n \n /**", "filename": "distribution/build.gradle", "status": "modified" }, { "diff": "@@ -55,6 +55,7 @@ LOG_DIR=\"/var/log/elasticsearch\"\n PLUGINS_DIR=\"/usr/share/elasticsearch/plugins\"\n PID_DIR=\"/var/run/elasticsearch\"\n DATA_DIR=\"/var/lib/elasticsearch\"\n+CONF_DIR=\"/etc/elasticsearch\"\n \n # Source the default env file\n if [ \"$SOURCE_ENV_FILE\" = \"true\" ]; then\n@@ -102,6 +103,12 @@ if [ \"$REMOVE_DIRS\" = \"true\" ]; then\n if [ -d \"$DATA_DIR\" ]; then\n rmdir --ignore-fail-on-non-empty \"$DATA_DIR\"\n fi\n+\n+ # delete the conf directory if and only if empty\n+ if [ -d \"$CONF_DIR\" ]; then\n+ rmdir --ignore-fail-on-non-empty \"$CONF_DIR\"\n+ fi\n+\n fi\n \n if [ \"$REMOVE_USER_AND_GROUP\" = \"true\" ]; then", "filename": "distribution/src/main/packaging/scripts/postrm", "status": "modified" }, { "diff": "@@ -64,4 +64,10 @@ if [ \"$STOP_REQUIRED\" = \"true\" ]; then\n echo \" OK\"\n fi\n \n+SCRIPTS_DIR=\"/etc/elasticsearch/scripts\"\n+# delete the scripts directory if and only if empty\n+if [ -d \"$SCRIPTS_DIR\" ]; then\n+ rmdir --ignore-fail-on-non-empty \"$SCRIPTS_DIR\"\n+fi\n+\n ${scripts.footer}", "filename": "distribution/src/main/packaging/scripts/prerm", "status": "modified" }, { "diff": "@@ -116,7 +116,7 @@ setup() {\n \n assert_file_not_exist \"/etc/elasticsearch\"\n assert_file_not_exist \"/etc/elasticsearch/elasticsearch.yml\"\n- assert_file_not_exist \"/etc/elasticsearch/jvm.options\"\n+ assert_file_not_exist \"/etc/elasticsearch/jvm.options\"\n assert_file_not_exist \"/etc/elasticsearch/logging.yml\"\n \n assert_file_not_exist \"/etc/init.d/elasticsearch\"\n@@ -125,7 +125,6 @@ setup() {\n assert_file_not_exist \"/etc/sysconfig/elasticsearch\"\n }\n \n-\n @test \"[RPM] reinstall package\" {\n rpm -i elasticsearch-$(cat version).rpm\n }\n@@ -134,14 +133,48 @@ setup() {\n rpm -qe 'elasticsearch'\n 
}\n \n-@test \"[RPM] verify package reinstallation\" {\n- verify_package_installation\n-}\n-\n @test \"[RPM] reremove package\" {\n+ echo \"# ping\" >> \"/etc/elasticsearch/elasticsearch.yml\"\n+ echo \"# ping\" >> \"/etc/elasticsearch/jvm.options\"\n+ echo \"# ping\" >> \"/etc/elasticsearch/logging.yml\"\n+ echo \"# ping\" >> \"/etc/elasticsearch/scripts/script\"\n rpm -e 'elasticsearch'\n }\n \n+@test \"[RPM] verify preservation\" {\n+ # The removal must disable the service\n+ # see prerm file\n+ if is_systemd; then\n+ run systemctl is-enabled elasticsearch.service\n+ [ \"$status\" -eq 1 ]\n+ fi\n+\n+ # Those directories are deleted when removing the package\n+ # see postrm file\n+ assert_file_not_exist \"/var/log/elasticsearch\"\n+ assert_file_not_exist \"/usr/share/elasticsearch/plugins\"\n+ assert_file_not_exist \"/var/run/elasticsearch\"\n+\n+ assert_file_not_exist \"/etc/elasticsearch/elasticsearch.yml\"\n+ assert_file_exist \"/etc/elasticsearch/elasticsearch.yml.rpmsave\"\n+ assert_file_not_exist \"/etc/elasticsearch/jvm.options\"\n+ assert_file_exist \"/etc/elasticsearch/jvm.options.rpmsave\"\n+ assert_file_not_exist \"/etc/elasticsearch/logging.yml\"\n+ assert_file_exist \"/etc/elasticsearch/logging.yml.rpmsave\"\n+ assert_file_exist \"/etc/elasticsearch/scripts.rpmsave\"\n+ assert_file_exist \"/etc/elasticsearch/scripts.rpmsave/script\"\n+\n+ assert_file_not_exist \"/etc/init.d/elasticsearch\"\n+ assert_file_not_exist \"/usr/lib/systemd/system/elasticsearch.service\"\n+\n+ assert_file_not_exist \"/etc/sysconfig/elasticsearch\"\n+}\n+\n+@test \"[RPM] finalize package removal\" {\n+ # cleanup\n+ rm -rf /etc/elasticsearch\n+}\n+\n @test \"[RPM] package has been removed again\" {\n run rpm -qe 'elasticsearch'\n [ \"$status\" -eq 1 ]", "filename": "qa/vagrant/src/test/resources/packaging/scripts/40_rpm_package.bats", "status": "modified" } ] }
{ "body": "Hi guys,\n\nwe have upgraded ElasticSearch from 2.3.0 and reindexed our geolocations so the latitude and longitude are stored separately. We have noticed that some of our visualisation started to fail after we add a filter based on geolocation rectangle. However, map visualisation are working just fine. The problem occurs when we include actual documents. In this case, we get some failed shards (usually 1 out of 5) and error: Invalid shift value (xx) in prefixCoded bytes (is encoded value really a geo point?).\n\nDetails:\nOur geolocation index is based on:\n\n```\n\"dynamic_templates\": [{\n....\n{\n \"ner_geo\": {\n \"mapping\": {\n \"type\": \"geo_point\",\n \"lat_lon\": true\n },\n \"path_match\": \"*.coordinates\"\n }\n }],\n```\n\nThe ok query with the error is as follows. If we change the query size to 0 (map visualizations example), the query completes without problem.\n\n```\n{\n \"size\": 100,\n \"aggs\": {\n \"2\": {\n \"geohash_grid\": {\n \"field\": \"authors.affiliation.coordinates\",\n \"precision\": 2\n }\n }\n },\n \"query\": {\n \"filtered\": {\n \"query\": {\n \"query_string\": {\n \"analyze_wildcard\": true,\n \"query\": \"*\"\n }\n },\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"geo_bounding_box\": {\n \"authors.affiliation.coordinates\": {\n \"top_left\": {\n \"lat\": 61.10078883158897,\n \"lon\": -170.15625\n },\n \"bottom_right\": {\n \"lat\": -64.92354174306496,\n \"lon\": 118.47656249999999\n }\n }\n }\n }\n ],\n \"must_not\": []\n }\n }\n }\n },\n \"highlight\": {\n \"pre_tags\": [\n \"@kibana-highlighted-field@\"\n ],\n \"post_tags\": [\n \"@/kibana-highlighted-field@\"\n ],\n \"fields\": {\n \"*\": {}\n },\n \"require_field_match\": false,\n \"fragment_size\": 2147483647\n }\n}\n```\n\nElasticsearch version**: 2.3.0\nOS version**: Elasticsearch docker image with head plugin, marvel and big desk installed\n\nThank you for your help,\nregards,\nJakub Smid\n", "comments": [ { "body": "@jaksmid could you provide some documents and the stack trace that is produced when you see this exception please?\n", "created_at": "2016-04-06T11:08:13Z" }, { "body": "@jpountz given that this only happens with `size` > 0, I'm wondering if this highlighting trying to highlight the geo field? Perhaps with no documents on a particular shard?\n\n/cc @nknize \n", "created_at": "2016-04-06T11:09:22Z" }, { "body": "I can reproduce something that looks just like this with a lucene test if you apply the patch on https://issues.apache.org/jira/browse/LUCENE-7185\n\nI suspect it may happen with extreme values such as latitude = 90 or longitude = 180 which are used much more in tests with the patch. 
See seed:\n\n```\n [junit4] Suite: org.apache.lucene.spatial.geopoint.search.TestGeoPointQuery\n [junit4] IGNOR/A 0.01s J1 | TestGeoPointQuery.testRandomBig\n [junit4] > Assumption #1: 'nightly' test group is disabled (@Nightly())\n [junit4] IGNOR/A 0.00s J1 | TestGeoPointQuery.testRandomDistanceHuge\n [junit4] > Assumption #1: 'nightly' test group is disabled (@Nightly())\n [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestGeoPointQuery -Dtests.method=testAllLonEqual -Dtests.seed=4ABB96AB44F4796E -Dtests.locale=id-ID -Dtests.timezone=Pacific/Fakaofo -Dtests.asserts=true -Dtests.file.encoding=US-ASCII\n [junit4] ERROR 0.35s J1 | TestGeoPointQuery.testAllLonEqual <<<\n [junit4] > Throwable #1: java.lang.IllegalArgumentException: Illegal shift value, must be 32..63; got shift=0\n [junit4] > at __randomizedtesting.SeedInfo.seed([4ABB96AB44F4796E:DBB16756B45E397A]:0)\n [junit4] > at org.apache.lucene.spatial.util.GeoEncodingUtils.geoCodedToPrefixCodedBytes(GeoEncodingUtils.java:109)\n [junit4] > at org.apache.lucene.spatial.util.GeoEncodingUtils.geoCodedToPrefixCoded(GeoEncodingUtils.java:89)\n [junit4] > at org.apache.lucene.spatial.geopoint.search.GeoPointPrefixTermsEnum$Range.fillBytesRef(GeoPointPrefixTermsEnum.java:236)\n [junit4] > at org.apache.lucene.spatial.geopoint.search.GeoPointTermsEnum.nextRange(GeoPointTermsEnum.java:71)\n [junit4] > at org.apache.lucene.spatial.geopoint.search.GeoPointPrefixTermsEnum.nextRange(GeoPointPrefixTermsEnum.java:171)\n [junit4] > at org.apache.lucene.spatial.geopoint.search.GeoPointPrefixTermsEnum.nextSeekTerm(GeoPointPrefixTermsEnum.java:190)\n [junit4] > at org.apache.lucene.index.FilteredTermsEnum.next(FilteredTermsEnum.java:212)\n [junit4] > at org.apache.lucene.spatial.geopoint.search.GeoPointTermQueryConstantScoreWrapper$1.scorer(GeoPointTermQueryConstantScoreWrapper.java:110)\n [junit4] > at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)\n [junit4] > at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:644)\n [junit4] > at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)\n [junit4] > at org.apache.lucene.search.BooleanWeight.optionalBulkScorer(BooleanWeight.java:231)\n [junit4] > at org.apache.lucene.search.BooleanWeight.booleanScorer(BooleanWeight.java:297)\n [junit4] > at org.apache.lucene.search.BooleanWeight.bulkScorer(BooleanWeight.java:364)\n [junit4] > at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:644)\n [junit4] > at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)\n [junit4] > at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)\n [junit4] > at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)\n [junit4] > at org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)\n [junit4] > at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)\n [junit4] > at org.apache.lucene.spatial.util.BaseGeoPointTestCase.verifyRandomRectangles(BaseGeoPointTestCase.java:835)\n [junit4] > at org.apache.lucene.spatial.util.BaseGeoPointTestCase.verify(BaseGeoPointTestCase.java:763)\n [junit4] > at org.apache.lucene.spatial.util.BaseGeoPointTestCase.testAllLonEqual(BaseGeoPointTestCase.java:495)\n\n```\n", "created_at": "2016-04-07T07:17:50Z" }, { "body": "Hi @clintongormley, thank you for your message. 
\n\nThe stack trace is as follows:\n`RemoteTransportException[[elasticsearch_4][172.17.0.2:9300][indices:data/read/search[phase/fetch/id]]]; nested: FetchPhaseExecutionException[Fetch Failed [Failed to highlight field [cyberdyne_metadata.ner.mitie.model.DISEASE.tag]]]; nested: NumberFormatException[Invalid shift value (65) in prefixCoded bytes (is encoded value really a geo point?)];\nCaused by: FetchPhaseExecutionException[Fetch Failed [Failed to highlight field [cyberdyne_metadata.ner.mitie.model.DISEASE.tag]]]; nested: NumberFormatException[Invalid shift value (65) in prefixCoded bytes (is encoded value really a geo point?)];\n at org.elasticsearch.search.highlight.PlainHighlighter.highlight(PlainHighlighter.java:123)\n at org.elasticsearch.search.highlight.HighlightPhase.hitExecute(HighlightPhase.java:126)\n at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:188)\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:592)\n at org.elasticsearch.search.action.SearchServiceTransportAction$FetchByIdTransportHandler.messageReceived(SearchServiceTransportAction.java:408)\n at org.elasticsearch.search.action.SearchServiceTransportAction$FetchByIdTransportHandler.messageReceived(SearchServiceTransportAction.java:405)\n at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)\n at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:300)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.NumberFormatException: Invalid shift value (65) in prefixCoded bytes (is encoded value really a geo point?)\n at org.apache.lucene.spatial.util.GeoEncodingUtils.getPrefixCodedShift(GeoEncodingUtils.java:134)\n at org.apache.lucene.spatial.geopoint.search.GeoPointPrefixTermsEnum.accept(GeoPointPrefixTermsEnum.java:219)\n at org.apache.lucene.index.FilteredTermsEnum.next(FilteredTermsEnum.java:232)\n at org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:67)\n at org.apache.lucene.search.ScoringRewrite.rewrite(ScoringRewrite.java:108)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:220)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:227)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:113)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:113)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.getWeightedSpanTerms(WeightedSpanTermExtractor.java:505)\n at org.apache.lucene.search.highlight.QueryScorer.initExtractor(QueryScorer.java:218)\n at org.apache.lucene.search.highlight.QueryScorer.init(QueryScorer.java:186)\n at org.apache.lucene.search.highlight.Highlighter.getBestTextFragments(Highlighter.java:195)\n at org.elasticsearch.search.highlight.PlainHighlighter.highlight(PlainHighlighter.java:108)\n ... 
12 more`\n\nThe field cyberdyne_metadata.ner.mitie.model.DISEASE.tag should not be a geopoint according to the dynamic template.\n", "created_at": "2016-04-07T07:20:27Z" }, { "body": "@rmuir oh, good catch\n@clintongormley The stack trace indeed suggests that the issue is with highlighting on the geo field. Regardless of this bug, I wonder that we should fail early when highlighting on anything but text fields and/or exclude non-text fields from wildcard matching.\n", "created_at": "2016-04-07T07:33:06Z" }, { "body": "> I wonder that we should fail early when highlighting on anything but text fields and/or exclude non-text fields from wildcard matching.\n\n+1 to fail early if the user explicitly defined a non text field to highlight on and exclude non text fields when using wildcards\n", "created_at": "2016-04-07T08:40:43Z" }, { "body": "I was running into this bug during a live demo... Yes I know, I've should have tested all demo scenario's after updating ES :grimacing: . Anyway, +1 for fixing this!\n", "created_at": "2016-04-17T08:09:07Z" }, { "body": "-I´m having the same error. It's happends with doc having location and trying to use \n\"highlight\": {... \"require_field_match\": false ...}\n\nthanks!\n", "created_at": "2016-04-18T21:45:45Z" }, { "body": "I'm unclear as to what exactly is going on here, but I'm running into the same issue. I'm attempting to do a geo bounding box in Kibana while viewing the results in the Discover tab. Disabling highlighting in Kibana fixes the issue, but I would actually like to keep highlighting enabled, since it's super useful otherwise.\n\nIt sounds from what others are saying that this should fail when querying on _any_ non-string field, but I am not getting the same failure on numeric fields. Is it just an issue with geoip fields? I suppose another nice thing would be to explicitly allow for configuration of which fields should be highlighted in Kibana.\n", "created_at": "2016-05-03T01:52:24Z" }, { "body": "Please fix this issue.\n", "created_at": "2016-05-03T10:40:19Z" }, { "body": "I wrote two tests so that everyone can reproduce what happens easily: https://github.com/brwe/elasticsearch/commit/ffa242941e4ede34df67301f7b9d46ea8719cc22\n\nIn brief:\nThe plain highlighter tries to highlight whatever the BBQuery provides as terms in the text \"60,120\" if that is how the `geo_point` was indexed (if the point was indexed with `{\"lat\": 60, \"lon\": 120}` nothing will happen because we cannot even extract anything from the source). The terms in the text are provided to Lucene as a token steam with a keyword analyzer.\nIn Lucene, this token stream is converted this via a longish call stack into a terms enum. But this terms enum is pulled from the query that contains the terms that are to be highlighted. In this case we call `GeoPointMultiTermQuery.getTermsEnum(terms)` which wraps the term in a `GeoPointTermsEnum`. This enum tries to convert a prefix coded geo term back to something else but because it is really just the string \"60,120\" it throws the exception we see. \n\nI am unsure yet how a correct fix would look like but do wonder why we try highlingting on numeric and geo fields at all? If anyone has an opinion let me know.\n", "created_at": "2016-05-04T17:50:50Z" }, { "body": "I missed @jpountz comment:\n\n> Regardless of this bug, I wonder that we should fail early when highlighting on anything but text fields and/or exclude non-text fields from wildcard matching.\n\nI agree. 
Will make a pr for that.\n", "created_at": "2016-05-04T17:57:01Z" }, { "body": "@brwe you did something similar before: https://github.com/elastic/elasticsearch/pull/11364 - i would have thought that that PR should have fixed this issue?\n", "created_at": "2016-05-05T08:17:58Z" }, { "body": "@clintongormley Yes you are right. #11364 only addresses problems one gets when the way text is indexed is not compatible with the highlighter used. I do not remember why I did not exclude numeric fields then. \n", "created_at": "2016-05-05T09:15:10Z" }, { "body": "Great work. Tnx \n\n:sunglasses: \n", "created_at": "2016-05-07T13:15:25Z" }, { "body": "This is not fixed in 2.3.3 yet, correct?\n", "created_at": "2016-05-19T07:09:10Z" }, { "body": "@rodgermoore It should be fixed in 2.3.3, can you still reproduce the problem?\n", "created_at": "2016-05-19T07:13:30Z" }, { "body": "Ubuntu 14.04-04\nElasticsearch 2.3.3\nKibana 4.5.1\nJVM 1.8.0_66\n\nI am still able to reproduce this error in Kibana 4.5.1. I have a dashboard with a search panel with highlighting enabled. On the same Dashboard I have a tile map and after selecting an area in this map using the select function (draw a rectangle) I got the \"Invalid shift value (xx) in prefixCoded bytes (is encoded value really a geo point?)\" error.\n\nWhen I alter the json settings file of the search panel and remove highlighting the error does not pop-up.\n", "created_at": "2016-05-19T11:44:32Z" }, { "body": "@rodgermoore I cannot reproduce this but I might do something different from you. Here is my dashboard:\n\n![image](https://cloud.githubusercontent.com/assets/4320215/15393472/bd2b4cf2-1dcd-11e6-8ac1-cf6ba5e995b7.png)\n\nIs that what you did?\nCan you attach the whole stacktrace from the elasticsearch logs again? If you did not change the logging config the full search request should be in there. Also, if you can please add an example document.\n", "created_at": "2016-05-19T13:07:51Z" }, { "body": "I see you used \"text:blah\". I did not enter a search at all (so used the default wildcard) and then did the aggregation on the tile map. This resulted in the error. \n", "created_at": "2016-05-19T13:12:50Z" }, { "body": "I can remove the query and still get a result. Can you please attach the relevant part of the elasticsearch log? 
\n", "created_at": "2016-05-19T13:16:46Z" }, { "body": "Here you go:\n\n```\n[2016-05-19 15:23:08,270][DEBUG][action.search ] [Black King] All shards failed for phase: [query_fetch]\nRemoteTransportException[[Black King][192.168.48.18:9300][indices:data/read/search[phase/query+fetch]]]; nested: FetchPhaseExecutionException[Fetch Failed [Failed to highlight field [tags.nl]]]; nested: NumberFormatException[Invalid shift value (115) in prefixCoded bytes (is encoded value really a geo point?)];\nCaused by: FetchPhaseExecutionException[Fetch Failed [Failed to highlight field [tags.nl]]]; nested: NumberFormatException[Invalid shift value (115) in prefixCoded bytes (is encoded value really a geo point?)];\n at org.elasticsearch.search.highlight.PlainHighlighter.highlight(PlainHighlighter.java:123)\n at org.elasticsearch.search.highlight.HighlightPhase.hitExecute(HighlightPhase.java:140)\n at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:188)\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:480)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchServiceTransportAction.java:392)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchServiceTransportAction.java:389)\n at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.NumberFormatException: Invalid shift value (115) in prefixCoded bytes (is encoded value really a geo point?)\n at org.apache.lucene.spatial.util.GeoEncodingUtils.getPrefixCodedShift(GeoEncodingUtils.java:134)\n at org.apache.lucene.spatial.geopoint.search.GeoPointPrefixTermsEnum.accept(GeoPointPrefixTermsEnum.java:219)\n at org.apache.lucene.index.FilteredTermsEnum.next(FilteredTermsEnum.java:232)\n at org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:67)\n at org.apache.lucene.search.ScoringRewrite.rewrite(ScoringRewrite.java:108)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:220)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:227)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:113)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:113)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.getWeightedSpanTerms(WeightedSpanTermExtractor.java:505)\n at org.apache.lucene.search.highlight.QueryScorer.initExtractor(QueryScorer.java:218)\n at org.apache.lucene.search.highlight.QueryScorer.init(QueryScorer.java:186)\n at org.apache.lucene.search.highlight.Highlighter.getBestTextFragments(Highlighter.java:195)\n at org.elasticsearch.search.highlight.PlainHighlighter.highlight(PlainHighlighter.java:108)\n ... 
12 more\n```\n\nWe are using dynamic mapping and we dynamically analyse all string fields using the Dutch language analyzer. All string fields get a non analyzed field: \"field.raw\" and a Dutch analyzed field \"field.nl\". \n", "created_at": "2016-05-19T13:37:15Z" }, { "body": "Ah...I was hoping to get the actual request but it is not in the stacktrace after all. Can you also add the individual requests from the panels in your dashboard (in the spy tab) and a screenshot so I can see what the geo bounding box filter filters on? I could then try to reconstruct the request.\n\nAlso, are you sure you upgraded all nodes in the cluster? Check with `curl -XGET \"http://hostname:port/_nodes\"`. Would be great if you could add the output of that here too just to be sure. \n", "created_at": "2016-05-19T13:46:59Z" }, { "body": "I have got the exact same issue. I am running 2.3.3. All my nodes (1) are upgraded.\n", "created_at": "2016-05-19T14:17:36Z" }, { "body": "<img width=\"1676\" alt=\"screen shot 2016-05-19 at 16 29 15\" src=\"https://cloud.githubusercontent.com/assets/78766/15397413/7858191c-1de0-11e6-802b-773f4a7ecf79.png\">\n", "created_at": "2016-05-19T14:42:02Z" }, { "body": "Here you go.\n\nTile Map Query:\n\n```\n{\n \"query\": {\n \"filtered\": {\n \"query\": {\n \"query_string\": {\n \"analyze_wildcard\": true,\n \"query\": \"*\"\n }\n },\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"geo_bounding_box\": {\n \"SomeGeoField\": {\n \"top_left\": {\n \"lat\": REMOVED,\n \"lon\": REMOVED\n },\n \"bottom_right\": {\n \"lat\": REMOVED,\n \"lon\": REMOVED\n }\n }\n },\n \"$state\": {\n \"store\": \"appState\"\n }\n },\n {\n \"query\": {\n \"query_string\": {\n \"query\": \"*\",\n \"analyze_wildcard\": true\n }\n }\n },\n {\n \"range\": {\n \"@timestamp\": {\n \"gte\": 1458485686484,\n \"lte\": 1463666086484,\n \"format\": \"epoch_millis\"\n }\n }\n }\n ],\n \"must_not\": []\n }\n }\n }\n },\n \"size\": 0,\n \"aggs\": {\n \"2\": {\n \"geohash_grid\": {\n \"field\": \"SomeGeoField\",\n \"precision\": 5\n }\n }\n }\n}\n```\n\nI'm using a single node cluster, here's the info:\n\n```\n{\n \"cluster_name\": \"elasticsearch\",\n \"nodes\": {\n \"RtBthRfeSOSud1XfRRAkSA\": {\n \"name\": \"Black King\",\n \"transport_address\": \"192.168.48.18:9300\",\n \"host\": \"192.168.48.18\",\n \"ip\": \"192.168.48.18\",\n \"version\": \"2.3.3\",\n \"build\": \"218bdf1\",\n \"http_address\": \"192.168.48.18:9200\",\n \"settings\": {\n \"pidfile\": \"/var/run/elasticsearch/elasticsearch.pid\",\n \"cluster\": {\n \"name\": \"elasticsearch\"\n },\n \"path\": {\n \"conf\": \"/etc/elasticsearch\",\n \"data\": \"/var/lib/elasticsearch\",\n \"logs\": \"/var/log/elasticsearch\",\n \"home\": \"/usr/share/elasticsearch\",\n \"repo\": [\n \"/home/somename/es_backups\"\n ]\n },\n \"name\": \"Black King\",\n \"client\": {\n \"type\": \"node\"\n },\n \"foreground\": \"false\",\n \"config\": {\n \"ignore_system_properties\": \"true\"\n },\n \"network\": {\n \"host\": \"0.0.0.0\"\n }\n },\n \"os\": {\n \"refresh_interval_in_millis\": 1000,\n \"name\": \"Linux\",\n \"arch\": \"amd64\",\n \"version\": \"3.19.0-59-generic\",\n \"available_processors\": 8,\n \"allocated_processors\": 8\n },\n \"process\": {\n \"refresh_interval_in_millis\": 1000,\n \"id\": 1685,\n \"mlockall\": false\n },\n \"jvm\": {\n \"pid\": 1685,\n \"version\": \"1.8.0_66\",\n \"vm_name\": \"Java HotSpot(TM) 64-Bit Server VM\",\n \"vm_version\": \"25.66-b17\",\n \"vm_vendor\": \"Oracle Corporation\",\n \"start_time_in_millis\": 1463663018422,\n \"mem\": 
{\n \"heap_init_in_bytes\": 6442450944,\n \"heap_max_in_bytes\": 6372720640,\n \"non_heap_init_in_bytes\": 2555904,\n \"non_heap_max_in_bytes\": 0,\n \"direct_max_in_bytes\": 6372720640\n },\n \"gc_collectors\": [\n \"ParNew\",\n \"ConcurrentMarkSweep\"\n ],\n \"memory_pools\": [\n \"Code Cache\",\n \"Metaspace\",\n \"Compressed Class Space\",\n \"Par Eden Space\",\n \"Par Survivor Space\",\n \"CMS Old Gen\"\n ],\n \"using_compressed_ordinary_object_pointers\": \"true\"\n },\n \"thread_pool\": {\n \"force_merge\": {\n \"type\": \"fixed\",\n \"min\": 1,\n \"max\": 1,\n \"queue_size\": -1\n },\n \"percolate\": {\n \"type\": \"fixed\",\n \"min\": 8,\n \"max\": 8,\n \"queue_size\": 1000\n },\n \"fetch_shard_started\": {\n \"type\": \"scaling\",\n \"min\": 1,\n \"max\": 16,\n \"keep_alive\": \"5m\",\n \"queue_size\": -1\n },\n \"listener\": {\n \"type\": \"fixed\",\n \"min\": 4,\n \"max\": 4,\n \"queue_size\": -1\n },\n \"index\": {\n \"type\": \"fixed\",\n \"min\": 8,\n \"max\": 8,\n \"queue_size\": 200\n },\n \"refresh\": {\n \"type\": \"scaling\",\n \"min\": 1,\n \"max\": 4,\n \"keep_alive\": \"5m\",\n \"queue_size\": -1\n },\n \"suggest\": {\n \"type\": \"fixed\",\n \"min\": 8,\n \"max\": 8,\n \"queue_size\": 1000\n },\n \"generic\": {\n \"type\": \"cached\",\n \"keep_alive\": \"30s\",\n \"queue_size\": -1\n },\n \"warmer\": {\n \"type\": \"scaling\",\n \"min\": 1,\n \"max\": 4,\n \"keep_alive\": \"5m\",\n \"queue_size\": -1\n },\n \"search\": {\n \"type\": \"fixed\",\n \"min\": 13,\n \"max\": 13,\n \"queue_size\": 1000\n },\n \"flush\": {\n \"type\": \"scaling\",\n \"min\": 1,\n \"max\": 4,\n \"keep_alive\": \"5m\",\n \"queue_size\": -1\n },\n \"fetch_shard_store\": {\n \"type\": \"scaling\",\n \"min\": 1,\n \"max\": 16,\n \"keep_alive\": \"5m\",\n \"queue_size\": -1\n },\n \"management\": {\n \"type\": \"scaling\",\n \"min\": 1,\n \"max\": 5,\n \"keep_alive\": \"5m\",\n \"queue_size\": -1\n },\n \"get\": {\n \"type\": \"fixed\",\n \"min\": 8,\n \"max\": 8,\n \"queue_size\": 1000\n },\n \"bulk\": {\n \"type\": \"fixed\",\n \"min\": 8,\n \"max\": 8,\n \"queue_size\": 50\n },\n \"snapshot\": {\n \"type\": \"scaling\",\n \"min\": 1,\n \"max\": 4,\n \"keep_alive\": \"5m\",\n \"queue_size\": -1\n }\n },\n \"transport\": {\n \"bound_address\": [\n \"[::]:9300\"\n ],\n \"publish_address\": \"192.168.48.18:9300\",\n \"profiles\": {}\n },\n \"http\": {\n \"bound_address\": [\n \"[::]:9200\"\n ],\n \"publish_address\": \"192.168.48.18:9200\",\n \"max_content_length_in_bytes\": 104857600\n },\n \"plugins\": [],\n \"modules\": [\n {\n \"name\": \"lang-expression\",\n \"version\": \"2.3.3\",\n \"description\": \"Lucene expressions integration for Elasticsearch\",\n \"jvm\": true,\n \"classname\": \"org.elasticsearch.script.expression.ExpressionPlugin\",\n \"isolated\": true,\n \"site\": false\n },\n {\n \"name\": \"lang-groovy\",\n \"version\": \"2.3.3\",\n \"description\": \"Groovy scripting integration for Elasticsearch\",\n \"jvm\": true,\n \"classname\": \"org.elasticsearch.script.groovy.GroovyPlugin\",\n \"isolated\": true,\n \"site\": false\n },\n {\n \"name\": \"reindex\",\n \"version\": \"2.3.3\",\n \"description\": \"_reindex and _update_by_query APIs\",\n \"jvm\": true,\n \"classname\": \"org.elasticsearch.index.reindex.ReindexPlugin\",\n \"isolated\": true,\n \"site\": false\n }\n ]\n }\n }\n}\n```\n\nScreenshot, I had to clear out the data:\n\n![error_es](https://cloud.githubusercontent.com/assets/12231719/15397399/668a374c-1de0-11e6-903d-f929a2d9f0b2.PNG)\n", "created_at": 
"2016-05-19T14:42:12Z" }, { "body": "@rodgermoore does the query you provided work correctly? You said that it started working once you deleted the highlighting and this query doesn't contain highlighting. Could you provide the query that doesn't work?\n", "created_at": "2016-05-19T14:45:25Z" }, { "body": "It does has highlighting enabled. This is the json for the search panel: \n\n```\n{\n \"index\": \"someindex\",\n \"query\": {\n \"query_string\": {\n \"query\": \"*\",\n \"analyze_wildcard\": true\n }\n },\n \"filter\": [],\n \"highlight\": {\n \"pre_tags\": [\n \"@kibana-highlighted-field@\"\n ],\n \"post_tags\": [\n \"@/kibana-highlighted-field@\"\n ],\n \"fields\": {\n \"*\": {}\n },\n \"require_field_match\": false,\n \"fragment_size\": 2147483647\n }\n}\n```\n\nI can't show the actual data so I selected to show only the timestamp field in the search panel in the screenshot...\n\nWhen I change the json of the search panel to:\n\n```\n{\n \"index\": \"someindex\",\n \"filter\": [],\n \"query\": {\n \"query_string\": {\n \"query\": \"*\",\n \"analyze_wildcard\": true\n }\n }\n}\n```\n\nThe error disappears.\n", "created_at": "2016-05-19T14:51:27Z" }, { "body": "If my understanding of the patch is correct, it shouldn't matter whether Kibana is including the highlighting field. Elasticsearch should only be trying to highlight string fields, even if a wildcard is being used.\n", "created_at": "2016-05-19T14:54:44Z" }, { "body": "Ok, I managed to reproduce it on 2.3.3. It happens with `\"geohash\": true` in the mapping. \n\nSteps are:\n\n```\nDELETE test\nPUT test \n{\n \"mappings\": {\n \"doc\": {\n \"properties\": {\n \"point\": {\n \"type\": \"geo_point\",\n \"geohash\": true\n }\n }\n }\n }\n}\n\nPUT test/doc/1\n{\n \"point\": \"60.12,100.34\"\n}\n\nPOST test/_search\n{\n \"query\": {\n \"geo_bounding_box\": {\n \"point\": {\n \"top_left\": {\n \"lat\": 61.10078883158897,\n \"lon\": -170.15625\n },\n \"bottom_right\": {\n \"lat\": -64.92354174306496,\n \"lon\": 118.47656249999999\n }\n }\n }\n },\n \"highlight\": {\n \"fields\": {\n \"*\": {}\n }\n }\n}\n```\n\nSorry, I did not think of that. I work on another fix.\n", "created_at": "2016-05-19T16:23:44Z" } ], "number": 17537, "title": "Invalid shift value (xx) in prefixCoded bytes (is encoded value really a geo point?)" }
{ "body": "… in fieldname\n\nWe should prevent highlighting if a field is anything but a text or keyword field.\nHowever, someone might implement a custom field type that has text and still want to\nhighlight on that. We cannot know in advance if the highlighter will be able to\nhighlight such a field and so we do the following:\nIf the field is only highlighted because the field matches a wildcard we assume\nit was a mistake and do not process it.\nIf the field was explicitly given we assume that whoever issued the query knew\nwhat they were doing and try to highlight anyway.\n\ncloses #17537\n\nNote that with this pr if a user adds the full name of a `geo_point` field to the highlight part of the request they will still get the exception seen in #17537.\n\nAnother option would be to list all the fields that we know we cannot highlight and exclude them in the HighlightPhase. But then someone with a custom mapper might run into issues when using wildcards in the highlight request.\n", "number": 18183, "review_comments": [ { "body": "the comment says text or keyword but the code tests text or string?\n", "created_at": "2016-05-06T12:09:10Z" }, { "body": "That is a bug. I wanted to make that keyword or text. I pushed a commit. \n", "created_at": "2016-05-06T12:30:48Z" }, { "body": "@brwe Note that in 5.x we can have text, keyword AND string fields\n", "created_at": "2016-05-06T13:24:03Z" }, { "body": "like Clint said, we should allow string fields too (which will continue to exist on 2.x indices)\n", "created_at": "2016-05-06T13:28:03Z" }, { "body": "Indeed. I got confused with the options and versions. I pushed another commit.\n", "created_at": "2016-05-06T14:28:28Z" }, { "body": "this says text or keyword but the code seems to be doing text or string\n", "created_at": "2016-05-06T14:33:52Z" } ], "title": "Exclude all but string fields from highlighting if wildcards are used…" }
{ "commits": [ { "message": "Exclude all but string fields from highlighting if wildcards are used in fieldname\n\nWe should prevent highlighting if a field is anything but a text or keyword field.\nHowever, someone might implement a custom field type that has text and still want to\nhighlight on that. We cannot know in advance if the highlighter will be able to\nhighlight such a field and so we do the following:\nIf the field is only highlighted because the field matches a wildcard we assume\nit was a mistake and do not process it.\nIf the field was explicitly given we assume that whoever issued the query knew\nwhat they were doing and try to highlight anyway.\n\ncloses #17537" }, { "message": "keyword fields should also be highlighted" }, { "message": "fix highlighing for old version indices with string fields" }, { "message": "add string to documentation" } ], "files": [ { "diff": "@@ -26,6 +26,9 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.FieldMapper;\n+import org.elasticsearch.index.mapper.core.KeywordFieldMapper;\n+import org.elasticsearch.index.mapper.core.StringFieldMapper;\n+import org.elasticsearch.index.mapper.core.TextFieldMapper;\n import org.elasticsearch.index.mapper.internal.SourceFieldMapper;\n import org.elasticsearch.search.SearchParseElement;\n import org.elasticsearch.search.fetch.FetchSubPhase;\n@@ -102,6 +105,21 @@ public void hitExecute(SearchContext context, HitContext hitContext) {\n continue;\n }\n \n+ // We should prevent highlighting if a field is anything but a text or keyword field.\n+ // However, someone might implement a custom field type that has text and still want to\n+ // highlight on that. We cannot know in advance if the highlighter will be able to\n+ // highlight such a field and so we do the following:\n+ // If the field is only highlighted because the field matches a wildcard we assume\n+ // it was a mistake and do not process it.\n+ // If the field was explicitly given we assume that whoever issued the query knew\n+ // what they were doing and try to highlight anyway.\n+ if (fieldNameContainsWildcards) {\n+ if (fieldMapper.fieldType().typeName().equals(TextFieldMapper.CONTENT_TYPE) == false &&\n+ fieldMapper.fieldType().typeName().equals(KeywordFieldMapper.CONTENT_TYPE) == false &&\n+ fieldMapper.fieldType().typeName().equals(StringFieldMapper.CONTENT_TYPE) == false) {\n+ continue;\n+ }\n+ }\n String highlighterType = field.fieldOptions().highlighterType();\n if (highlighterType == null) {\n for(String highlighterCandidate : STANDARD_HIGHLIGHTERS_BY_PRECEDENCE) {", "filename": "core/src/main/java/org/elasticsearch/search/highlight/HighlightPhase.java", "status": "modified" }, { "diff": "@@ -43,6 +43,7 @@ public void onModule(IndicesModule indicesModule) {\n indicesModule.registerMapper(EXTERNAL, new ExternalMapper.TypeParser(EXTERNAL, \"foo\"));\n indicesModule.registerMapper(EXTERNAL_BIS, new ExternalMapper.TypeParser(EXTERNAL_BIS, \"bar\"));\n indicesModule.registerMapper(EXTERNAL_UPPER, new ExternalMapper.TypeParser(EXTERNAL_UPPER, \"FOO BAR\"));\n+ indicesModule.registerMapper(FakeStringFieldMapper.CONTENT_TYPE, new FakeStringFieldMapper.TypeParser());\n }\n \n-}\n\\ No newline at end of file\n+}", "filename": "core/src/test/java/org/elasticsearch/index/mapper/externalvalues/ExternalMapperPlugin.java", "status": "modified" }, { "diff": "@@ -25,10 +25,12 @@\n import org.elasticsearch.common.xcontent.XContentFactory;\n import 
org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.plugins.Plugin;\n+import org.elasticsearch.search.highlight.HighlightBuilder;\n import org.elasticsearch.test.ESIntegTestCase;\n \n import java.util.Collection;\n \n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.equalTo;\n \n public class ExternalValuesMapperIntegrationIT extends ESIntegTestCase {\n@@ -37,6 +39,54 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {\n return pluginList(ExternalMapperPlugin.class);\n }\n \n+ public void testHighlightingOnCustomString() throws Exception {\n+ prepareCreate(\"test-idx\").addMapping(\"type\",\n+ XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"field\").field(\"type\", FakeStringFieldMapper.CONTENT_TYPE).endObject()\n+ .endObject()\n+ .endObject().endObject()).execute().get();\n+ ensureYellow(\"test-idx\");\n+\n+ index(\"test-idx\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"field\", \"Every day is exactly the same\")\n+ .endObject());\n+ refresh();\n+\n+ SearchResponse response;\n+ // test if the highlighting is excluded when we use wildcards\n+ response = client().prepareSearch(\"test-idx\")\n+ .setQuery(QueryBuilders.matchQuery(\"field\", \"exactly the same\"))\n+ .highlighter(new HighlightBuilder().field(\"*\"))\n+ .execute().actionGet();\n+ assertSearchResponse(response);\n+ assertThat(response.getHits().getTotalHits(), equalTo(1L));\n+ assertThat(response.getHits().getAt(0).getHighlightFields().size(), equalTo(0));\n+\n+ // make sure it is not excluded when we explicitly provide the fieldname\n+ response = client().prepareSearch(\"test-idx\")\n+ .setQuery(QueryBuilders.matchQuery(\"field\", \"exactly the same\"))\n+ .highlighter(new HighlightBuilder().field(\"field\"))\n+ .execute().actionGet();\n+ assertSearchResponse(response);\n+ assertThat(response.getHits().getTotalHits(), equalTo(1L));\n+ assertThat(response.getHits().getAt(0).getHighlightFields().size(), equalTo(1));\n+ assertThat(response.getHits().getAt(0).getHighlightFields().get(\"field\").fragments()[0].string(), equalTo(\"Every day is \" +\n+ \"<em>exactly</em> <em>the</em> <em>same</em>\"));\n+\n+ // make sure it is not excluded when we explicitly provide the fieldname and a wildcard\n+ response = client().prepareSearch(\"test-idx\")\n+ .setQuery(QueryBuilders.matchQuery(\"field\", \"exactly the same\"))\n+ .highlighter(new HighlightBuilder().field(\"*\").field(\"field\"))\n+ .execute().actionGet();\n+ assertSearchResponse(response);\n+ assertThat(response.getHits().getTotalHits(), equalTo(1L));\n+ assertThat(response.getHits().getAt(0).getHighlightFields().size(), equalTo(1));\n+ assertThat(response.getHits().getAt(0).getHighlightFields().get(\"field\").fragments()[0].string(), equalTo(\"Every day is \" +\n+ \"<em>exactly</em> <em>the</em> <em>same</em>\"));\n+ }\n+\n public void testExternalValues() throws Exception {\n prepareCreate(\"test-idx\").addMapping(\"type\",\n XContentFactory.jsonBuilder().startObject().startObject(\"type\")", "filename": "core/src/test/java/org/elasticsearch/index/mapper/externalvalues/ExternalValuesMapperIntegrationIT.java", "status": "modified" }, { "diff": "@@ -0,0 +1,193 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper.externalvalues;\n+\n+import org.apache.lucene.document.Field;\n+import org.apache.lucene.document.SortedSetDocValuesField;\n+import org.apache.lucene.index.IndexOptions;\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.MultiTermQuery;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.RegexpQuery;\n+import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.index.mapper.FieldMapper;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.Mapper;\n+import org.elasticsearch.index.mapper.MapperParsingException;\n+import org.elasticsearch.index.mapper.ParseContext;\n+import org.elasticsearch.index.mapper.core.StringFieldMapper;\n+import org.elasticsearch.index.query.QueryShardContext;\n+\n+import java.io.IOException;\n+import java.util.List;\n+import java.util.Map;\n+\n+import static org.elasticsearch.index.mapper.core.TypeParsers.parseTextField;\n+\n+// Like a String mapper but with very few options. 
We just use it to test if highlighting on a custom string mapped field works as expected.\n+public class FakeStringFieldMapper extends FieldMapper {\n+\n+ public static final String CONTENT_TYPE = \"fake_string\";\n+\n+ public static class Defaults {\n+\n+ public static final MappedFieldType FIELD_TYPE = new FakeStringFieldType();\n+\n+ static {\n+ FIELD_TYPE.freeze();\n+ }\n+ }\n+\n+ public static class Builder extends FieldMapper.Builder<Builder, FakeStringFieldMapper> {\n+\n+ public Builder(String name) {\n+ super(name, Defaults.FIELD_TYPE, Defaults.FIELD_TYPE);\n+ builder = this;\n+ }\n+\n+ @Override\n+ public FakeStringFieldType fieldType() {\n+ return (FakeStringFieldType) super.fieldType();\n+ }\n+\n+ @Override\n+ public FakeStringFieldMapper build(BuilderContext context) {\n+ setupFieldType(context);\n+ return new FakeStringFieldMapper(\n+ name, fieldType(), defaultFieldType,\n+ context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo);\n+ }\n+ }\n+\n+ public static class TypeParser implements Mapper.TypeParser {\n+\n+ public TypeParser() {\n+ }\n+\n+ @Override\n+ public Mapper.Builder parse(String fieldName, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n+ FakeStringFieldMapper.Builder builder = new FakeStringFieldMapper.Builder(fieldName);\n+ parseTextField(builder, fieldName, node, parserContext);\n+ return builder;\n+ }\n+ }\n+\n+ public static final class FakeStringFieldType extends MappedFieldType {\n+\n+\n+ public FakeStringFieldType() {\n+ }\n+\n+ protected FakeStringFieldType(FakeStringFieldType ref) {\n+ super(ref);\n+ }\n+\n+ public FakeStringFieldType clone() {\n+ return new FakeStringFieldType(this);\n+ }\n+\n+ @Override\n+ public String typeName() {\n+ return CONTENT_TYPE;\n+ }\n+\n+ @Override\n+ public Query nullValueQuery() {\n+ if (nullValue() == null) {\n+ return null;\n+ }\n+ return termQuery(nullValue(), null);\n+ }\n+\n+ @Override\n+ public Query regexpQuery(String value, int flags, int maxDeterminizedStates, @Nullable MultiTermQuery.RewriteMethod method,\n+ @Nullable QueryShardContext context) {\n+ RegexpQuery query = new RegexpQuery(new Term(name(), indexedValueForSearch(value)), flags, maxDeterminizedStates);\n+ if (method != null) {\n+ query.setRewriteMethod(method);\n+ }\n+ return query;\n+ }\n+ }\n+\n+ protected FakeStringFieldMapper(String simpleName, FakeStringFieldType fieldType, MappedFieldType defaultFieldType,\n+ Settings indexSettings, MultiFields multiFields, CopyTo copyTo) {\n+ super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo);\n+ }\n+\n+ @Override\n+ protected StringFieldMapper clone() {\n+ return (StringFieldMapper) super.clone();\n+ }\n+\n+ @Override\n+ protected boolean customBoost() {\n+ return true;\n+ }\n+\n+ @Override\n+ protected void parseCreateField(ParseContext context, List<Field> fields) throws IOException {\n+ StringFieldMapper.ValueAndBoost valueAndBoost = parseCreateFieldForString(context, fieldType().boost());\n+ if (valueAndBoost.value() == null) {\n+ return;\n+ }\n+ if (fieldType().indexOptions() != IndexOptions.NONE || fieldType().stored()) {\n+ Field field = new Field(fieldType().name(), valueAndBoost.value(), fieldType());\n+ fields.add(field);\n+ }\n+ if (fieldType().hasDocValues()) {\n+ fields.add(new SortedSetDocValuesField(fieldType().name(), new BytesRef(valueAndBoost.value())));\n+ }\n+ }\n+\n+ public static StringFieldMapper.ValueAndBoost parseCreateFieldForString(ParseContext context, float defaultBoost) throws IOException {\n+ 
if (context.externalValueSet()) {\n+ return new StringFieldMapper.ValueAndBoost(context.externalValue().toString(), defaultBoost);\n+ }\n+ XContentParser parser = context.parser();\n+ return new StringFieldMapper.ValueAndBoost(parser.textOrNull(), defaultBoost);\n+ }\n+\n+ @Override\n+ protected String contentType() {\n+ return CONTENT_TYPE;\n+ }\n+\n+ @Override\n+ protected void doMerge(Mapper mergeWith, boolean updateAllTypes) {\n+ super.doMerge(mergeWith, updateAllTypes);\n+ }\n+\n+ @Override\n+ public FakeStringFieldType fieldType() {\n+ return (FakeStringFieldType) super.fieldType();\n+ }\n+\n+ @Override\n+ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, Params params) throws IOException {\n+ super.doXContentBody(builder, includeDefaults, params);\n+ doXContentAnalyzers(builder, includeDefaults);\n+ }\n+\n+}", "filename": "core/src/test/java/org/elasticsearch/index/mapper/externalvalues/FakeStringFieldMapper.java", "status": "added" }, { "diff": "@@ -19,10 +19,11 @@\n package org.elasticsearch.search.highlight;\n \n import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n-\n+import org.elasticsearch.Version;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.settings.Settings.Builder;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -36,15 +37,18 @@\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.index.search.MatchQuery;\n import org.elasticsearch.index.search.MatchQuery.Type;\n+import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n import org.elasticsearch.search.highlight.HighlightBuilder.Field;\n import org.elasticsearch.test.ESIntegTestCase;\n+import org.elasticsearch.test.InternalSettingsPlugin;\n import org.hamcrest.Matcher;\n import org.hamcrest.Matchers;\n \n import java.io.IOException;\n+import java.util.Collection;\n import java.util.HashMap;\n import java.util.Map;\n \n@@ -85,6 +89,12 @@\n import static org.hamcrest.Matchers.startsWith;\n \n public class HighlighterSearchIT extends ESIntegTestCase {\n+\n+ @Override\n+ protected Collection<Class<? 
extends Plugin>> nodePlugins() {\n+ return pluginList(InternalSettingsPlugin.class);\n+ }\n+\n public void testHighlightingWithWildcardName() throws IOException {\n // test the kibana case with * as fieldname that will try highlight all fields including meta fields\n XContentBuilder mappings = jsonBuilder();\n@@ -2542,4 +2552,90 @@ private void phraseBoostTestCase(String highlighterType) {\n response = search.setQuery(boostingQuery(phrase, terms).boost(1).negativeBoost(1/boost)).get();\n assertHighlight(response, 0, \"field1\", 0, 1, highlightedMatcher);\n }\n+\n+ public void testGeoFieldHighlighting() throws IOException {\n+ // check that we do not get an exception for geo_point fields in case someone tries to highlight\n+ // it accidential with a wildcard\n+ // see https://github.com/elastic/elasticsearch/issues/17537\n+ XContentBuilder mappings = jsonBuilder();\n+ mappings.startObject();\n+ mappings.startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"geo_point\")\n+ .field(\"type\", \"geo_point\")\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ mappings.endObject();\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"type\", mappings));\n+ ensureYellow();\n+\n+ client().prepareIndex(\"test\", \"type\", \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"geo_point\", \"60.12,100.34\").endObject())\n+ .get();\n+ refresh();\n+ SearchResponse search = client().prepareSearch().setSource(\n+ new SearchSourceBuilder().query(QueryBuilders.geoBoundingBoxQuery(\"geo_point\").setCorners(61.10078883158897, -170.15625,\n+ -64.92354174306496, 118.47656249999999)).highlighter(new HighlightBuilder().field(\"*\"))).get();\n+ assertNoFailures(search);\n+ assertThat(search.getHits().totalHits(), equalTo(1L));\n+ }\n+\n+ public void testKeywordFieldHighlighting() throws IOException {\n+ // check that keyword highlighting works\n+ XContentBuilder mappings = jsonBuilder();\n+ mappings.startObject();\n+ mappings.startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"keyword_field\")\n+ .field(\"type\", \"keyword\")\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ mappings.endObject();\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"type\", mappings));\n+ ensureYellow();\n+\n+ client().prepareIndex(\"test\", \"type\", \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"keyword_field\", \"some text\").endObject())\n+ .get();\n+ refresh();\n+ SearchResponse search = client().prepareSearch().setSource(\n+ new SearchSourceBuilder().query(QueryBuilders.matchQuery(\"keyword_field\", \"some text\")).highlighter(new HighlightBuilder().field(\"*\")))\n+ .get();\n+ assertNoFailures(search);\n+ assertThat(search.getHits().totalHits(), equalTo(1L));\n+ assertThat(search.getHits().getAt(0).getHighlightFields().get(\"keyword_field\").getFragments()[0].string(), equalTo(\"<em>some text</em>\"));\n+ }\n+\n+ public void testStringFieldHighlighting() throws IOException {\n+ // check that string field highlighting on old indexes works\n+ XContentBuilder mappings = jsonBuilder();\n+ mappings.startObject();\n+ mappings.startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"string_field\")\n+ .field(\"type\", \"string\")\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ mappings.endObject();\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"type\", mappings)\n+ .setSettings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_3_2)));\n+ ensureYellow();\n+\n+ client().prepareIndex(\"test\", \"type\", \"1\")\n+ 
.setSource(jsonBuilder().startObject().field(\"string_field\", \"some text\").endObject())\n+ .get();\n+ refresh();\n+ SearchResponse search = client().prepareSearch().setSource(\n+ new SearchSourceBuilder().query(QueryBuilders.matchQuery(\"string_field\", \"some text\")).highlighter(new HighlightBuilder().field(\"*\")))\n+ .get();\n+ assertNoFailures(search);\n+ assertThat(search.getHits().totalHits(), equalTo(1L));\n+ assertThat(search.getHits().getAt(0).getHighlightFields().get(\"string_field\").getFragments()[0].string(), equalTo(\"<em>some</em> <em>text</em>\"));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/highlight/HighlighterSearchIT.java", "status": "modified" }, { "diff": "@@ -35,7 +35,10 @@ be used for highlighting if it mapped to have `store` set to `true`.\n ==================================\n \n The field name supports wildcard notation. For example, using `comment_*`\n-will cause all fields that match the expression to be highlighted.\n+will cause all <<text,text>> and <<keyword,keyword>> fields (and <<string,string>>\n+from versions before 5.0) that match the expression to be highlighted.\n+Note that all other fields will not be highlighted. If you use a custom mapper and want to\n+highlight on a field anyway, you have to provide the field name explicitly.\n \n [[plain-highlighter]]\n ==== Plain highlighter", "filename": "docs/reference/search/request/highlighting.asciidoc", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 5.0.0-alpha2\n**Description of the problem including expected versus actual behavior**: java.util.ConcurrentModificationException could be sent by ingest service\n**Provide logs (if relevant)**:\n\n```\n[2016-05-04 09:53:17,737][WARN ][cluster.service ] [Airstrike] failed to notify ClusterStateListener\njava.util.ConcurrentModificationException\n at java.util.HashMap$HashIterator.nextNode(HashMap.java:1429)\n at java.util.HashMap$KeyIterator.next(HashMap.java:1453)\n at org.elasticsearch.ingest.PipelineExecutionService.updatePipelineStats(PipelineExecutionService.java:127)\n at org.elasticsearch.ingest.PipelineExecutionService.clusterChanged(PipelineExecutionService.java:120)\n at org.elasticsearch.cluster.service.ClusterService.runTasksForExecutor(ClusterService.java:652)\n at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:814)\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:392)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:237)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:200)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n```\n\nAccording to @martijnvg:\n\n> removing an element from a map while iterating its keys causes this expection\n> instead an iterator should be used\n", "comments": [], "number": 18126, "title": "java.util.ConcurrentModificationException could be sent by ingest service" }
{ "body": "Due to trying to modify a map while iterating it, a concurrent modification\nin the pipeline stats could be thrown. This uses an iterator to prevent this.\n\nCloses #18126\n", "number": 18177, "review_comments": [], "title": "Pipeline Stats: Fix concurrent modification exception" }
{ "commits": [ { "message": "Pipeline Stats: Fix concurrent modification exception\n\nDue to trying to modify a map while iterating it, a concurrent modification\nin the pipeline stats could be thrown. This uses an iterator to prevent this.\n\nCloses #18126" } ], "files": [ { "diff": "@@ -35,6 +35,7 @@\n \n import java.util.Collections;\n import java.util.HashMap;\n+import java.util.Iterator;\n import java.util.Map;\n import java.util.Optional;\n import java.util.concurrent.TimeUnit;\n@@ -124,9 +125,11 @@ public void clusterChanged(ClusterChangedEvent event) {\n void updatePipelineStats(IngestMetadata ingestMetadata) {\n boolean changed = false;\n Map<String, StatsHolder> newStatsPerPipeline = new HashMap<>(statsHolderPerPipeline);\n- for (String pipeline : newStatsPerPipeline.keySet()) {\n+ Iterator<String> iterator = newStatsPerPipeline.keySet().iterator();\n+ while (iterator.hasNext()) {\n+ String pipeline = iterator.next();\n if (ingestMetadata.getPipelines().containsKey(pipeline) == false) {\n- newStatsPerPipeline.remove(pipeline);\n+ iterator.remove();\n changed = true;\n }\n }", "filename": "core/src/main/java/org/elasticsearch/ingest/PipelineExecutionService.java", "status": "modified" }, { "diff": "@@ -46,11 +46,13 @@\n import java.util.function.Consumer;\n \n import static org.hamcrest.Matchers.equalTo;\n-import static org.mockito.Matchers.anyBoolean;\n-import static org.mockito.Matchers.eq;\n+import static org.hamcrest.Matchers.hasKey;\n+import static org.hamcrest.Matchers.not;\n import static org.mockito.Matchers.any;\n+import static org.mockito.Matchers.anyBoolean;\n import static org.mockito.Matchers.anyString;\n import static org.mockito.Matchers.argThat;\n+import static org.mockito.Matchers.eq;\n import static org.mockito.Mockito.doAnswer;\n import static org.mockito.Mockito.doThrow;\n import static org.mockito.Mockito.mock;\n@@ -380,6 +382,22 @@ public void testStats() throws Exception {\n assertThat(ingestStats.getTotalStats().getIngestCount(), equalTo(2L));\n }\n \n+ // issue: https://github.com/elastic/elasticsearch/issues/18126\n+ public void testUpdatingStatsWhenRemovingPipelineWorks() throws Exception {\n+ Map<String, PipelineConfiguration> configurationMap = new HashMap<>();\n+ configurationMap.put(\"_id1\", new PipelineConfiguration(\"_id1\", new BytesArray(\"{}\")));\n+ configurationMap.put(\"_id2\", new PipelineConfiguration(\"_id2\", new BytesArray(\"{}\")));\n+ executionService.updatePipelineStats(new IngestMetadata(configurationMap));\n+ assertThat(executionService.stats().getStatsPerPipeline(), hasKey(\"_id1\"));\n+ assertThat(executionService.stats().getStatsPerPipeline(), hasKey(\"_id2\"));\n+\n+ configurationMap = new HashMap<>();\n+ configurationMap.put(\"_id3\", new PipelineConfiguration(\"_id3\", new BytesArray(\"{}\")));\n+ executionService.updatePipelineStats(new IngestMetadata(configurationMap));\n+ assertThat(executionService.stats().getStatsPerPipeline(), not(hasKey(\"_id1\")));\n+ assertThat(executionService.stats().getStatsPerPipeline(), not(hasKey(\"_id2\")));\n+ }\n+\n private IngestDocument eqID(String index, String type, String id, Map<String, Object> source) {\n return argThat(new IngestDocumentMatcher(index, type, id, source));\n }", "filename": "core/src/test/java/org/elasticsearch/ingest/PipelineExecutionServiceTests.java", "status": "modified" } ] }
{ "body": "The RPM package for Elasticsearch is declaring `/etc/sysconfig` and `/var/run`. It does not need to because there are already going to be in place on any system where the RPM is relevant, and it can conflict with packages like the `filesystem` package for Amazon Linux that build the filesystem hierarchy there.\n", "comments": [], "number": 18162, "title": "RPM package declares /etc/sysconfig and /var/run" }
{ "body": "With this change, the contents of the rpm are now this:\n\n```\n/etc/elasticsearch\n/etc/elasticsearch/elasticsearch.yml\n/etc/elasticsearch/jvm.options\n/etc/elasticsearch/logging.yml\n/etc/elasticsearch/scripts\n/etc/init.d/elasticsearch\n/etc/sysconfig/elasticsearch\n/usr/lib/sysctl.d/elasticsearch.conf\n/usr/lib/systemd/system/elasticsearch.service\n/usr/lib/tmpfiles.d/elasticsearch.conf\n/usr/share/elasticsearch/LICENSE.txt\n/usr/share/elasticsearch/NOTICE.txt\n/usr/share/elasticsearch/README.textile\n/usr/share/elasticsearch/bin/elasticsearch\n/usr/share/elasticsearch/bin/elasticsearch-plugin\n/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec\n/usr/share/elasticsearch/bin/elasticsearch.in.sh\n/usr/share/elasticsearch/lib/HdrHistogram-2.1.6.jar\n/usr/share/elasticsearch/lib/apache-log4j-extras-1.2.17.jar\n/usr/share/elasticsearch/lib/elasticsearch-5.0.0-alpha2-SNAPSHOT.jar\n/usr/share/elasticsearch/lib/hppc-0.7.1.jar\n/usr/share/elasticsearch/lib/jackson-core-2.7.1.jar\n/usr/share/elasticsearch/lib/jackson-dataformat-cbor-2.7.1.jar\n/usr/share/elasticsearch/lib/jackson-dataformat-smile-2.7.1.jar\n/usr/share/elasticsearch/lib/jackson-dataformat-yaml-2.7.1.jar\n/usr/share/elasticsearch/lib/jna-4.1.0.jar\n/usr/share/elasticsearch/lib/joda-convert-1.2.jar\n/usr/share/elasticsearch/lib/joda-time-2.8.2.jar\n/usr/share/elasticsearch/lib/jopt-simple-4.9.jar\n/usr/share/elasticsearch/lib/jts-1.13.jar\n/usr/share/elasticsearch/lib/log4j-1.2.17.jar\n/usr/share/elasticsearch/lib/lucene-analyzers-common-6.0.0.jar\n/usr/share/elasticsearch/lib/lucene-backward-codecs-6.0.0.jar\n/usr/share/elasticsearch/lib/lucene-core-6.0.0.jar\n/usr/share/elasticsearch/lib/lucene-grouping-6.0.0.jar\n/usr/share/elasticsearch/lib/lucene-highlighter-6.0.0.jar\n/usr/share/elasticsearch/lib/lucene-join-6.0.0.jar\n/usr/share/elasticsearch/lib/lucene-memory-6.0.0.jar\n/usr/share/elasticsearch/lib/lucene-misc-6.0.0.jar\n/usr/share/elasticsearch/lib/lucene-queries-6.0.0.jar\n/usr/share/elasticsearch/lib/lucene-queryparser-6.0.0.jar\n/usr/share/elasticsearch/lib/lucene-sandbox-6.0.0.jar\n/usr/share/elasticsearch/lib/lucene-spatial-6.0.0.jar\n/usr/share/elasticsearch/lib/lucene-spatial-extras-6.0.0.jar\n/usr/share/elasticsearch/lib/lucene-spatial3d-6.0.0.jar\n/usr/share/elasticsearch/lib/lucene-suggest-6.0.0.jar\n/usr/share/elasticsearch/lib/netty-3.10.5.Final.jar\n/usr/share/elasticsearch/lib/securesm-1.0.jar\n/usr/share/elasticsearch/lib/spatial4j-0.6.jar\n/usr/share/elasticsearch/lib/t-digest-3.0.jar\n/usr/share/elasticsearch/modules/ingest-grok/ingest-grok-5.0.0-alpha2-SNAPSHOT.jar\n/usr/share/elasticsearch/modules/ingest-grok/jcodings-1.0.12.jar\n/usr/share/elasticsearch/modules/ingest-grok/joni-2.1.6.jar\n/usr/share/elasticsearch/modules/ingest-grok/plugin-descriptor.properties\n/usr/share/elasticsearch/modules/lang-expression/antlr4-runtime-4.5.1-1.jar\n/usr/share/elasticsearch/modules/lang-expression/asm-5.0.4.jar\n/usr/share/elasticsearch/modules/lang-expression/asm-commons-5.0.4.jar\n/usr/share/elasticsearch/modules/lang-expression/asm-tree-5.0.4.jar\n/usr/share/elasticsearch/modules/lang-expression/lang-expression-5.0.0-alpha2-SNAPSHOT.jar\n/usr/share/elasticsearch/modules/lang-expression/lucene-expressions-6.0.0.jar\n/usr/share/elasticsearch/modules/lang-expression/plugin-descriptor.properties\n/usr/share/elasticsearch/modules/lang-expression/plugin-security.policy\n/usr/share/elasticsearch/modules/lang-groovy/groovy-2.4.6-indy.jar\n/usr/share/elasticsearch/modules/lang-groovy/lang-groovy-5.0.0
-alpha2-SNAPSHOT.jar\n/usr/share/elasticsearch/modules/lang-groovy/plugin-descriptor.properties\n/usr/share/elasticsearch/modules/lang-groovy/plugin-security.policy\n/usr/share/elasticsearch/modules/lang-mustache/compiler-0.9.1.jar\n/usr/share/elasticsearch/modules/lang-mustache/lang-mustache-5.0.0-alpha2-SNAPSHOT.jar\n/usr/share/elasticsearch/modules/lang-mustache/plugin-descriptor.properties\n/usr/share/elasticsearch/modules/lang-mustache/plugin-security.policy\n/usr/share/elasticsearch/modules/lang-painless/antlr4-runtime-4.5.1-1.jar\n/usr/share/elasticsearch/modules/lang-painless/asm-5.0.4.jar\n/usr/share/elasticsearch/modules/lang-painless/asm-commons-5.0.4.jar\n/usr/share/elasticsearch/modules/lang-painless/asm-tree-5.0.4.jar\n/usr/share/elasticsearch/modules/lang-painless/lang-painless-5.0.0-alpha2-SNAPSHOT.jar\n/usr/share/elasticsearch/modules/lang-painless/plugin-descriptor.properties\n/usr/share/elasticsearch/modules/lang-painless/plugin-security.policy\n/usr/share/elasticsearch/modules/reindex/plugin-descriptor.properties\n/usr/share/elasticsearch/modules/reindex/reindex-5.0.0-alpha2-SNAPSHOT.jar\n/usr/share/elasticsearch/plugins\n/var/lib/elasticsearch\n/var/log/elasticsearch\n/var/run/elasticsearch\n```\n\ncloses #18162\n", "number": 18170, "review_comments": [], "title": "Packaging: Make rpm not include parent dirs" }
{ "commits": [ { "message": "Packaging: Make rpm not include parent dirs\n\ncloses #18162" } ], "files": [ { "diff": "@@ -40,6 +40,7 @@ task buildRpm(type: Rpm) {\n vendor 'Elasticsearch'\n dirMode 0755\n fileMode 0644\n+ addParentDirs false\n // TODO ospackage doesn't support icon but we used to have one\n }\n ", "filename": "distribution/rpm/build.gradle", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 2.2.0.1 (yum repo)\n\n**JVM version**: openjdk version \"1.8.0_71\"\n\n**OS version**: CentOS 7.2\n\n**Description of the problem including expected versus actual behavior**:\n\nThe elasticsearch plugin install script is not honouring the JAVA_OPTS environment variables anymore. this is needed to provide a http proxy for puppet runs.\n\n**Steps to reproduce**:\n1. export JAVA=\"-Dhttp.proxyHost=contentproxy.example.com -Dhttp.proxyPort=3128 -Dhttps.proxyHost=contentproxy.example.com -Dhttps.proxyPort=3128\"\n2. /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf\n3. -> timeout\n\nplease change the last line of the plugin script from:\n\n```\neval \"$JAVA\" -client -Delasticsearch -Des.path.home=\"\\\"$ES_HOME\\\"\" $properties -cp \"\\\"$ES_HOME/lib/*\\\"\" org.elasticsearch.plugins.PluginManagerCliParser $arg\n```\n\nto\n\n```\neval \"$JAVA\" $JAVA_OPTS -client -Delasticsearch -Des.path.home=\"\\\"$ES_HOME\\\"\" $properties -cp \"\\\"$ES_HOME/lib/*\\\"\" org.elasticsearch.plugins.PluginManagerCliParser $arg\n```\n", "comments": [ { "body": "Relates #17121. We will support `ES_JAVA_OPTS` for this script.\n", "created_at": "2016-03-15T20:19:24Z" } ], "number": 16790, "title": "plugin install script not honouring JAVA_OPTS anymore" }
{ "body": "This commit adds support for ES_JAVA_OPTS to the elasticsearch-plugin\nscript.\n\nCloses #16790\n", "number": 18140, "review_comments": [], "title": "Pass ES_JAVA_OPTS to JVM for plugins script" }
{ "commits": [ { "message": "Pass ES_JAVA_OPTS to JVM for plugins script\n\nThis commit adds support for ES_JAVA_OPTS to the elasticsearch-plugin\nscript." } ], "files": [ { "diff": "@@ -110,4 +110,4 @@ fi\n HOSTNAME=`hostname | cut -d. -f1`\n export HOSTNAME\n \n-eval \"\\\"$JAVA\\\"\" -client -Delasticsearch -Des.path.home=\"\\\"$ES_HOME\\\"\" $properties -cp \"\\\"$ES_HOME/lib/*\\\"\" org.elasticsearch.plugins.PluginCli $args\n+eval \"\\\"$JAVA\\\"\" \"$ES_JAVA_OPTS\" -client -Delasticsearch -Des.path.home=\"\\\"$ES_HOME\\\"\" $properties -cp \"\\\"$ES_HOME/lib/*\\\"\" org.elasticsearch.plugins.PluginCli $args", "filename": "distribution/src/main/resources/bin/elasticsearch-plugin", "status": "modified" }, { "diff": "@@ -48,7 +48,7 @@ GOTO loop\n \n SET HOSTNAME=%COMPUTERNAME%\n \n-\"%JAVA_HOME%\\bin\\java\" -client -Des.path.home=\"%ES_HOME%\" !properties! -cp \"%ES_HOME%/lib/*;\" \"org.elasticsearch.plugins.PluginCli\" !args!\n+\"%JAVA_HOME%\\bin\\java\" %ES_JAVA_OPTS% -client -Des.path.home=\"%ES_HOME%\" !properties! -cp \"%ES_HOME%/lib/*;\" \"org.elasticsearch.plugins.PluginCli\" !args!\n goto finally\n \n ", "filename": "distribution/src/main/resources/bin/elasticsearch-plugin.bat", "status": "modified" }, { "diff": "@@ -476,3 +476,15 @@ fi\n # restore JAVA_HOME\n export JAVA_HOME=$java_home\n }\n+\n+@test \"[$GROUP] test ES_JAVA_OPTS\" {\n+ # preserve ES_JAVA_OPTS\n+ local es_java_opts=$ES_JAVA_OPTS\n+\n+ export ES_JAVA_OPTS=\"-XX:+PrintFlagsFinal\"\n+ # this will fail if ES_JAVA_OPTS is not passed through\n+ \"$ESHOME/bin/elasticsearch-plugin\" list | grep MaxHeapSize\n+\n+ # restore ES_JAVA_OPTS\n+ export ES_JAVA_OPTS=$es_java_opts\n+}", "filename": "qa/vagrant/src/test/resources/packaging/scripts/module_and_plugin_test_cases.bash", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 2.3.2\n\n**JVM version**: jdk1.8.0_60\n\n**OS version**: Linux RedHat Enterprise 7.2\n\n**Description of the problem including expected versus actual behavior**:\nWhen enable HTTP compression, CORS stops to work, and the elasticsearch present this error in the log file.\n\n**Steps to reproduce**:\n1. enable HTTP compression\n2. enable CORS\n3. try to use elasticsearch and error below heppens.\n\n**Provide logs (if relevant)**:\n[2016-04-27 15:57:16,312][WARN ][http.netty ] [node_1_data] Caught exception while handling client http traffic, closing connection [id: 0x246f9cb0, /172.30.XXX.XX:40855 => /172.30.XXX.XX:9200]\njava.lang.IllegalStateException: cannot send more responses than requests\nat org.jboss.netty.handler.codec.http.HttpContentEncoder.writeRequested(HttpContentEncoder.java:101)\nat org.jboss.netty.channel.SimpleChannelHandler.handleDownstream(SimpleChannelHandler.java:254)\nat org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)\nat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:784)\nat org.jboss.netty.channel.SimpleChannelHandler.writeRequested(SimpleChannelHandler.java:292)\nat org.jboss.netty.channel.SimpleChannelHandler.handleDownstream(SimpleChannelHandler.java:254)\nat org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.handleDownstream(HttpPipeliningHandler.java:105)\nat org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)\nat org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)\nat org.jboss.netty.channel.Channels.write(Channels.java:704)\nat org.jboss.netty.channel.Channels.write(Channels.java:671)\nat org.jboss.netty.channel.AbstractChannel.write(AbstractChannel.java:348)\nat org.elasticsearch.http.netty.cors.CorsHandler.handlePreflight(CorsHandler.java:123)\nat org.elasticsearch.http.netty.cors.CorsHandler.messageReceived(CorsHandler.java:80)\nat org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\nat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\nat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\nat org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)\nat org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\nat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\nat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\nat org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)\nat org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\nat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\nat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\nat org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)\nat org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)\nat org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)\nat 
org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)\nat org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\nat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\nat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\nat org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)\nat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\nat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)\nat org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)\nat org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)\nat org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)\nat org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)\nat org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)\nat org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)\nat org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)\nat org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)\nat org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)\nat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\nat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\nat java.lang.Thread.run(Thread.java:745)\n", "comments": [ { "body": "The issue already came up while I worked on #18066 and is fixed on 5.0. But we should really fix this for 2.x too (although without applying the mentioned PR thus avoiding to change also the HTTP compression defaults). The root cause is that the CORS handler is coming \"too early\" in the pipeline of `NettyHttpServerTransport`. The request must be processed by `ESHttpResponseEncoder` first.\n", "created_at": "2016-05-02T15:00:03Z" }, { "body": "Closed by #18101 (included in Elasticsearch 2.3.3 and Elasticsearch 2.4.0). Will be fixed in Elasticsearch 5.0 by #18066.\n", "created_at": "2016-05-03T06:42:52Z" } ], "number": 18089, "title": "CORS won't work currectly when HTTP compression is enabled" }
{ "body": "With this commit we fix an issue that prevented to turn on\nHTTP compression when using CORS. The problem boiled down\nto a problematic order of the respective handlers in\nNetty's pipeline.\n\nNote that the same problem is already fixed in ES 5.0 by #18066.\n\nRelates #18066\nFixes #18089\n", "number": 18101, "review_comments": [], "title": "Allow CORS requests to work with HTTP compression enabled" }
{ "commits": [ { "message": "Allow CORS requests to work with HTTP compression enabled\n\nWith this commit we fix an issue that prevented to turn on\nHTTP compression when using CORS. The problem boiled down\nto a problematic order of the respective handlers in\nNetty's pipeline.\n\nNote that the same problem is already fixed in ES 5.0 by\n\nRelates #18066\nFixes #18089" } ], "files": [ { "diff": "@@ -512,13 +512,13 @@ public ChannelPipeline getPipeline() throws Exception {\n httpChunkAggregator.setMaxCumulationBufferComponents(transport.maxCompositeBufferComponents);\n }\n pipeline.addLast(\"aggregator\", httpChunkAggregator);\n- if (transport.settings().getAsBoolean(SETTING_CORS_ENABLED, false)) {\n- pipeline.addLast(\"cors\", new CorsHandler(transport.getCorsConfig()));\n- }\n pipeline.addLast(\"encoder\", new ESHttpResponseEncoder());\n if (transport.compression) {\n pipeline.addLast(\"encoder_compress\", new HttpContentCompressor(transport.compressionLevel));\n }\n+ if (transport.settings().getAsBoolean(SETTING_CORS_ENABLED, false)) {\n+ pipeline.addLast(\"cors\", new CorsHandler(transport.getCorsConfig()));\n+ }\n if (transport.pipelining) {\n pipeline.addLast(\"pipelining\", new HttpPipeliningHandler(transport.pipeliningMaxEvents));\n }", "filename": "core/src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java", "status": "modified" }, { "diff": "@@ -30,6 +30,7 @@\n import static org.elasticsearch.http.netty.NettyHttpServerTransport.SETTING_CORS_ALLOW_CREDENTIALS;\n import static org.elasticsearch.http.netty.NettyHttpServerTransport.SETTING_CORS_ALLOW_METHODS;\n import static org.elasticsearch.http.netty.NettyHttpServerTransport.SETTING_CORS_ENABLED;\n+import static org.elasticsearch.http.netty.NettyHttpServerTransport.SETTING_HTTP_COMPRESSION;\n import static org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n import static org.elasticsearch.test.ESIntegTestCase.Scope;\n import static org.hamcrest.Matchers.*;\n@@ -52,6 +53,7 @@ protected Settings nodeSettings(int nodeOrdinal) {\n .put(SETTING_CORS_ALLOW_METHODS, \"get, options, post\")\n .put(SETTING_CORS_ENABLED, true)\n .put(Node.HTTP_ENABLED, true)\n+ .put(SETTING_HTTP_COMPRESSION, randomBoolean())\n .build();\n }\n ", "filename": "core/src/test/java/org/elasticsearch/rest/CorsRegexIT.java", "status": "modified" } ] }
{ "body": "```\nPUT t \n{\n \"mappings\": {\n \"t\": {\n \"properties\": {\n \"date\": {\n \"type\": \"date\"\n }\n }\n }\n }\n}\n\nPUT t/t/1\n{\n \"date\": null\n}\n```\n\nReturns:\n\n```\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"mapper_parsing_exception\",\n \"reason\": \"failed to parse [date]\"\n }\n ],\n \"type\": \"mapper_parsing_exception\",\n \"reason\": \"failed to parse [date]\",\n \"caused_by\": {\n \"type\": \"illegal_state_exception\",\n \"reason\": \"Can't get text on a VALUE_NULL at 2:11\"\n }\n },\n \"status\": 400\n}\n```\n", "comments": [ { "body": "Not reproduces in 5.0.0-alpha1 release.\n", "created_at": "2016-05-02T13:48:51Z" }, { "body": "@nikoncode indeed, this is related to a change that only made it to alpha2\n", "created_at": "2016-05-04T08:47:11Z" } ], "number": 18085, "title": "Can't set dates to null" }
{ "body": "This was mostly untested and had some bugs.\n\nCloses #18085\n", "number": 18090, "review_comments": [], "title": "Fix and test handling of `null_value`." }
{ "commits": [ { "message": "Fix and test handling of `null_value`. #18090\n\nThis was mostly untested and had some bugs.\n\nCloses #18085" } ], "files": [ { "diff": "@@ -36,7 +36,6 @@\n import org.elasticsearch.common.joda.DateMathParser;\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n-import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.Fuzziness;\n import org.elasticsearch.common.util.LocaleUtils;\n@@ -152,7 +151,7 @@ public Mapper.Builder<?,?> parse(String name, Map<String, Object> node, ParserCo\n if (propNode == null) {\n throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n }\n- builder.nullValue(InetAddresses.forString(propNode.toString()));\n+ builder.nullValue(propNode.toString());\n iterator.remove();\n } else if (propName.equals(\"ignore_malformed\")) {\n builder.ignoreMalformed(TypeParsers.nodeBooleanValue(\"ignore_malformed\", propNode, parserContext));\n@@ -561,7 +560,7 @@ protected void parseCreateField(ParseContext context, List<Field> fields) throws\n dateAsString = dateAsObject.toString();\n }\n } else {\n- dateAsString = context.parser().text();\n+ dateAsString = context.parser().textOrNull();\n }\n \n if (dateAsString == null) {\n@@ -615,6 +614,11 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults,\n if (includeDefaults || ignoreMalformed.explicit()) {\n builder.field(\"ignore_malformed\", ignoreMalformed.value());\n }\n+\n+ if (includeDefaults || fieldType().nullValue() != null) {\n+ builder.field(\"null_value\", fieldType().nullValueAsString());\n+ }\n+\n if (includeInAll != null) {\n builder.field(\"include_in_all\", includeInAll);\n } else if (includeDefaults) {", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java", "status": "modified" }, { "diff": "@@ -366,8 +366,15 @@ FieldStats.Double stats(IndexReader reader, String fieldName,\n BYTE(\"byte\", NumericType.BYTE) {\n @Override\n Byte parse(Object value) {\n- if (value instanceof Byte) {\n- return (Byte) value;\n+ if (value instanceof Number) {\n+ double doubleValue = ((Number) value).doubleValue();\n+ if (doubleValue < Byte.MIN_VALUE || doubleValue > Byte.MAX_VALUE) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] is out of range for a byte\");\n+ }\n+ if (doubleValue % 1 != 0) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] has a decimal part\");\n+ }\n+ return ((Number) value).byteValue();\n }\n if (value instanceof BytesRef) {\n value = ((BytesRef) value).utf8ToString();\n@@ -426,6 +433,13 @@ Number valueForSearch(Number value) {\n @Override\n Short parse(Object value) {\n if (value instanceof Number) {\n+ double doubleValue = ((Number) value).doubleValue();\n+ if (doubleValue < Short.MIN_VALUE || doubleValue > Short.MAX_VALUE) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] is out of range for a short\");\n+ }\n+ if (doubleValue % 1 != 0) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] has a decimal part\");\n+ }\n return ((Number) value).shortValue();\n }\n if (value instanceof BytesRef) {\n@@ -485,6 +499,13 @@ Number valueForSearch(Number value) {\n @Override\n Integer parse(Object value) {\n if (value instanceof Number) {\n+ double doubleValue = ((Number) value).doubleValue();\n+ if (doubleValue < Integer.MIN_VALUE || doubleValue > Integer.MAX_VALUE) {\n+ throw new 
IllegalArgumentException(\"Value [\" + value + \"] is out of range for an integer\");\n+ }\n+ if (doubleValue % 1 != 0) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] has a decimal part\");\n+ }\n return ((Number) value).intValue();\n }\n if (value instanceof BytesRef) {\n@@ -581,6 +602,13 @@ FieldStats.Long stats(IndexReader reader, String fieldName,\n @Override\n Long parse(Object value) {\n if (value instanceof Number) {\n+ double doubleValue = ((Number) value).doubleValue();\n+ if (doubleValue < Long.MIN_VALUE || doubleValue > Long.MAX_VALUE) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] is out of range for a long\");\n+ }\n+ if (doubleValue % 1 != 0) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] has a decimal part\");\n+ }\n return ((Number) value).longValue();\n }\n if (value instanceof BytesRef) {\n@@ -944,6 +972,11 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults,\n if (includeDefaults || coerce.explicit()) {\n builder.field(\"coerce\", coerce.value());\n }\n+\n+ if (includeDefaults || fieldType().nullValue() != null) {\n+ builder.field(\"null_value\", fieldType().nullValue());\n+ }\n+\n if (includeInAll != null) {\n builder.field(\"include_in_all\", includeInAll);\n } else if (includeDefaults) {", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/NumberFieldMapper.java", "status": "modified" }, { "diff": "@@ -339,7 +339,7 @@ protected void parseCreateField(ParseContext context, List<Field> fields) throws\n if (context.externalValueSet()) {\n addressAsObject = context.externalValue();\n } else {\n- addressAsObject = context.parser().text();\n+ addressAsObject = context.parser().textOrNull();\n }\n \n if (addressAsObject == null) {\n@@ -395,6 +395,10 @@ protected void doMerge(Mapper mergeWith, boolean updateAllTypes) {\n protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, Params params) throws IOException {\n super.doXContentBody(builder, includeDefaults, params);\n \n+ if (includeDefaults || fieldType().nullValue() != null) {\n+ builder.field(\"null_value\", InetAddresses.toAddrString((InetAddress) fieldType().nullValue()));\n+ }\n+\n if (includeDefaults || ignoreMalformed.explicit()) {\n builder.field(\"ignore_malformed\", ignoreMalformed.value());\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/ip/IpFieldMapper.java", "status": "modified" }, { "diff": "@@ -251,4 +251,55 @@ public void testChangeLocale() throws IOException {\n .endObject()\n .bytes());\n }\n+\n+ public void testNullValue() throws IOException {\n+ String mapping = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"field\")\n+ .field(\"type\", \"date\")\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper mapper = parser.parse(\"type\", new CompressedXContent(mapping));\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+\n+ ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .nullField(\"field\")\n+ .endObject()\n+ .bytes());\n+ assertArrayEquals(new IndexableField[0], doc.rootDoc().getFields(\"field\"));\n+\n+ mapping = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"field\")\n+ .field(\"type\", \"date\")\n+ .field(\"null_value\", \"2016-03-11\")\n+ .endObject()\n+ .endObject()\n+ 
.endObject().endObject().string();\n+\n+ mapper = parser.parse(\"type\", new CompressedXContent(mapping));\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+\n+ doc = mapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .nullField(\"field\")\n+ .endObject()\n+ .bytes());\n+ IndexableField[] fields = doc.rootDoc().getFields(\"field\");\n+ assertEquals(2, fields.length);\n+ IndexableField pointField = fields[0];\n+ assertEquals(1, pointField.fieldType().pointDimensionCount());\n+ assertEquals(8, pointField.fieldType().pointNumBytes());\n+ assertFalse(pointField.fieldType().stored());\n+ assertEquals(1457654400000L, pointField.numericValue().longValue());\n+ IndexableField dvField = fields[1];\n+ assertEquals(DocValuesType.SORTED_NUMERIC, dvField.fieldType().docValuesType());\n+ assertEquals(1457654400000L, dvField.numericValue().longValue());\n+ assertFalse(dvField.fieldType().stored());\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/DateFieldMapperTests.java", "status": "modified" }, { "diff": "@@ -126,14 +126,28 @@ public void testIgnoreAbove() throws IOException {\n \n public void testNullValue() throws IOException {\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\").startObject(\"field\").field(\"type\", \"keyword\").field(\"null_value\", \"uri\").endObject().endObject()\n+ .startObject(\"properties\").startObject(\"field\").field(\"type\", \"keyword\").endObject().endObject()\n .endObject().endObject().string();\n \n DocumentMapper mapper = parser.parse(\"type\", new CompressedXContent(mapping));\n-\n assertEquals(mapping, mapper.mappingSource().toString());\n \n ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .nullField(\"field\")\n+ .endObject()\n+ .bytes());\n+ assertArrayEquals(new IndexableField[0], doc.rootDoc().getFields(\"field\"));\n+\n+ mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"field\").field(\"type\", \"keyword\").field(\"null_value\", \"uri\").endObject().endObject()\n+ .endObject().endObject().string();\n+\n+ mapper = parser.parse(\"type\", new CompressedXContent(mapping));\n+\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+\n+ doc = mapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n .startObject()\n .endObject()\n .bytes());", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/KeywordFieldMapperTests.java", "status": "modified" }, { "diff": "@@ -316,4 +316,65 @@ public void testRejectNorms() throws IOException {\n assertThat(e.getMessage(), containsString(\"Mapping definition for [foo] has unsupported parameters: [norms\"));\n }\n }\n+\n+ public void testNullValue() throws IOException {\n+ for (String type : TYPES) {\n+ doTestNullValue(type);\n+ }\n+ }\n+\n+ private void doTestNullValue(String type) throws IOException {\n+ String mapping = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"field\")\n+ .field(\"type\", type)\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper mapper = parser.parse(\"type\", new CompressedXContent(mapping));\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+\n+ ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ 
.nullField(\"field\")\n+ .endObject()\n+ .bytes());\n+ assertArrayEquals(new IndexableField[0], doc.rootDoc().getFields(\"field\"));\n+\n+ Object missing;\n+ if (Arrays.asList(\"float\", \"double\").contains(type)) {\n+ missing = 123d;\n+ } else {\n+ missing = 123L;\n+ }\n+ mapping = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"field\")\n+ .field(\"type\", type)\n+ .field(\"null_value\", missing)\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ mapper = parser.parse(\"type\", new CompressedXContent(mapping));\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+\n+ doc = mapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .nullField(\"field\")\n+ .endObject()\n+ .bytes());\n+ IndexableField[] fields = doc.rootDoc().getFields(\"field\");\n+ assertEquals(2, fields.length);\n+ IndexableField pointField = fields[0];\n+ assertEquals(1, pointField.fieldType().pointDimensionCount());\n+ assertFalse(pointField.fieldType().stored());\n+ assertEquals(123, pointField.numericValue().doubleValue(), 0d);\n+ IndexableField dvField = fields[1];\n+ assertEquals(DocValuesType.SORTED_NUMERIC, dvField.fieldType().docValuesType());\n+ assertFalse(dvField.fieldType().stored());\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/NumberFieldMapperTests.java", "status": "modified" }, { "diff": "@@ -75,4 +75,35 @@ public void testRangeQuery() {\n () -> ft.rangeQuery(\"1\", \"3\", true, true));\n assertEquals(\"Cannot search on field [field] since it is not indexed.\", e.getMessage());\n }\n+\n+ public void testConversions() {\n+ assertEquals((byte) 3, NumberType.BYTE.parse(3d));\n+ assertEquals((short) 3, NumberType.SHORT.parse(3d));\n+ assertEquals(3, NumberType.INTEGER.parse(3d));\n+ assertEquals(3L, NumberType.LONG.parse(3d));\n+ assertEquals(3f, NumberType.FLOAT.parse(3d));\n+ assertEquals(3d, NumberType.DOUBLE.parse(3d));\n+\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> NumberType.BYTE.parse(3.5));\n+ assertEquals(\"Value [3.5] has a decimal part\", e.getMessage());\n+ e = expectThrows(IllegalArgumentException.class, () -> NumberType.SHORT.parse(3.5));\n+ assertEquals(\"Value [3.5] has a decimal part\", e.getMessage());\n+ e = expectThrows(IllegalArgumentException.class, () -> NumberType.INTEGER.parse(3.5));\n+ assertEquals(\"Value [3.5] has a decimal part\", e.getMessage());\n+ e = expectThrows(IllegalArgumentException.class, () -> NumberType.LONG.parse(3.5));\n+ assertEquals(\"Value [3.5] has a decimal part\", e.getMessage());\n+ assertEquals(3.5f, NumberType.FLOAT.parse(3.5));\n+ assertEquals(3.5d, NumberType.DOUBLE.parse(3.5));\n+\n+ e = expectThrows(IllegalArgumentException.class, () -> NumberType.BYTE.parse(128));\n+ assertEquals(\"Value [128] is out of range for a byte\", e.getMessage());\n+ e = expectThrows(IllegalArgumentException.class, () -> NumberType.SHORT.parse(65536));\n+ assertEquals(\"Value [65536] is out of range for a short\", e.getMessage());\n+ e = expectThrows(IllegalArgumentException.class, () -> NumberType.INTEGER.parse(2147483648L));\n+ assertEquals(\"Value [2147483648] is out of range for an integer\", e.getMessage());\n+ e = expectThrows(IllegalArgumentException.class, () -> NumberType.LONG.parse(10000000000000000000d));\n+ assertEquals(\"Value [1.0E19] is out of range for a long\", e.getMessage());\n+ assertEquals(1.1f, NumberType.FLOAT.parse(1.1)); // accuracy loss is 
expected\n+ assertEquals(1.1d, NumberType.DOUBLE.parse(1.1));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/NumberFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -36,6 +36,7 @@\n \n import static org.hamcrest.Matchers.containsString;\n \n+import java.io.IOException;\n import java.net.InetAddress;\n \n public class IpFieldMapperTests extends ESSingleNodeTestCase {\n@@ -217,4 +218,55 @@ public void testIncludeInAll() throws Exception {\n fields = doc.rootDoc().getFields(\"_all\");\n assertEquals(0, fields.length);\n }\n+\n+ public void testNullValue() throws IOException {\n+ String mapping = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"field\")\n+ .field(\"type\", \"ip\")\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper mapper = parser.parse(\"type\", new CompressedXContent(mapping));\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+\n+ ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .nullField(\"field\")\n+ .endObject()\n+ .bytes());\n+ assertArrayEquals(new IndexableField[0], doc.rootDoc().getFields(\"field\"));\n+\n+ mapping = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"field\")\n+ .field(\"type\", \"ip\")\n+ .field(\"null_value\", \"::1\")\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ mapper = parser.parse(\"type\", new CompressedXContent(mapping));\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+\n+ doc = mapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .nullField(\"field\")\n+ .endObject()\n+ .bytes());\n+ IndexableField[] fields = doc.rootDoc().getFields(\"field\");\n+ assertEquals(2, fields.length);\n+ IndexableField pointField = fields[0];\n+ assertEquals(1, pointField.fieldType().pointDimensionCount());\n+ assertEquals(16, pointField.fieldType().pointNumBytes());\n+ assertFalse(pointField.fieldType().stored());\n+ assertEquals(new BytesRef(InetAddressPoint.encode(InetAddresses.forString(\"::1\"))), pointField.binaryValue());\n+ IndexableField dvField = fields[1];\n+ assertEquals(DocValuesType.SORTED_SET, dvField.fieldType().docValuesType());\n+ assertEquals(new BytesRef(InetAddressPoint.encode(InetAddresses.forString(\"::1\"))), dvField.binaryValue());\n+ assertFalse(dvField.fieldType().stored());\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/ip/IpFieldMapperTests.java", "status": "modified" } ] }
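Much of the fix above reduces to one idea: when a field value may legitimately be JSON `null`, the mapper should call `textOrNull()` rather than `text()` so the value can fall through to the mapping's `null_value`. The sketch below illustrates that with a hypothetical `TokenParser` stand-in; it is not the real `XContentParser` API.

```java
public class NullValueSketch {
    // Hypothetical stand-in for a token-stream parser such as XContentParser.
    interface TokenParser {
        boolean currentTokenIsNull();
        String text(); // throws if the current token is VALUE_NULL
        default String textOrNull() { return currentTokenIsNull() ? null : text(); }
    }

    static String parseDate(TokenParser parser, String configuredNullValue) {
        // textOrNull() lets an explicit JSON null fall through to the mapping's
        // null_value (which may itself be null, meaning the field is simply skipped).
        String dateAsString = parser.textOrNull();
        if (dateAsString == null) {
            dateAsString = configuredNullValue;
        }
        return dateAsString;
    }

    public static void main(String[] args) {
        TokenParser nullToken = new TokenParser() {
            public boolean currentTokenIsNull() { return true; }
            public String text() { throw new IllegalStateException("Can't get text on a VALUE_NULL"); }
        };
        System.out.println(parseDate(nullToken, "2016-03-11")); // prints 2016-03-11
    }
}
```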
{ "body": "Putting this mapping:\n\n```\nPUT t \n{\n \"mappings\": {\n \"t\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\",\n \"index\": false\n }\n }\n }\n }\n}\n```\n\nResults in:\n\n```\n{\n \"t\": {\n \"mappings\": {\n \"t\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"keyword\"\n }\n }\n }\n }\n }\n}\n```\n", "comments": [ { "body": "I suspect the parser gets confused by the fact that it mixes new/old syntax (`index:false` instead of `index:no` on a legacy `string` field).\n", "created_at": "2016-04-29T08:56:14Z" }, { "body": "What is the expected behaviour? I think it should throw an error?\n", "created_at": "2016-04-29T08:59:23Z" }, { "body": "I'm ok with throwing the error, as at least it doesn't fail silently. The alternative would probably be more complex\n", "created_at": "2016-04-29T09:02:32Z" } ], "number": 18062, "title": "String with index:false results in keyword with index:true" }
{ "body": "Closes #18062\n", "number": 18082, "review_comments": [], "title": "Fail automatic string upgrade if the value of `index` is not recognized." }
{ "commits": [], "files": [] }
{ "body": "I've seen this a few times in the last few days. This is a fresh build off of master (72fb93e61220550d36eb5b63227cbb86df8e4a72) and a fresh data directory. The node was started, had some data indexed into it, deleted those indices, and shutdown the node, and then started the node again and saw this stacktrace on startup:\n\n```\n[2016-04-28 19:13:59,265][WARN ][cluster.service ] [Ape-Man] failed to notify ClusterStateListener \njava.lang.IllegalStateException: Cannot delete index [[data/czEFA9CDRWSpUdU0GPzSIw]], it is still part of the cluster state.\n at org.elasticsearch.indices.IndicesService.verifyIndexIsDeleted(IndicesService.java:686)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyDeletedIndices(IndicesClusterStateService.java:250)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:183)\n at org.elasticsearch.cluster.service.ClusterService.runTasksForExecutor(ClusterService.java:652)\n at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:814)\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:392)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:237)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:200)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n```\n\nAnother observation is that there is _not_ a folder on disk with that UUID.\n", "comments": [ { "body": "There is a tombstone for this index in the cluster state:\n\n```\n {\n \"index\" : {\n \"index_name\" : \"data\",\n \"index_uuid\" : \"czEFA9CDRWSpUdU0GPzSIw\"\n },\n \"delete_date_in_millis\": 1461885168863\n }\n```\n", "created_at": "2016-04-28T23:28:18Z" }, { "body": "Here's one more observation: creating an index with the same name as the deleted index appears to be related to the message here. In particular, if I delete the second index (with the same name) then that message does not appear. But if I create the index again (so a third time), the message does.\n", "created_at": "2016-04-28T23:35:55Z" } ], "number": 18054, "title": "Cannot delete index, it is still part of the cluster state" }
{ "body": "When checking if an index tombstone can be applied, use both the index name and uuid because the cluster state may contain an active index of the same name (but different uuid).\n\nCloses #18054\n", "number": 18058, "review_comments": [], "title": "Fix issue with tombstones matching active indices in cluster state" }
{ "commits": [ { "message": "When checking if an index tombstone can be applied, use both the index\nname and uuid because the cluster state may contain an active index of\nthe same name (but different uuid).\n\nCloses #18058\nCloses #18054" } ], "files": [ { "diff": "@@ -681,8 +681,8 @@ public boolean canDeleteIndexContents(Index index, IndexSettings indexSettings)\n */\n @Nullable\n public IndexMetaData verifyIndexIsDeleted(final Index index, final ClusterState clusterState) {\n- // this method should only be called when we know the index is not part of the cluster state\n- if (clusterState.metaData().hasIndex(index.getName())) {\n+ // this method should only be called when we know the index (name + uuid) is not part of the cluster state\n+ if (clusterState.metaData().index(index) != null) {\n throw new IllegalStateException(\"Cannot delete index [\" + index + \"], it is still part of the cluster state.\");\n }\n if (nodeEnv.hasNodeFile() && FileSystemUtils.exists(nodeEnv.indexPaths(index))) {", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java", "status": "modified" }, { "diff": "@@ -21,8 +21,10 @@\n import org.apache.lucene.store.LockObtainFailedException;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;\n+import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.AliasAction;\n+import org.elasticsearch.cluster.metadata.IndexGraveyard;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.service.ClusterService;\n@@ -283,6 +285,36 @@ public void testDanglingIndicesWithAliasConflict() throws Exception {\n indicesService.deleteIndex(test.index(), \"finished with test\");\n }\n \n+ /**\n+ * This test checks an edge case where, if a node had an index (lets call it A with UUID 1), then\n+ * deleted it (so a tombstone entry for A will exist in the cluster state), then created\n+ * a new index A with UUID 2, then shutdown, when the node comes back online, it will look at the\n+ * tombstones for deletions, and it should proceed with trying to delete A with UUID 1 and not\n+ * throw any errors that the index still exists in the cluster state. 
This is a case of ensuring\n+ * that tombstones that have the same name as current valid indices don't cause confusion by\n+ * trying to delete an index that exists.\n+ * See https://github.com/elastic/elasticsearch/issues/18054\n+ */\n+ public void testIndexAndTombstoneWithSameNameOnStartup() throws Exception {\n+ final String indexName = \"test\";\n+ final Index index = new Index(indexName, UUIDs.randomBase64UUID());\n+ final IndicesService indicesService = getIndicesService();\n+ final Settings idxSettings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .build();\n+ final IndexMetaData indexMetaData = new IndexMetaData.Builder(index.getName())\n+ .settings(idxSettings)\n+ .numberOfShards(1)\n+ .numberOfReplicas(0)\n+ .build();\n+ final Index tombstonedIndex = new Index(indexName, UUIDs.randomBase64UUID());\n+ final IndexGraveyard graveyard = IndexGraveyard.builder().addTombstone(tombstonedIndex).build();\n+ final MetaData metaData = MetaData.builder().put(indexMetaData, true).indexGraveyard(graveyard).build();\n+ final ClusterState clusterState = new ClusterState.Builder(new ClusterName(\"testCluster\")).metaData(metaData).build();\n+ // if all goes well, this won't throw an exception, otherwise, it will throw an IllegalStateException\n+ indicesService.verifyIndexIsDeleted(tombstonedIndex, clusterState);\n+ }\n+\n private static class DanglingListener implements LocalAllocateDangledIndices.Listener {\n final CountDownLatch latch = new CountDownLatch(1);\n ", "filename": "core/src/test/java/org/elasticsearch/indices/IndicesServiceTests.java", "status": "modified" } ] }
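The one-line change above swaps a name-only lookup (`hasIndex(name)`) for a lookup by the full `Index` (name plus UUID). The standalone sketch below shows why that distinction matters once an index has been deleted and recreated under the same name; the types and values are illustrative stand-ins, with the tombstoned UUID borrowed from the issue report.

```java
import java.util.List;

public class TombstoneCheckSketch {
    // Minimal stand-in for an index identity: name plus the UUID assigned at creation.
    record Index(String name, String uuid) {}

    // A name-only check wrongly matches a recreated index that reuses the old name;
    // comparing name + uuid only matches the exact generation the tombstone refers to.
    static boolean stillInClusterState(List<Index> liveIndices, Index tombstoned) {
        return liveIndices.stream().anyMatch(idx ->
            idx.name().equals(tombstoned.name()) && idx.uuid().equals(tombstoned.uuid()));
    }

    public static void main(String[] args) {
        List<Index> live = List.of(new Index("data", "newUuid"));       // index recreated with the same name
        Index tombstoned = new Index("data", "czEFA9CDRWSpUdU0GPzSIw"); // the deleted generation
        System.out.println(stillInClusterState(live, tombstoned));      // false -> safe to delete leftover files
    }
}
```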
{ "body": "The static helpers on PointValues throws an exception if a leaf has points indexed but not for the given field. \nWhen using this function in es the exception is never caught which could lead to error in search request or field stats request.\nThe problem has been fixed in Lucene: https://issues.apache.org/jira/browse/LUCENE-7257 but we agreed with @jpountz that we should have a temporary fix in es that we could remove afterward.\nThe proposal is to have a XPointValues which acts exactly like the PointValues with the LUCENE-7257 patch and to use this class to call all the static helpers in es.\n", "comments": [], "number": 18010, "title": "Uncaught exception thrown from PointValues static functions" }
{ "body": "Forked utility methods from Lucene's PointValues until LUCENE-7257 is released.\nReplace PointValues with XPointValues where needed.\nFixes #18010\n", "number": 18011, "review_comments": [], "title": "Add XPointValues" }
{ "commits": [ { "message": "Add XPointValues: forked utility methods from Lucene's PointValues until LUCENE-7257 is released.\nReplace PointValues with XPointValues where needed.\nFixes #18010" } ], "files": [ { "diff": "@@ -0,0 +1,130 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.lucene.index;\n+import org.apache.lucene.util.StringHelper;\n+\n+import java.io.IOException;\n+\n+/**\n+ * Forked utility methods from Lucene's PointValues until LUCENE-7257 is released.\n+ */\n+public class XPointValues {\n+ /** Return the cumulated number of points across all leaves of the given\n+ * {@link IndexReader}. Leaves that do not have points for the given field\n+ * are ignored.\n+ * @see PointValues#size(String) */\n+ public static long size(IndexReader reader, String field) throws IOException {\n+ long size = 0;\n+ for (LeafReaderContext ctx : reader.leaves()) {\n+ FieldInfo info = ctx.reader().getFieldInfos().fieldInfo(field);\n+ if (info == null || info.getPointDimensionCount() == 0) {\n+ continue;\n+ }\n+ PointValues values = ctx.reader().getPointValues();\n+ size += values.size(field);\n+ }\n+ return size;\n+ }\n+\n+ /** Return the cumulated number of docs that have points across all leaves\n+ * of the given {@link IndexReader}. Leaves that do not have points for the\n+ * given field are ignored.\n+ * @see PointValues#getDocCount(String) */\n+ public static int getDocCount(IndexReader reader, String field) throws IOException {\n+ int count = 0;\n+ for (LeafReaderContext ctx : reader.leaves()) {\n+ FieldInfo info = ctx.reader().getFieldInfos().fieldInfo(field);\n+ if (info == null || info.getPointDimensionCount() == 0) {\n+ continue;\n+ }\n+ PointValues values = ctx.reader().getPointValues();\n+ count += values.getDocCount(field);\n+ }\n+ return count;\n+ }\n+\n+ /** Return the minimum packed values across all leaves of the given\n+ * {@link IndexReader}. 
Leaves that do not have points for the given field\n+ * are ignored.\n+ * @see PointValues#getMinPackedValue(String) */\n+ public static byte[] getMinPackedValue(IndexReader reader, String field) throws IOException {\n+ byte[] minValue = null;\n+ for (LeafReaderContext ctx : reader.leaves()) {\n+ FieldInfo info = ctx.reader().getFieldInfos().fieldInfo(field);\n+ if (info == null || info.getPointDimensionCount() == 0) {\n+ continue;\n+ }\n+ PointValues values = ctx.reader().getPointValues();\n+ byte[] leafMinValue = values.getMinPackedValue(field);\n+ if (leafMinValue == null) {\n+ continue;\n+ }\n+ if (minValue == null) {\n+ minValue = leafMinValue.clone();\n+ } else {\n+ final int numDimensions = values.getNumDimensions(field);\n+ final int numBytesPerDimension = values.getBytesPerDimension(field);\n+ for (int i = 0; i < numDimensions; ++i) {\n+ int offset = i * numBytesPerDimension;\n+ if (StringHelper.compare(numBytesPerDimension, leafMinValue, offset, minValue, offset) < 0) {\n+ System.arraycopy(leafMinValue, offset, minValue, offset, numBytesPerDimension);\n+ }\n+ }\n+ }\n+ }\n+ return minValue;\n+ }\n+\n+ /** Return the maximum packed values across all leaves of the given\n+ * {@link IndexReader}. Leaves that do not have points for the given field\n+ * are ignored.\n+ * @see PointValues#getMaxPackedValue(String) */\n+ public static byte[] getMaxPackedValue(IndexReader reader, String field) throws IOException {\n+ byte[] maxValue = null;\n+ for (LeafReaderContext ctx : reader.leaves()) {\n+ FieldInfo info = ctx.reader().getFieldInfos().fieldInfo(field);\n+ if (info == null || info.getPointDimensionCount() == 0) {\n+ continue;\n+ }\n+ PointValues values = ctx.reader().getPointValues();\n+ byte[] leafMaxValue = values.getMaxPackedValue(field);\n+ if (leafMaxValue == null) {\n+ continue;\n+ }\n+ if (maxValue == null) {\n+ maxValue = leafMaxValue.clone();\n+ } else {\n+ final int numDimensions = values.getNumDimensions(field);\n+ final int numBytesPerDimension = values.getBytesPerDimension(field);\n+ for (int i = 0; i < numDimensions; ++i) {\n+ int offset = i * numBytesPerDimension;\n+ if (StringHelper.compare(numBytesPerDimension, leafMaxValue, offset, maxValue, offset) > 0) {\n+ System.arraycopy(leafMaxValue, offset, maxValue, offset, numBytesPerDimension);\n+ }\n+ }\n+ }\n+ }\n+ return maxValue;\n+ }\n+\n+ /** Default constructor */\n+ private XPointValues() {\n+ }\n+}", "filename": "core/src/main/java/org/apache/lucene/index/XPointValues.java", "status": "added" }, { "diff": "@@ -20,12 +20,12 @@\n package org.elasticsearch.index.mapper.core;\n \n import org.apache.lucene.document.Field;\n-import org.apache.lucene.document.LongPoint;\n-import org.apache.lucene.document.SortedNumericDocValuesField;\n import org.apache.lucene.document.StoredField;\n+import org.apache.lucene.document.SortedNumericDocValuesField;\n+import org.apache.lucene.document.LongPoint;\n+import org.apache.lucene.index.XPointValues;\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.IndexReader;\n-import org.apache.lucene.index.PointValues;\n import org.apache.lucene.search.BoostQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.util.BytesRef;\n@@ -394,13 +394,13 @@ private static Callable<Long> now() {\n @Override\n public FieldStats.Date stats(IndexReader reader) throws IOException {\n String field = name();\n- long size = PointValues.size(reader, field);\n+ long size = XPointValues.size(reader, field);\n if (size == 0) {\n return new 
FieldStats.Date(reader.maxDoc(), isSearchable(), isAggregatable(), dateTimeFormatter());\n }\n- int docCount = PointValues.getDocCount(reader, field);\n- byte[] min = PointValues.getMinPackedValue(reader, field);\n- byte[] max = PointValues.getMaxPackedValue(reader, field);\n+ int docCount = XPointValues.getDocCount(reader, field);\n+ byte[] min = XPointValues.getMinPackedValue(reader, field);\n+ byte[] max = XPointValues.getMaxPackedValue(reader, field);\n return new FieldStats.Date(reader.maxDoc(),docCount, -1L, size,\n isSearchable(), isAggregatable(),\n dateTimeFormatter(), LongPoint.decodeDimension(min, 0), LongPoint.decodeDimension(max, 0));\n@@ -415,13 +415,13 @@ public Relation isFieldWithinQuery(IndexReader reader,\n dateParser = this.dateMathParser;\n }\n \n- if (PointValues.size(reader, name()) == 0) {\n+ if (XPointValues.size(reader, name()) == 0) {\n // no points, so nothing matches\n return Relation.DISJOINT;\n }\n \n- long minValue = LongPoint.decodeDimension(PointValues.getMinPackedValue(reader, name()), 0);\n- long maxValue = LongPoint.decodeDimension(PointValues.getMaxPackedValue(reader, name()), 0);\n+ long minValue = LongPoint.decodeDimension(XPointValues.getMinPackedValue(reader, name()), 0);\n+ long maxValue = LongPoint.decodeDimension(XPointValues.getMaxPackedValue(reader, name()), 0);\n \n long fromInclusive = Long.MIN_VALUE;\n if (from != null) {", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java", "status": "modified" }, { "diff": "@@ -28,7 +28,7 @@\n import org.apache.lucene.document.StoredField;\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.IndexReader;\n-import org.apache.lucene.index.PointValues;\n+import org.apache.lucene.index.XPointValues;\n import org.apache.lucene.search.BoostQuery;\n import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.Query;\n@@ -260,13 +260,13 @@ public List<Field> createFields(String name, Number value,\n @Override\n FieldStats.Double stats(IndexReader reader, String fieldName,\n boolean isSearchable, boolean isAggregatable) throws IOException {\n- long size = PointValues.size(reader, fieldName);\n+ long size = XPointValues.size(reader, fieldName);\n if (size == 0) {\n return new FieldStats.Double(reader.maxDoc(), isSearchable, isAggregatable);\n }\n- int docCount = PointValues.getDocCount(reader, fieldName);\n- byte[] min = PointValues.getMinPackedValue(reader, fieldName);\n- byte[] max = PointValues.getMaxPackedValue(reader, fieldName);\n+ int docCount = XPointValues.getDocCount(reader, fieldName);\n+ byte[] min = XPointValues.getMinPackedValue(reader, fieldName);\n+ byte[] max = XPointValues.getMaxPackedValue(reader, fieldName);\n return new FieldStats.Double(reader.maxDoc(),docCount, -1L, size,\n isSearchable, isAggregatable,\n FloatPoint.decodeDimension(min, 0), FloatPoint.decodeDimension(max, 0));\n@@ -351,13 +351,13 @@ public List<Field> createFields(String name, Number value,\n @Override\n FieldStats.Double stats(IndexReader reader, String fieldName,\n boolean isSearchable, boolean isAggregatable) throws IOException {\n- long size = PointValues.size(reader, fieldName);\n+ long size = XPointValues.size(reader, fieldName);\n if (size == 0) {\n return new FieldStats.Double(reader.maxDoc(), isSearchable, isAggregatable);\n }\n- int docCount = PointValues.getDocCount(reader, fieldName);\n- byte[] min = PointValues.getMinPackedValue(reader, fieldName);\n- byte[] max = PointValues.getMaxPackedValue(reader, fieldName);\n+ int 
docCount = XPointValues.getDocCount(reader, fieldName);\n+ byte[] min = XPointValues.getMinPackedValue(reader, fieldName);\n+ byte[] max = XPointValues.getMaxPackedValue(reader, fieldName);\n return new FieldStats.Double(reader.maxDoc(),docCount, -1L, size,\n isSearchable, isAggregatable,\n DoublePoint.decodeDimension(min, 0), DoublePoint.decodeDimension(max, 0));\n@@ -565,13 +565,13 @@ public List<Field> createFields(String name, Number value,\n @Override\n FieldStats.Long stats(IndexReader reader, String fieldName,\n boolean isSearchable, boolean isAggregatable) throws IOException {\n- long size = PointValues.size(reader, fieldName);\n+ long size = XPointValues.size(reader, fieldName);\n if (size == 0) {\n return new FieldStats.Long(reader.maxDoc(), isSearchable, isAggregatable);\n }\n- int docCount = PointValues.getDocCount(reader, fieldName);\n- byte[] min = PointValues.getMinPackedValue(reader, fieldName);\n- byte[] max = PointValues.getMaxPackedValue(reader, fieldName);\n+ int docCount = XPointValues.getDocCount(reader, fieldName);\n+ byte[] min = XPointValues.getMinPackedValue(reader, fieldName);\n+ byte[] max = XPointValues.getMaxPackedValue(reader, fieldName);\n return new FieldStats.Long(reader.maxDoc(),docCount, -1L, size,\n isSearchable, isAggregatable,\n IntPoint.decodeDimension(min, 0), IntPoint.decodeDimension(max, 0));\n@@ -661,13 +661,13 @@ public List<Field> createFields(String name, Number value,\n @Override\n FieldStats.Long stats(IndexReader reader, String fieldName,\n boolean isSearchable, boolean isAggregatable) throws IOException {\n- long size = PointValues.size(reader, fieldName);\n+ long size = XPointValues.size(reader, fieldName);\n if (size == 0) {\n return new FieldStats.Long(reader.maxDoc(), isSearchable, isAggregatable);\n }\n- int docCount = PointValues.getDocCount(reader, fieldName);\n- byte[] min = PointValues.getMinPackedValue(reader, fieldName);\n- byte[] max = PointValues.getMaxPackedValue(reader, fieldName);\n+ int docCount = XPointValues.getDocCount(reader, fieldName);\n+ byte[] min = XPointValues.getMinPackedValue(reader, fieldName);\n+ byte[] max = XPointValues.getMaxPackedValue(reader, fieldName);\n return new FieldStats.Long(reader.maxDoc(),docCount, -1L, size,\n isSearchable, isAggregatable,\n LongPoint.decodeDimension(min, 0), LongPoint.decodeDimension(max, 0));", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/NumberFieldMapper.java", "status": "modified" }, { "diff": "@@ -26,7 +26,7 @@\n import org.apache.lucene.document.XInetAddressPoint;\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.IndexReader;\n-import org.apache.lucene.index.PointValues;\n+import org.apache.lucene.index.XPointValues;\n import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.util.BytesRef;\n@@ -227,13 +227,13 @@ public Query fuzzyQuery(Object value, Fuzziness fuzziness, int prefixLength, int\n @Override\n public FieldStats.Ip stats(IndexReader reader) throws IOException {\n String field = name();\n- long size = PointValues.size(reader, field);\n+ long size = XPointValues.size(reader, field);\n if (size == 0) {\n return new FieldStats.Ip(reader.maxDoc(), isSearchable(), isAggregatable());\n }\n- int docCount = PointValues.getDocCount(reader, field);\n- byte[] min = PointValues.getMinPackedValue(reader, field);\n- byte[] max = PointValues.getMaxPackedValue(reader, field);\n+ int docCount = XPointValues.getDocCount(reader, field);\n+ byte[] min = 
XPointValues.getMinPackedValue(reader, field);\n+ byte[] max = XPointValues.getMaxPackedValue(reader, field);\n return new FieldStats.Ip(reader.maxDoc(), docCount, -1L, size,\n isSearchable(), isAggregatable(),\n InetAddressPoint.decode(min), InetAddressPoint.decode(max));", "filename": "core/src/main/java/org/elasticsearch/index/mapper/ip/IpFieldMapper.java", "status": "modified" }, { "diff": "@@ -113,6 +113,11 @@ public void testIsFieldWithinQuery() throws IOException {\n doTestIsFieldWithinQuery(ft, reader, null, alternateFormat);\n doTestIsFieldWithinQuery(ft, reader, DateTimeZone.UTC, null);\n doTestIsFieldWithinQuery(ft, reader, DateTimeZone.UTC, alternateFormat);\n+\n+ // Fields with no value indexed.\n+ DateFieldType ft2 = new DateFieldType();\n+ ft2.setName(\"my_date2\");\n+ assertEquals(Relation.DISJOINT, ft2.isFieldWithinQuery(reader, \"2015-10-09\", \"2016-01-02\", false, false, null, null));\n IOUtils.close(reader, w, dir);\n }\n ", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/DateFieldTypeTests.java", "status": "modified" } ] }
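For readers skimming the record above, here is a minimal, illustrative sketch of how the forked `XPointValues` helpers are consumed, mirroring the `DateFieldMapper#stats` change in this PR. The index path and field name are hypothetical and not taken from the PR; it assumes a Lucene 6.x index containing a single-dimension `LongPoint` field.

```java
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.XPointValues;
import org.apache.lucene.store.FSDirectory;

import java.io.IOException;
import java.nio.file.Paths;

public class XPointValuesUsage {
    public static void main(String[] args) throws IOException {
        // Hypothetical index location and field name, for illustration only.
        try (IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get("/tmp/index")))) {
            String field = "timestamp";
            long size = XPointValues.size(reader, field);
            if (size == 0) {
                // Same early exit the mappers use: no points indexed for this field.
                System.out.println("no points indexed for " + field);
                return;
            }
            int docCount = XPointValues.getDocCount(reader, field);
            byte[] min = XPointValues.getMinPackedValue(reader, field);
            byte[] max = XPointValues.getMaxPackedValue(reader, field);
            // LongPoint stores one 8-byte dimension, so decode dimension 0.
            System.out.println(field + ": docCount=" + docCount
                    + " min=" + LongPoint.decodeDimension(min, 0)
                    + " max=" + LongPoint.decodeDimension(max, 0));
        }
    }
}
```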
{ "body": "Since we switched to points to index ip addresses, a couple things are not working anymore:\n- [x] range queries only support inclusive bounds (#17777)\n- [x] range aggregations do not work anymore (#17859)\n- [x] sorting on ip addresses fails since it tries to write binary bytes as an utf8 string when rendering sort values (#17959)\n- [x] sorting and aggregations across old and new indices do not work since the coordinating node gets longs from some shards and binary values from other shards and does not know how to reconcile them (#18593)\n- [x] terms aggregations return binary keys (#18003)\n", "comments": [], "number": 17971, "title": "Not all features work on ip fields" }
{ "body": "Currently terms on an ip address try to put their binary representation in the\njson response. With this commit, they would return a formatted ip address:\n\n```\n \"buckets\": [\n {\n \"key\": \"192.168.1.7\",\n \"doc_count\": 1\n }\n ]\n```\n\nRelates to #17971\n", "number": 18003, "review_comments": [], "title": "Fix xcontent rendering of ip terms aggs." }
{ "commits": [ { "message": "Fix xcontent rendering of ip terms aggs. #18003\n\nCurrently terms on an ip address try to put their binary representation in the\njson response. With this commit, they would return a formatted ip address:\n\n```\n \"buckets\": [\n {\n \"key\": \"192.168.1.7\",\n \"doc_count\": 1\n }\n ]\n```" } ], "files": [ { "diff": "@@ -110,7 +110,7 @@ public SignificantStringTerms buildAggregation(long owningBucketOrdinal) throws\n }\n \n if (spare == null) {\n- spare = new SignificantStringTerms.Bucket(new BytesRef(), 0, 0, 0, 0, null);\n+ spare = new SignificantStringTerms.Bucket(new BytesRef(), 0, 0, 0, 0, null, format);\n }\n spare.bucketOrd = bucketOrd;\n copy(globalOrds.lookupOrd(globalTermOrd), spare.termBytes);\n@@ -135,7 +135,7 @@ public SignificantStringTerms buildAggregation(long owningBucketOrdinal) throws\n list[i] = bucket;\n }\n \n- return new SignificantStringTerms(subsetSize, supersetSize, name, bucketCountThresholds.getRequiredSize(),\n+ return new SignificantStringTerms(subsetSize, supersetSize, name, format, bucketCountThresholds.getRequiredSize(),\n bucketCountThresholds.getMinDocCount(), significanceHeuristic, Arrays.asList(list), pipelineAggregators(),\n metaData());\n }\n@@ -146,7 +146,7 @@ public SignificantStringTerms buildEmptyAggregation() {\n ContextIndexSearcher searcher = context.searchContext().searcher();\n IndexReader topReader = searcher.getIndexReader();\n int supersetSize = topReader.numDocs();\n- return new SignificantStringTerms(0, supersetSize, name, bucketCountThresholds.getRequiredSize(),\n+ return new SignificantStringTerms(0, supersetSize, name, format, bucketCountThresholds.getRequiredSize(),\n bucketCountThresholds.getMinDocCount(), significanceHeuristic,\n Collections.<InternalSignificantTerms.Bucket> emptyList(), pipelineAggregators(), metaData());\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/GlobalOrdinalsSignificantTermsAggregator.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n \n import org.elasticsearch.common.io.stream.Streamable;\n import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.Aggregations;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.InternalAggregations;\n@@ -33,6 +34,7 @@\n import java.util.Iterator;\n import java.util.List;\n import java.util.Map;\n+import java.util.Objects;\n \n /**\n *\n@@ -56,15 +58,19 @@ public static abstract class Bucket extends SignificantTerms.Bucket {\n long bucketOrd;\n protected InternalAggregations aggregations;\n double score;\n+ transient final DocValueFormat format;\n \n- protected Bucket(long subsetSize, long supersetSize) {\n+ protected Bucket(long subsetSize, long supersetSize, DocValueFormat format) {\n // for serialization\n super(subsetSize, supersetSize);\n+ this.format = format;\n }\n \n- protected Bucket(long subsetDf, long subsetSize, long supersetDf, long supersetSize, InternalAggregations aggregations) {\n+ protected Bucket(long subsetDf, long subsetSize, long supersetDf, long supersetSize,\n+ InternalAggregations aggregations, DocValueFormat format) {\n super(subsetDf, subsetSize, supersetDf, supersetSize);\n this.aggregations = aggregations;\n+ this.format = format;\n }\n \n @Override\n@@ -122,16 +128,19 @@ public double getSignificanceScore() {\n }\n }\n \n- protected InternalSignificantTerms(long subsetSize, long supersetSize, String 
name, int requiredSize, long minDocCount,\n- SignificanceHeuristic significanceHeuristic, List<? extends Bucket> buckets, List<PipelineAggregator> pipelineAggregators,\n- Map<String, Object> metaData) {\n+ protected DocValueFormat format;\n+\n+ protected InternalSignificantTerms(long subsetSize, long supersetSize, String name, DocValueFormat format, int requiredSize,\n+ long minDocCount, SignificanceHeuristic significanceHeuristic, List<? extends Bucket> buckets,\n+ List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData) {\n super(name, pipelineAggregators, metaData);\n this.requiredSize = requiredSize;\n this.minDocCount = minDocCount;\n this.buckets = buckets;\n this.subsetSize = subsetSize;\n this.supersetSize = supersetSize;\n this.significanceHeuristic = significanceHeuristic;\n+ this.format = Objects.requireNonNull(format);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/InternalSignificantTerms.java", "status": "modified" }, { "diff": "@@ -81,18 +81,15 @@ public static void registerStreams() {\n static class Bucket extends InternalSignificantTerms.Bucket {\n \n long term;\n- private transient final DocValueFormat format;\n \n- public Bucket(long subsetSize, long supersetSize, DocValueFormat formatter) {\n- super(subsetSize, supersetSize);\n- this.format = formatter;\n+ public Bucket(long subsetSize, long supersetSize, DocValueFormat format) {\n+ super(subsetSize, supersetSize, format);\n // for serialization\n }\n \n public Bucket(long subsetDf, long subsetSize, long supersetDf, long supersetSize, long term, InternalAggregations aggregations,\n DocValueFormat format) {\n- super(subsetDf, subsetSize, supersetDf, supersetSize, aggregations);\n- this.format = format;\n+ super(subsetDf, subsetSize, supersetDf, supersetSize, aggregations, format);\n this.term = term;\n }\n \n@@ -160,7 +157,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n return builder;\n }\n }\n- private DocValueFormat format;\n \n SignificantLongTerms() {\n } // for serialization\n@@ -169,8 +165,7 @@ public SignificantLongTerms(long subsetSize, long supersetSize, String name, Doc\n long minDocCount, SignificanceHeuristic significanceHeuristic, List<? 
extends InternalSignificantTerms.Bucket> buckets,\n List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData) {\n \n- super(subsetSize, supersetSize, name, requiredSize, minDocCount, significanceHeuristic, buckets, pipelineAggregators, metaData);\n- this.format = Objects.requireNonNull(format);\n+ super(subsetSize, supersetSize, name, format, requiredSize, minDocCount, significanceHeuristic, buckets, pipelineAggregators, metaData);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantLongTerms.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.AggregationStreams;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.InternalAggregations;\n@@ -55,7 +56,8 @@ public SignificantStringTerms readResult(StreamInput in) throws IOException {\n private final static BucketStreams.Stream<Bucket> BUCKET_STREAM = new BucketStreams.Stream<Bucket>() {\n @Override\n public Bucket readResult(StreamInput in, BucketStreamContext context) throws IOException {\n- Bucket buckets = new Bucket((long) context.attributes().get(\"subsetSize\"), (long) context.attributes().get(\"supersetSize\"));\n+ Bucket buckets = new Bucket((long) context.attributes().get(\"subsetSize\"),\n+ (long) context.attributes().get(\"supersetSize\"), context.format());\n buckets.readFrom(in);\n return buckets;\n }\n@@ -84,18 +86,20 @@ public static class Bucket extends InternalSignificantTerms.Bucket {\n \n BytesRef termBytes;\n \n- public Bucket(long subsetSize, long supersetSize) {\n+ public Bucket(long subsetSize, long supersetSize, DocValueFormat format) {\n // for serialization\n- super(subsetSize, supersetSize);\n+ super(subsetSize, supersetSize, format);\n }\n \n- public Bucket(BytesRef term, long subsetDf, long subsetSize, long supersetDf, long supersetSize, InternalAggregations aggregations) {\n- super(subsetDf, subsetSize, supersetDf, supersetSize, aggregations);\n+ public Bucket(BytesRef term, long subsetDf, long subsetSize, long supersetDf, long supersetSize, InternalAggregations aggregations,\n+ DocValueFormat format) {\n+ super(subsetDf, subsetSize, supersetDf, supersetSize, aggregations, format);\n this.termBytes = term;\n }\n \n- public Bucket(BytesRef term, long subsetDf, long subsetSize, long supersetDf, long supersetSize, InternalAggregations aggregations, double score) {\n- this(term, subsetDf, subsetSize, supersetDf, supersetSize, aggregations);\n+ public Bucket(BytesRef term, long subsetDf, long subsetSize, long supersetDf, long supersetSize,\n+ InternalAggregations aggregations, double score, DocValueFormat format) {\n+ this(term, subsetDf, subsetSize, supersetDf, supersetSize, aggregations, format);\n this.score = score;\n }\n \n@@ -112,7 +116,7 @@ int compareTerm(SignificantTerms.Bucket other) {\n \n @Override\n public String getKeyAsString() {\n- return termBytes.utf8ToString();\n+ return format.format(termBytes);\n }\n \n @Override\n@@ -122,7 +126,7 @@ public String getKey() {\n \n @Override\n Bucket newBucket(long subsetDf, long subsetSize, long supersetDf, long supersetSize, InternalAggregations aggregations) {\n- return new Bucket(termBytes, subsetDf, subsetSize, supersetDf, supersetSize, 
aggregations);\n+ return new Bucket(termBytes, subsetDf, subsetSize, supersetDf, supersetSize, aggregations, format);\n }\n \n @Override\n@@ -146,7 +150,7 @@ public void writeTo(StreamOutput out) throws IOException {\n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n builder.startObject();\n- builder.utf8Field(CommonFields.KEY, termBytes);\n+ builder.field(CommonFields.KEY, getKeyAsString());\n builder.field(CommonFields.DOC_COUNT, getDocCount());\n builder.field(\"score\", score);\n builder.field(\"bg_count\", supersetDf);\n@@ -158,11 +162,11 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n \n SignificantStringTerms() {} // for serialization\n \n- public SignificantStringTerms(long subsetSize, long supersetSize, String name, int requiredSize, long minDocCount,\n- SignificanceHeuristic significanceHeuristic, List<? extends InternalSignificantTerms.Bucket> buckets,\n+ public SignificantStringTerms(long subsetSize, long supersetSize, String name, DocValueFormat format, int requiredSize,\n+ long minDocCount, SignificanceHeuristic significanceHeuristic, List<? extends InternalSignificantTerms.Bucket> buckets,\n List<PipelineAggregator> pipelineAggregators,\n Map<String, Object> metaData) {\n- super(subsetSize, supersetSize, name, requiredSize, minDocCount, significanceHeuristic, buckets, pipelineAggregators, metaData);\n+ super(subsetSize, supersetSize, name, format, requiredSize, minDocCount, significanceHeuristic, buckets, pipelineAggregators, metaData);\n }\n \n @Override\n@@ -172,25 +176,26 @@ public Type type() {\n \n @Override\n public SignificantStringTerms create(List<SignificantStringTerms.Bucket> buckets) {\n- return new SignificantStringTerms(this.subsetSize, this.supersetSize, this.name, this.requiredSize, this.minDocCount,\n+ return new SignificantStringTerms(this.subsetSize, this.supersetSize, this.name, this.format, this.requiredSize, this.minDocCount,\n this.significanceHeuristic, buckets, this.pipelineAggregators(), this.metaData);\n }\n \n @Override\n public Bucket createBucket(InternalAggregations aggregations, SignificantStringTerms.Bucket prototype) {\n return new Bucket(prototype.termBytes, prototype.subsetDf, prototype.subsetSize, prototype.supersetDf, prototype.supersetSize,\n- aggregations);\n+ aggregations, prototype.format);\n }\n \n @Override\n protected SignificantStringTerms create(long subsetSize, long supersetSize, List<InternalSignificantTerms.Bucket> buckets,\n InternalSignificantTerms prototype) {\n- return new SignificantStringTerms(subsetSize, supersetSize, prototype.getName(), prototype.requiredSize, prototype.minDocCount,\n- prototype.significanceHeuristic, buckets, prototype.pipelineAggregators(), prototype.getMetaData());\n+ return new SignificantStringTerms(subsetSize, supersetSize, prototype.getName(), prototype.format, prototype.requiredSize,\n+ prototype.minDocCount, prototype.significanceHeuristic, buckets, prototype.pipelineAggregators(), prototype.getMetaData());\n }\n \n @Override\n protected void doReadFrom(StreamInput in) throws IOException {\n+ this.format = in.readNamedWriteable(DocValueFormat.class);\n this.requiredSize = readSize(in);\n this.minDocCount = in.readVLong();\n this.subsetSize = in.readVLong();\n@@ -199,7 +204,7 @@ protected void doReadFrom(StreamInput in) throws IOException {\n int size = in.readVInt();\n List<InternalSignificantTerms.Bucket> buckets = new ArrayList<>(size);\n for (int i = 0; i < size; i++) {\n- Bucket bucket = new 
Bucket(subsetSize, supersetSize);\n+ Bucket bucket = new Bucket(subsetSize, supersetSize, format);\n bucket.readFrom(in);\n buckets.add(bucket);\n }\n@@ -209,6 +214,7 @@ protected void doReadFrom(StreamInput in) throws IOException {\n \n @Override\n protected void doWriteTo(StreamOutput out) throws IOException {\n+ out.writeNamedWriteable(format);\n writeSize(requiredSize, out);\n out.writeVLong(minDocCount);\n out.writeVLong(subsetSize);", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantStringTerms.java", "status": "modified" }, { "diff": "@@ -90,7 +90,7 @@ public SignificantStringTerms buildAggregation(long owningBucketOrdinal) throws\n }\n \n if (spare == null) {\n- spare = new SignificantStringTerms.Bucket(new BytesRef(), 0, 0, 0, 0, null);\n+ spare = new SignificantStringTerms.Bucket(new BytesRef(), 0, 0, 0, 0, null, format);\n }\n \n bucketOrds.get(i, spare.termBytes);\n@@ -117,7 +117,7 @@ public SignificantStringTerms buildAggregation(long owningBucketOrdinal) throws\n list[i] = bucket;\n }\n \n- return new SignificantStringTerms(subsetSize, supersetSize, name, bucketCountThresholds.getRequiredSize(),\n+ return new SignificantStringTerms(subsetSize, supersetSize, name, format, bucketCountThresholds.getRequiredSize(),\n bucketCountThresholds.getMinDocCount(), significanceHeuristic, Arrays.asList(list), pipelineAggregators(),\n metaData());\n }\n@@ -128,7 +128,7 @@ public SignificantStringTerms buildEmptyAggregation() {\n ContextIndexSearcher searcher = context.searchContext().searcher();\n IndexReader topReader = searcher.getIndexReader();\n int supersetSize = topReader.numDocs();\n- return new SignificantStringTerms(0, supersetSize, name, bucketCountThresholds.getRequiredSize(),\n+ return new SignificantStringTerms(0, supersetSize, name, format, bucketCountThresholds.getRequiredSize(),\n bucketCountThresholds.getMinDocCount(), significanceHeuristic,\n Collections.<InternalSignificantTerms.Bucket> emptyList(), pipelineAggregators(), metaData());\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantStringTermsAggregator.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.AggregationStreams;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.InternalAggregations;\n@@ -59,8 +60,8 @@ public static void registerStreams() {\n public UnmappedSignificantTerms(String name, int requiredSize, long minDocCount, List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData) {\n //We pass zero for index/subset sizes because for the purpose of significant term analysis\n // we assume an unmapped index's size is irrelevant to the proceedings.\n- super(0, 0, name, requiredSize, minDocCount, SignificantTermsAggregatorBuilder.DEFAULT_SIGNIFICANCE_HEURISTIC, BUCKETS,\n- pipelineAggregators, metaData);\n+ super(0, 0, name, DocValueFormat.RAW, requiredSize, minDocCount, SignificantTermsAggregatorBuilder.DEFAULT_SIGNIFICANCE_HEURISTIC,\n+ BUCKETS, pipelineAggregators, metaData);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/UnmappedSignificantTerms.java", "status": "modified" }, { "diff": 
"@@ -141,7 +141,7 @@ public void writeTo(StreamOutput out) throws IOException {\n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n builder.startObject();\n- builder.utf8Field(CommonFields.KEY, termBytes);\n+ builder.field(CommonFields.KEY, getKeyAsString());\n builder.field(CommonFields.DOC_COUNT, getDocCount());\n if (showDocCountError) {\n builder.field(InternalTerms.DOC_COUNT_ERROR_UPPER_BOUND_FIELD_NAME, getDocCountError());", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTerms.java", "status": "modified" }, { "diff": "@@ -131,9 +131,9 @@ InternalSignificantTerms[] getRandomSignificantTerms(SignificanceHeuristic heuri\n sTerms[1] = new SignificantLongTerms();\n } else {\n BytesRef term = new BytesRef(\"someterm\");\n- buckets.add(new SignificantStringTerms.Bucket(term, 1, 2, 3, 4, InternalAggregations.EMPTY));\n- sTerms[0] = new SignificantStringTerms(10, 20, \"some_name\", 1, 1, heuristic, buckets, Collections.emptyList(),\n- null);\n+ buckets.add(new SignificantStringTerms.Bucket(term, 1, 2, 3, 4, InternalAggregations.EMPTY, DocValueFormat.RAW));\n+ sTerms[0] = new SignificantStringTerms(10, 20, \"some_name\", DocValueFormat.RAW, 1, 1, heuristic, buckets,\n+ Collections.emptyList(), null);\n sTerms[1] = new SignificantStringTerms();\n }\n return sTerms;\n@@ -184,15 +184,15 @@ private List<InternalAggregation> createInternalAggregations() {\n \n private InternalSignificantTerms createAggregation(String type, SignificanceHeuristic significanceHeuristic, List<InternalSignificantTerms.Bucket> buckets, long subsetSize, long supersetSize) {\n if (type.equals(\"string\")) {\n- return new SignificantStringTerms(subsetSize, supersetSize, \"sig_terms\", 2, -1, significanceHeuristic, buckets, new ArrayList<PipelineAggregator>(), new HashMap<String, Object>());\n+ return new SignificantStringTerms(subsetSize, supersetSize, \"sig_terms\", DocValueFormat.RAW, 2, -1, significanceHeuristic, buckets, new ArrayList<PipelineAggregator>(), new HashMap<String, Object>());\n } else {\n return new SignificantLongTerms(subsetSize, supersetSize, \"sig_terms\", DocValueFormat.RAW, 2, -1, significanceHeuristic, buckets, new ArrayList<PipelineAggregator>(), new HashMap<String, Object>());\n }\n }\n \n private InternalSignificantTerms.Bucket createBucket(String type, long subsetDF, long subsetSize, long supersetDF, long supersetSize, long label) {\n if (type.equals(\"string\")) {\n- return new SignificantStringTerms.Bucket(new BytesRef(Long.toString(label).getBytes(StandardCharsets.UTF_8)), subsetDF, subsetSize, supersetDF, supersetSize, InternalAggregations.EMPTY);\n+ return new SignificantStringTerms.Bucket(new BytesRef(Long.toString(label).getBytes(StandardCharsets.UTF_8)), subsetDF, subsetSize, supersetDF, supersetSize, InternalAggregations.EMPTY, DocValueFormat.RAW);\n } else {\n return new SignificantLongTerms.Bucket(subsetDF, subsetSize, supersetDF, supersetSize, label, InternalAggregations.EMPTY, DocValueFormat.RAW);\n }", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/significant/SignificanceHeuristicTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,110 @@\n+setup:\n+ - do:\n+ indices.create:\n+ index: test_1\n+ body:\n+ settings:\n+ number_of_replicas: 0\n+ mappings:\n+ test:\n+ properties:\n+ str:\n+ type: keyword\n+ ip:\n+ type: ip\n+\n+ - do:\n+ cluster.health:\n+ wait_for_status: green\n+\n+---\n+\"Basic test\":\n+ - do:\n+ index:\n+ index: test_1\n+ type: 
test\n+ id: 1\n+ body: { \"str\" : \"abc\" }\n+\n+ - do:\n+ index:\n+ index: test_1\n+ type: test\n+ id: 2\n+ body: { \"str\": \"abc\" }\n+\n+ - do:\n+ index:\n+ index: test_1\n+ type: test\n+ id: 3\n+ body: { \"str\": \"bcd\" }\n+ \n+ - do:\n+ indices.refresh: {}\n+ \n+ - do:\n+ search:\n+ body: { \"aggs\" : { \"str_terms\" : { \"terms\" : { \"field\" : \"str\" } } } }\n+\n+ - match: { hits.total: 3 }\n+ \n+ - length: { aggregations.str_terms.buckets: 2 }\n+ \n+ - match: { aggregations.str_terms.buckets.0.key: \"abc\" }\n+ \n+ - is_false: aggregations.str_terms.buckets.0.key_as_string\n+ \n+ - match: { aggregations.str_terms.buckets.0.doc_count: 2 }\n+ \n+ - match: { aggregations.str_terms.buckets.1.key: \"bcd\" }\n+ \n+ - is_false: aggregations.str_terms.buckets.1.key_as_string\n+ \n+ - match: { aggregations.str_terms.buckets.1.doc_count: 1 }\n+ \n+---\n+\"IP test\":\n+ - do:\n+ index:\n+ index: test_1\n+ type: test\n+ id: 1\n+ body: { \"ip\": \"::1\" }\n+\n+ - do:\n+ index:\n+ index: test_1\n+ type: test\n+ id: 2\n+ body: { \"ip\": \"127.0.0.1\" }\n+\n+ - do:\n+ index:\n+ index: test_1\n+ type: test\n+ id: 3\n+ body: { \"ip\": \"::1\" }\n+\n+ - do:\n+ indices.refresh: {}\n+\n+ - do:\n+ search:\n+ body: { \"aggs\" : { \"ip_terms\" : { \"terms\" : { \"field\" : \"ip\" } } } }\n+\n+ - match: { hits.total: 3 }\n+\n+ - length: { aggregations.ip_terms.buckets: 2 }\n+\n+ - match: { aggregations.ip_terms.buckets.0.key: \"::1\" }\n+\n+ - is_false: aggregations.ip_terms.buckets.0.key_as_string\n+\n+ - match: { aggregations.ip_terms.buckets.0.doc_count: 2 }\n+\n+ - match: { aggregations.ip_terms.buckets.1.key: \"127.0.0.1\" }\n+ \n+ - is_false: aggregations.ip_terms.buckets.1.key_as_string\n+ \n+ - match: { aggregations.ip_terms.buckets.1.doc_count: 1 }", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/20_terms.yaml", "status": "added" } ] }
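To make the rendering change above concrete, here is a small, hypothetical sketch (not part of the PR) of why the bucket key is now readable: a terms bucket on an `ip` field holds the `InetAddressPoint`-encoded bytes, and `getKeyAsString()` now delegates to the field's `DocValueFormat` instead of writing the raw bytes as utf8.

```java
import org.apache.lucene.document.InetAddressPoint;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.network.InetAddresses;
import org.elasticsearch.search.DocValueFormat;

public class IpTermsKeyDemo {
    public static void main(String[] args) {
        // Encoded form, as stored in the bucket's termBytes.
        BytesRef termBytes = new BytesRef(InetAddressPoint.encode(InetAddresses.forString("192.168.1.7")));
        // Before this change the key was termBytes.utf8ToString(), i.e. binary noise;
        // now the bucket's DocValueFormat renders it.
        String key = DocValueFormat.IP.format(termBytes);
        System.out.println(key); // expected: 192.168.1.7
    }
}
```

The yaml test added in this PR asserts the same behavior for `::1` and `127.0.0.1` bucket keys.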
{ "body": "Since we switched to points to index ip addresses, a couple things are not working anymore:\n- [x] range queries only support inclusive bounds (#17777)\n- [x] range aggregations do not work anymore (#17859)\n- [x] sorting on ip addresses fails since it tries to write binary bytes as an utf8 string when rendering sort values (#17959)\n- [x] sorting and aggregations across old and new indices do not work since the coordinating node gets longs from some shards and binary values from other shards and does not know how to reconcile them (#18593)\n- [x] terms aggregations return binary keys (#18003)\n", "comments": [], "number": 17971, "title": "Not all features work on ip fields" }
{ "body": "The `ip` field uses a binary representation internally. This breaks when\nrendering sort values in search responses since elasticsearch tries to write a\nbinary byte[] as an utf8 json string. This commit extends the `DocValueFormat`\nAPI in order to give fields a chance to choose how to render values.\n\nCloses #6077\n\nRelates to #17971\n", "number": 17959, "review_comments": [ { "body": "Is this really the best place for this? Seems it has nothing to do with actual networking, but just with the ip mapping, so it should probably go there? And does it really need to be public or can it be an impl detail of the mapper?\n", "created_at": "2016-04-25T18:18:35Z" } ], "title": "Allow binary sort values." }
{ "commits": [ { "message": "Allow binary sort values. #17959\n\nThe `ip` field uses a binary representation internally. This breaks when\nrendering sort values in search responses since elasticsearch tries to write a\nbinary byte[] as an utf8 json string. This commit extends the `DocValueFormat`\nAPI in order to give fields a chance to choose how to render values.\n\nCloses #6077" } ], "files": [ { "diff": "@@ -18,7 +18,6 @@\n */\n package org.elasticsearch.index.query;\n \n-import org.apache.lucene.search.Sort;\n import org.elasticsearch.action.support.ToXContentToBytes;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.ParsingException;\n@@ -41,6 +40,7 @@\n import org.elasticsearch.search.fetch.source.FetchSourceContext;\n import org.elasticsearch.search.highlight.HighlightBuilder;\n import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.search.sort.SortAndFormats;\n import org.elasticsearch.search.sort.SortBuilder;\n \n import java.io.IOException;\n@@ -512,7 +512,7 @@ private void setupInnerHitsContext(QueryShardContext context, InnerHitsContext.B\n innerHitsContext.fetchSourceContext(fetchSourceContext);\n }\n if (sorts != null) {\n- Optional<Sort> optionalSort = SortBuilder.buildSort(sorts, context);\n+ Optional<SortAndFormats> optionalSort = SortBuilder.buildSort(sorts, context);\n if (optionalSort.isPresent()) {\n innerHitsContext.sort(optionalSort.get());\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/InnerHitBuilder.java", "status": "modified" }, { "diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.search;\n \n import org.apache.lucene.document.InetAddressPoint;\n-import org.apache.lucene.index.Term;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.geo.GeoHashUtils;\n import org.elasticsearch.common.io.stream.NamedWriteable;\n@@ -29,8 +28,8 @@\n import org.elasticsearch.common.joda.DateMathParser;\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n+import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.network.NetworkAddress;\n-import org.elasticsearch.index.mapper.ip.IpFieldMapper;\n import org.elasticsearch.index.mapper.ip.LegacyIpFieldMapper;\n import org.joda.time.DateTimeZone;\n \n@@ -48,16 +47,33 @@\n /** A formatter for values as returned by the fielddata/doc-values APIs. */\n public interface DocValueFormat extends NamedWriteable {\n \n+ /** Format a long value. This is used by terms and histogram aggregations\n+ * to format keys for fields that use longs as a doc value representation\n+ * such as the {@code long} and {@code date} fields. */\n String format(long value);\n \n+ /** Format a double value. This is used by terms and stats aggregations\n+ * to format keys for fields that use numbers as a doc value representation\n+ * such as the {@code long}, {@code double} or {@code date} fields. */\n String format(double value);\n \n+ /** Format a double value. This is used by terms aggregations to format\n+ * keys for fields that use binary doc value representations such as the\n+ * {@code keyword} and {@code ip} fields. */\n String format(BytesRef value);\n \n+ /** Parse a value that was formatted with {@link #format(long)} back to the\n+ * original long value. */\n long parseLong(String value, boolean roundUp, Callable<Long> now);\n \n+ /** Parse a value that was formatted with {@link #format(double)} back to\n+ * the original double value. 
*/\n double parseDouble(String value, boolean roundUp, Callable<Long> now);\n \n+ /** Parse a value that was formatted with {@link #format(BytesRef)} back\n+ * to the original BytesRef. */\n+ BytesRef parseBytesRef(String value);\n+\n public static final DocValueFormat RAW = new DocValueFormat() {\n \n @Override\n@@ -81,7 +97,7 @@ public String format(double value) {\n \n @Override\n public String format(BytesRef value) {\n- return Term.toString(value);\n+ return value.utf8ToString();\n }\n \n @Override\n@@ -99,6 +115,10 @@ public long parseLong(String value, boolean roundUp, Callable<Long> now) {\n public double parseDouble(String value, boolean roundUp, Callable<Long> now) {\n return Double.parseDouble(value);\n }\n+\n+ public BytesRef parseBytesRef(String value) {\n+ return new BytesRef(value);\n+ }\n };\n \n public static final class DateTime implements DocValueFormat {\n@@ -154,6 +174,11 @@ public long parseLong(String value, boolean roundUp, Callable<Long> now) {\n public double parseDouble(String value, boolean roundUp, Callable<Long> now) {\n return parseLong(value, roundUp, now);\n }\n+\n+ @Override\n+ public BytesRef parseBytesRef(String value) {\n+ throw new UnsupportedOperationException();\n+ }\n }\n \n public static final DocValueFormat GEOHASH = new DocValueFormat() {\n@@ -191,6 +216,11 @@ public long parseLong(String value, boolean roundUp, Callable<Long> now) {\n public double parseDouble(String value, boolean roundUp, Callable<Long> now) {\n throw new UnsupportedOperationException();\n }\n+\n+ @Override\n+ public BytesRef parseBytesRef(String value) {\n+ throw new UnsupportedOperationException();\n+ }\n };\n \n public static final DocValueFormat BOOLEAN = new DocValueFormat() {\n@@ -221,13 +251,24 @@ public String format(BytesRef value) {\n \n @Override\n public long parseLong(String value, boolean roundUp, Callable<Long> now) {\n- throw new UnsupportedOperationException();\n+ switch (value) {\n+ case \"false\":\n+ return 0;\n+ case \"true\":\n+ return 1;\n+ }\n+ throw new IllegalArgumentException(\"Cannot parse boolean [\" + value + \"], expected either [true] or [false]\");\n }\n \n @Override\n public double parseDouble(String value, boolean roundUp, Callable<Long> now) {\n throw new UnsupportedOperationException();\n }\n+\n+ @Override\n+ public BytesRef parseBytesRef(String value) {\n+ throw new UnsupportedOperationException();\n+ }\n };\n \n public static final DocValueFormat IP = new DocValueFormat() {\n@@ -268,6 +309,11 @@ public long parseLong(String value, boolean roundUp, Callable<Long> now) {\n public double parseDouble(String value, boolean roundUp, Callable<Long> now) {\n return parseLong(value, roundUp, now);\n }\n+\n+ @Override\n+ public BytesRef parseBytesRef(String value) {\n+ return new BytesRef(InetAddressPoint.encode(InetAddresses.forString(value)));\n+ }\n };\n \n public static final class Decimal implements DocValueFormat {\n@@ -344,5 +390,9 @@ public double parseDouble(String value, boolean roundUp, Callable<Long> now) {\n return n.doubleValue();\n }\n \n+ @Override\n+ public BytesRef parseBytesRef(String value) {\n+ throw new UnsupportedOperationException();\n+ }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/search/DocValueFormat.java", "status": "modified" }, { "diff": "@@ -104,6 +104,7 @@\n import org.elasticsearch.search.query.ScrollQuerySearchResult;\n import org.elasticsearch.search.rescore.RescoreBuilder;\n import org.elasticsearch.search.searchafter.SearchAfterBuilder;\n+import org.elasticsearch.search.sort.SortAndFormats;\n 
import org.elasticsearch.search.sort.SortBuilder;\n import org.elasticsearch.search.suggest.Suggesters;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -698,7 +699,7 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc\n }\n if (source.sorts() != null) {\n try {\n- Optional<Sort> optionalSort = SortBuilder.buildSort(source.sorts(), context.getQueryShardContext());\n+ Optional<SortAndFormats> optionalSort = SortBuilder.buildSort(source.sorts(), context.getQueryShardContext());\n if (optionalSort.isPresent()) {\n context.sort(optionalSort.get());\n }", "filename": "core/src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -24,7 +24,6 @@\n import org.apache.lucene.search.LeafCollector;\n import org.apache.lucene.search.ScoreDoc;\n import org.apache.lucene.search.Scorer;\n-import org.apache.lucene.search.Sort;\n import org.apache.lucene.search.TopDocs;\n import org.apache.lucene.search.TopDocsCollector;\n import org.apache.lucene.search.TopFieldCollector;\n@@ -45,6 +44,7 @@\n import org.elasticsearch.search.internal.InternalSearchHit;\n import org.elasticsearch.search.internal.InternalSearchHits;\n import org.elasticsearch.search.internal.SubSearchContext;\n+import org.elasticsearch.search.sort.SortAndFormats;\n \n import java.io.IOException;\n import java.util.List;\n@@ -78,9 +78,9 @@ public TopHitsAggregator(FetchPhase fetchPhase, SubSearchContext subSearchContex\n \n @Override\n public boolean needsScores() {\n- Sort sort = subSearchContext.sort();\n+ SortAndFormats sort = subSearchContext.sort();\n if (sort != null) {\n- return sort.needsScores() || subSearchContext.trackScores();\n+ return sort.sort.needsScores() || subSearchContext.trackScores();\n } else {\n // sort by score\n return true;\n@@ -112,12 +112,12 @@ public void setScorer(Scorer scorer) throws IOException {\n public void collect(int docId, long bucket) throws IOException {\n TopDocsAndLeafCollector collectors = topDocsCollectors.get(bucket);\n if (collectors == null) {\n- Sort sort = subSearchContext.sort();\n+ SortAndFormats sort = subSearchContext.sort();\n int topN = subSearchContext.from() + subSearchContext.size();\n // In the QueryPhase we don't need this protection, because it is build into the IndexSearcher,\n // but here we create collectors ourselves and we need prevent OOM because of crazy an offset and size.\n topN = Math.min(topN, subSearchContext.searcher().getIndexReader().maxDoc());\n- TopDocsCollector<?> topLevelCollector = sort != null ? TopFieldCollector.create(sort, topN, true, subSearchContext.trackScores(), subSearchContext.trackScores()) : TopScoreDocCollector.create(topN);\n+ TopDocsCollector<?> topLevelCollector = sort != null ? TopFieldCollector.create(sort.sort, topN, true, subSearchContext.trackScores(), subSearchContext.trackScores()) : TopScoreDocCollector.create(topN);\n collectors = new TopDocsAndLeafCollector(topLevelCollector);\n collectors.leafCollector = collectors.topLevelCollector.getLeafCollector(ctx);\n collectors.leafCollector.setScorer(scorer);\n@@ -137,7 +137,7 @@ public InternalAggregation buildAggregation(long owningBucketOrdinal) {\n } else {\n final TopDocs topDocs = topDocsCollector.topLevelCollector.topDocs();\n \n- subSearchContext.queryResult().topDocs(topDocs);\n+ subSearchContext.queryResult().topDocs(topDocs, subSearchContext.sort() == null ? 
null : subSearchContext.sort().formats);\n int[] docIdsToLoad = new int[topDocs.scoreDocs.length];\n for (int i = 0; i < topDocs.scoreDocs.length; i++) {\n docIdsToLoad[i] = topDocs.scoreDocs[i].doc;\n@@ -153,7 +153,7 @@ public InternalAggregation buildAggregation(long owningBucketOrdinal) {\n searchHitFields.score(scoreDoc.score);\n if (scoreDoc instanceof FieldDoc) {\n FieldDoc fieldDoc = (FieldDoc) scoreDoc;\n- searchHitFields.sortValues(fieldDoc.fields);\n+ searchHitFields.sortValues(fieldDoc.fields, subSearchContext.sort().formats);\n }\n }\n topHits = new InternalTopHits(name, subSearchContext.from(), subSearchContext.size(), topDocs, fetchResult.hits(), pipelineAggregators(),\n@@ -166,7 +166,7 @@ public InternalAggregation buildAggregation(long owningBucketOrdinal) {\n public InternalTopHits buildEmptyAggregation() {\n TopDocs topDocs;\n if (subSearchContext.sort() != null) {\n- topDocs = new TopFieldDocs(0, new FieldDoc[0], subSearchContext.sort().getSort(), Float.NaN);\n+ topDocs = new TopFieldDocs(0, new FieldDoc[0], subSearchContext.sort().sort.getSort(), Float.NaN);\n } else {\n topDocs = Lucene.EMPTY_TOP_DOCS;\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.search.aggregations.metrics.tophits;\n \n-import org.apache.lucene.search.Sort;\n import org.elasticsearch.script.ScriptContext;\n import org.elasticsearch.script.SearchScript;\n import org.elasticsearch.search.aggregations.Aggregator;\n@@ -35,6 +34,7 @@\n import org.elasticsearch.search.fetch.source.FetchSourceContext;\n import org.elasticsearch.search.highlight.HighlightBuilder;\n import org.elasticsearch.search.internal.SubSearchContext;\n+import org.elasticsearch.search.sort.SortAndFormats;\n import org.elasticsearch.search.sort.SortBuilder;\n \n import java.io.IOException;\n@@ -87,7 +87,7 @@ public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBu\n subSearchContext.from(from);\n subSearchContext.size(size);\n if (sorts != null) {\n- Optional<Sort> optionalSort = SortBuilder.buildSort(sorts, subSearchContext.getQueryShardContext());\n+ Optional<SortAndFormats> optionalSort = SortBuilder.buildSort(sorts, subSearchContext.getQueryShardContext());\n if (optionalSort.isPresent()) {\n subSearchContext.sort(optionalSort.get());\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorFactory.java", "status": "modified" }, { "diff": "@@ -362,7 +362,7 @@ public InternalSearchResponse merge(ScoreDoc[] sortedDocs, AtomicArray<? 
extends\n \n if (sorted) {\n FieldDoc fieldDoc = (FieldDoc) shardDoc;\n- searchHit.sortValues(fieldDoc.fields);\n+ searchHit.sortValues(fieldDoc.fields, firstResult.sortValueFormats());\n if (sortScoreIndex != -1) {\n searchHit.score(((Number) fieldDoc.fields[sortScoreIndex]).floatValue());\n }", "filename": "core/src/main/java/org/elasticsearch/search/controller/SearchPhaseController.java", "status": "modified" }, { "diff": "@@ -142,7 +142,7 @@ public TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContex\n TopDocsCollector topDocsCollector;\n if (sort() != null) {\n try {\n- topDocsCollector = TopFieldCollector.create(sort(), topN, true, trackScores(), trackScores());\n+ topDocsCollector = TopFieldCollector.create(sort().sort, topN, true, trackScores(), trackScores());\n } catch (IOException e) {\n throw ExceptionsHelper.convertToElastic(e);\n }\n@@ -317,7 +317,7 @@ public TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContex\n int topN = Math.min(from() + size(), context.searcher().getIndexReader().maxDoc());\n TopDocsCollector topDocsCollector;\n if (sort() != null) {\n- topDocsCollector = TopFieldCollector.create(sort(), topN, true, trackScores(), trackScores());\n+ topDocsCollector = TopFieldCollector.create(sort().sort, topN, true, trackScores(), trackScores());\n } else {\n topDocsCollector = TopScoreDocCollector.create(topN);\n }", "filename": "core/src/main/java/org/elasticsearch/search/fetch/innerhits/InnerHitsContext.java", "status": "modified" }, { "diff": "@@ -73,7 +73,7 @@ public void hitExecute(SearchContext context, HitContext hitContext) {\n } catch (IOException e) {\n throw ExceptionsHelper.convertToElastic(e);\n }\n- innerHits.queryResult().topDocs(topDocs);\n+ innerHits.queryResult().topDocs(topDocs, innerHits.sort() == null ? 
null : innerHits.sort().formats);\n int[] docIdsToLoad = new int[topDocs.scoreDocs.length];\n for (int i = 0; i < topDocs.scoreDocs.length; i++) {\n docIdsToLoad[i] = topDocs.scoreDocs[i].doc;\n@@ -89,7 +89,7 @@ public void hitExecute(SearchContext context, HitContext hitContext) {\n searchHitFields.score(scoreDoc.score);\n if (scoreDoc instanceof FieldDoc) {\n FieldDoc fieldDoc = (FieldDoc) scoreDoc;\n- searchHitFields.sortValues(fieldDoc.fields);\n+ searchHitFields.sortValues(fieldDoc.fields, innerHits.sort().formats);\n }\n }\n results.put(entry.getKey(), fetchResult.hits());", "filename": "core/src/main/java/org/elasticsearch/search/fetch/innerhits/InnerHitsFetchSubPhase.java", "status": "modified" }, { "diff": "@@ -25,7 +25,6 @@\n import org.apache.lucene.search.Collector;\n import org.apache.lucene.search.ConstantScoreQuery;\n import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.Sort;\n import org.apache.lucene.search.FieldDoc;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.Counter;\n@@ -71,6 +70,7 @@\n import org.elasticsearch.search.query.QueryPhaseExecutionException;\n import org.elasticsearch.search.query.QuerySearchResult;\n import org.elasticsearch.search.rescore.RescoreSearchContext;\n+import org.elasticsearch.search.sort.SortAndFormats;\n import org.elasticsearch.search.suggest.SuggestionSearchContext;\n \n import java.io.IOException;\n@@ -114,7 +114,7 @@ public class DefaultSearchContext extends SearchContext {\n private FetchSourceContext fetchSourceContext;\n private int from = -1;\n private int size = -1;\n- private Sort sort;\n+ private SortAndFormats sort;\n private Float minimumScore;\n private boolean trackScores = false; // when sorting, track scores as well...\n private FieldDoc searchAfter;\n@@ -532,13 +532,13 @@ public Float minimumScore() {\n }\n \n @Override\n- public SearchContext sort(Sort sort) {\n+ public SearchContext sort(SortAndFormats sort) {\n this.sort = sort;\n return this;\n }\n \n @Override\n- public Sort sort() {\n+ public SortAndFormats sort() {\n return this.sort;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java", "status": "modified" }, { "diff": "@@ -22,7 +22,6 @@\n import org.apache.lucene.search.Collector;\n import org.apache.lucene.search.FieldDoc;\n import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.Sort;\n import org.apache.lucene.util.Counter;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n@@ -55,6 +54,7 @@\n import org.elasticsearch.search.profile.Profilers;\n import org.elasticsearch.search.query.QuerySearchResult;\n import org.elasticsearch.search.rescore.RescoreSearchContext;\n+import org.elasticsearch.search.sort.SortAndFormats;\n import org.elasticsearch.search.suggest.SuggestionSearchContext;\n \n import java.util.List;\n@@ -306,12 +306,12 @@ public Float minimumScore() {\n }\n \n @Override\n- public SearchContext sort(Sort sort) {\n+ public SearchContext sort(SortAndFormats sort) {\n return in.sort(sort);\n }\n \n @Override\n- public Sort sort() {\n+ public SortAndFormats sort() {\n return in.sort();\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java", "status": "modified" }, { "diff": "@@ -24,7 +24,6 @@\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n-import 
org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.compress.CompressorFactory;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -34,6 +33,7 @@\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentHelper;\n+import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.SearchHitField;\n import org.elasticsearch.search.SearchHits;\n@@ -44,6 +44,7 @@\n \n import java.io.IOException;\n import java.util.ArrayList;\n+import java.util.Arrays;\n import java.util.HashMap;\n import java.util.Iterator;\n import java.util.List;\n@@ -326,21 +327,13 @@ public void highlightFields(Map<String, HighlightField> highlightFields) {\n this.highlightFields = highlightFields;\n }\n \n- public void sortValues(Object[] sortValues) {\n- // LUCENE 4 UPGRADE: There must be a better way\n- // we want to convert to a Text object here, and not BytesRef\n-\n- // Don't write into sortValues! Otherwise the fields in FieldDoc is modified, which may be used in other places. (SearchContext#lastEmitedDoc)\n- Object[] sortValuesCopy = new Object[sortValues.length];\n- System.arraycopy(sortValues, 0, sortValuesCopy, 0, sortValues.length);\n- if (sortValues != null) {\n- for (int i = 0; i < sortValues.length; i++) {\n- if (sortValues[i] instanceof BytesRef) {\n- sortValuesCopy[i] = new Text(new BytesArray((BytesRef) sortValues[i]));\n- }\n+ public void sortValues(Object[] sortValues, DocValueFormat[] sortValueFormats) {\n+ this.sortValues = Arrays.copyOf(sortValues, sortValues.length);\n+ for (int i = 0; i < sortValues.length; ++i) {\n+ if (this.sortValues[i] instanceof BytesRef) {\n+ this.sortValues[i] = sortValueFormats[i].format((BytesRef) sortValues[i]);\n }\n }\n- this.sortValues = sortValuesCopy;\n }\n \n @Override\n@@ -618,8 +611,6 @@ public void readFrom(StreamInput in, InternalSearchHits.StreamContext context) t\n sortValues[i] = in.readShort();\n } else if (type == 8) {\n sortValues[i] = in.readBoolean();\n- } else if (type == 9) {\n- sortValues[i] = in.readText();\n } else {\n throw new IOException(\"Can't match type [\" + type + \"]\");\n }\n@@ -726,9 +717,6 @@ public void writeTo(StreamOutput out, InternalSearchHits.StreamContext context)\n } else if (type == Boolean.class) {\n out.writeByte((byte) 8);\n out.writeBoolean((Boolean) sortValue);\n- } else if (sortValue instanceof Text) {\n- out.writeByte((byte) 9);\n- out.writeText((Text) sortValue);\n } else {\n throw new IOException(\"Can't handle sort field value of type [\" + type + \"]\");\n }", "filename": "core/src/main/java/org/elasticsearch/search/internal/InternalSearchHit.java", "status": "modified" }, { "diff": "@@ -59,6 +59,7 @@\n import org.elasticsearch.search.profile.Profilers;\n import org.elasticsearch.search.query.QuerySearchResult;\n import org.elasticsearch.search.rescore.RescoreSearchContext;\n+import org.elasticsearch.search.sort.SortAndFormats;\n import org.elasticsearch.search.suggest.SuggestionSearchContext;\n \n import java.util.ArrayList;\n@@ -244,9 +245,9 @@ public InnerHitsContext innerHits() {\n \n public abstract Float minimumScore();\n \n- public abstract SearchContext sort(Sort sort);\n+ public abstract SearchContext sort(SortAndFormats sort);\n \n- public abstract Sort sort();\n+ public abstract SortAndFormats sort();\n \n public abstract SearchContext 
trackScores(boolean trackScores);\n ", "filename": "core/src/main/java/org/elasticsearch/search/internal/SearchContext.java", "status": "modified" }, { "diff": "@@ -19,19 +19,18 @@\n package org.elasticsearch.search.internal;\n \n import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.Sort;\n import org.apache.lucene.util.Counter;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.index.query.ParsedQuery;\n import org.elasticsearch.search.aggregations.SearchContextAggregations;\n import org.elasticsearch.search.fetch.FetchSearchResult;\n-import org.elasticsearch.search.fetch.innerhits.InnerHitsContext;\n import org.elasticsearch.search.fetch.script.ScriptFieldsContext;\n import org.elasticsearch.search.fetch.source.FetchSourceContext;\n import org.elasticsearch.search.highlight.SearchContextHighlight;\n import org.elasticsearch.search.lookup.SearchLookup;\n import org.elasticsearch.search.query.QuerySearchResult;\n import org.elasticsearch.search.rescore.RescoreSearchContext;\n+import org.elasticsearch.search.sort.SortAndFormats;\n import org.elasticsearch.search.suggest.SuggestionSearchContext;\n \n import java.util.ArrayList;\n@@ -48,7 +47,7 @@ public class SubSearchContext extends FilteredSearchContext {\n \n private int from;\n private int size = DEFAULT_SIZE;\n- private Sort sort;\n+ private SortAndFormats sort;\n private ParsedQuery parsedQuery;\n private Query query;\n \n@@ -172,13 +171,13 @@ public SearchContext minimumScore(float minimumScore) {\n }\n \n @Override\n- public SearchContext sort(Sort sort) {\n+ public SearchContext sort(SortAndFormats sort) {\n this.sort = sort;\n return this;\n }\n \n @Override\n- public Sort sort() {\n+ public SortAndFormats sort() {\n return sort;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/internal/SubSearchContext.java", "status": "modified" }, { "diff": "@@ -46,6 +46,7 @@\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.lucene.MinimumScoreCollector;\n import org.elasticsearch.common.lucene.search.FilteredCollector;\n+import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.SearchParseElement;\n import org.elasticsearch.search.SearchPhase;\n import org.elasticsearch.search.SearchService;\n@@ -58,6 +59,7 @@\n import org.elasticsearch.search.profile.Profiler;\n import org.elasticsearch.search.rescore.RescorePhase;\n import org.elasticsearch.search.rescore.RescoreSearchContext;\n+import org.elasticsearch.search.sort.SortAndFormats;\n import org.elasticsearch.search.sort.TrackScoresParseElement;\n import org.elasticsearch.search.suggest.SuggestPhase;\n \n@@ -119,7 +121,9 @@ public void execute(SearchContext searchContext) throws QueryPhaseExecutionExcep\n if (searchContext.hasOnlySuggest()) {\n suggestPhase.execute(searchContext);\n // TODO: fix this once we can fetch docs for suggestions\n- searchContext.queryResult().topDocs(new TopDocs(0, Lucene.EMPTY_SCORE_DOCS, 0));\n+ searchContext.queryResult().topDocs(\n+ new TopDocs(0, Lucene.EMPTY_SCORE_DOCS, 0),\n+ new DocValueFormat[0]);\n return;\n }\n // Pre-process aggregations as late as possible. 
In the case of a DFS_Q_T_F\n@@ -141,15 +145,15 @@ public void execute(SearchContext searchContext) throws QueryPhaseExecutionExcep\n }\n }\n \n- private static boolean returnsDocsInOrder(Query query, Sort sort) {\n- if (sort == null || Sort.RELEVANCE.equals(sort)) {\n+ private static boolean returnsDocsInOrder(Query query, SortAndFormats sf) {\n+ if (sf == null || Sort.RELEVANCE.equals(sf.sort)) {\n // sort by score\n // queries that return constant scores will return docs in index\n // order since Lucene tie-breaks on the doc id\n return query.getClass() == ConstantScoreQuery.class\n || query.getClass() == MatchAllDocsQuery.class;\n } else {\n- return Sort.INDEXORDER.equals(sort);\n+ return Sort.INDEXORDER.equals(sf.sort);\n }\n }\n \n@@ -176,6 +180,7 @@ static boolean execute(SearchContext searchContext, final IndexSearcher searcher\n \n Collector collector;\n Callable<TopDocs> topDocsCallable;\n+ DocValueFormat[] sortValueFormats = new DocValueFormat[0];\n \n assert query == searcher.rewrite(query); // already rewritten\n \n@@ -229,8 +234,10 @@ public TopDocs call() throws Exception {\n }\n assert numDocs > 0;\n if (searchContext.sort() != null) {\n- topDocsCollector = TopFieldCollector.create(searchContext.sort(), numDocs,\n+ SortAndFormats sf = searchContext.sort();\n+ topDocsCollector = TopFieldCollector.create(sf.sort, numDocs,\n (FieldDoc) after, true, searchContext.trackScores(), searchContext.trackScores());\n+ sortValueFormats = sf.formats;\n } else {\n rescore = !searchContext.rescore().isEmpty();\n for (RescoreSearchContext rescoreContext : searchContext.rescore()) {\n@@ -402,7 +409,7 @@ public TopDocs call() throws Exception {\n queryResult.terminatedEarly(false);\n }\n \n- queryResult.topDocs(topDocsCallable.call());\n+ queryResult.topDocs(topDocsCallable.call(), sortValueFormats);\n \n if (searchContext.getProfilers() != null) {\n List<ProfileShardResult> shardResults = Profiler.buildShardResults(searchContext.getProfilers().getProfilers());", "filename": "core/src/main/java/org/elasticsearch/search/query/QueryPhase.java", "status": "modified" }, { "diff": "@@ -19,12 +19,14 @@\n \n package org.elasticsearch.search.query;\n \n+import org.apache.lucene.search.FieldDoc;\n import org.apache.lucene.search.TopDocs;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.SearchShardTarget;\n import org.elasticsearch.search.aggregations.Aggregations;\n import org.elasticsearch.search.aggregations.InternalAggregations;\n@@ -51,6 +53,7 @@ public class QuerySearchResult extends QuerySearchResultProvider {\n private int from;\n private int size;\n private TopDocs topDocs;\n+ private DocValueFormat[] sortValueFormats;\n private InternalAggregations aggregations;\n private List<SiblingPipelineAggregator> pipelineAggregators;\n private Suggest suggest;\n@@ -112,8 +115,20 @@ public TopDocs topDocs() {\n return topDocs;\n }\n \n- public void topDocs(TopDocs topDocs) {\n+ public void topDocs(TopDocs topDocs, DocValueFormat[] sortValueFormats) {\n this.topDocs = topDocs;\n+ if (topDocs.scoreDocs.length > 0 && topDocs.scoreDocs[0] instanceof FieldDoc) {\n+ int numFields = ((FieldDoc) topDocs.scoreDocs[0]).fields.length;\n+ if (numFields != sortValueFormats.length) {\n+ throw new IllegalArgumentException(\"The number of 
sort fields does not match: \"\n+ + numFields + \" != \" + sortValueFormats.length);\n+ }\n+ }\n+ this.sortValueFormats = sortValueFormats;\n+ }\n+\n+ public DocValueFormat[] sortValueFormats() {\n+ return sortValueFormats;\n }\n \n public Aggregations aggregations() {\n@@ -192,6 +207,15 @@ public void readFromWithId(long id, StreamInput in) throws IOException {\n // shardTarget = readSearchShardTarget(in);\n from = in.readVInt();\n size = in.readVInt();\n+ int numSortFieldsPlus1 = in.readVInt();\n+ if (numSortFieldsPlus1 == 0) {\n+ sortValueFormats = null;\n+ } else {\n+ sortValueFormats = new DocValueFormat[numSortFieldsPlus1 - 1];\n+ for (int i = 0; i < sortValueFormats.length; ++i) {\n+ sortValueFormats[i] = in.readNamedWriteable(DocValueFormat.class);\n+ }\n+ }\n topDocs = readTopDocs(in);\n if (in.readBoolean()) {\n aggregations = InternalAggregations.readAggregations(in);\n@@ -233,6 +257,14 @@ public void writeToNoId(StreamOutput out) throws IOException {\n // shardTarget.writeTo(out);\n out.writeVInt(from);\n out.writeVInt(size);\n+ if (sortValueFormats == null) {\n+ out.writeVInt(0);\n+ } else {\n+ out.writeVInt(1 + sortValueFormats.length);\n+ for (int i = 0; i < sortValueFormats.length; ++i) {\n+ out.writeNamedWriteable(sortValueFormats[i]);\n+ }\n+ }\n writeTopDocs(out, topDocs);\n if (aggregations == null) {\n out.writeBoolean(false);", "filename": "core/src/main/java/org/elasticsearch/search/query/QuerySearchResult.java", "status": "modified" }, { "diff": "@@ -61,7 +61,7 @@ public void execute(SearchContext context) {\n for (RescoreSearchContext ctx : context.rescore()) {\n topDocs = ctx.rescorer().rescore(topDocs, context, ctx);\n }\n- context.queryResult().topDocs(topDocs);\n+ context.queryResult().topDocs(topDocs, context.queryResult().sortValueFormats());\n } catch (IOException e) {\n throw new ElasticsearchException(\"Rescore Phase Failed\", e);\n }", "filename": "core/src/main/java/org/elasticsearch/search/rescore/RescorePhase.java", "status": "modified" }, { "diff": "@@ -20,9 +20,7 @@\n package org.elasticsearch.search.searchafter;\n \n import org.apache.lucene.search.FieldDoc;\n-import org.apache.lucene.search.Sort;\n import org.apache.lucene.search.SortField;\n-import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.ParseFieldMatcher;\n@@ -36,6 +34,8 @@\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.fielddata.IndexFieldData;\n+import org.elasticsearch.search.DocValueFormat;\n+import org.elasticsearch.search.sort.SortAndFormats;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -104,21 +104,23 @@ public Object[] getSortValues() {\n return Arrays.copyOf(sortValues, sortValues.length);\n }\n \n- public static FieldDoc buildFieldDoc(Sort sort, Object[] values) {\n- if (sort == null || sort.getSort() == null || sort.getSort().length == 0) {\n+ public static FieldDoc buildFieldDoc(SortAndFormats sort, Object[] values) {\n+ if (sort == null || sort.sort.getSort() == null || sort.sort.getSort().length == 0) {\n throw new IllegalArgumentException(\"Sort must contain at least one field.\");\n }\n \n- SortField[] sortFields = sort.getSort();\n+ SortField[] sortFields = sort.sort.getSort();\n if (sortFields.length != values.length) {\n throw new IllegalArgumentException(\n- SEARCH_AFTER.getPreferredName() + \" has \" + values.length + \" value(s) 
but sort has \" + sort.getSort().length + \".\");\n+ SEARCH_AFTER.getPreferredName() + \" has \" + values.length + \" value(s) but sort has \"\n+ + sort.sort.getSort().length + \".\");\n }\n Object[] fieldValues = new Object[sortFields.length];\n for (int i = 0; i < sortFields.length; i++) {\n SortField sortField = sortFields[i];\n+ DocValueFormat format = sort.formats[i];\n if (values[i] != null) {\n- fieldValues[i] = convertValueFromSortField(values[i], sortField);\n+ fieldValues[i] = convertValueFromSortField(values[i], sortField, format);\n } else {\n fieldValues[i] = null;\n }\n@@ -130,15 +132,15 @@ public static FieldDoc buildFieldDoc(Sort sort, Object[] values) {\n return new FieldDoc(Integer.MAX_VALUE, 0, fieldValues);\n }\n \n- private static Object convertValueFromSortField(Object value, SortField sortField) {\n+ private static Object convertValueFromSortField(Object value, SortField sortField, DocValueFormat format) {\n if (sortField.getComparatorSource() instanceof IndexFieldData.XFieldComparatorSource) {\n IndexFieldData.XFieldComparatorSource cmpSource = (IndexFieldData.XFieldComparatorSource) sortField.getComparatorSource();\n- return convertValueFromSortType(sortField.getField(), cmpSource.reducedType(), value);\n+ return convertValueFromSortType(sortField.getField(), cmpSource.reducedType(), value, format);\n }\n- return convertValueFromSortType(sortField.getField(), sortField.getType(), value);\n+ return convertValueFromSortType(sortField.getField(), sortField.getType(), value, format);\n }\n \n- private static Object convertValueFromSortType(String fieldName, SortField.Type sortType, Object value) {\n+ private static Object convertValueFromSortType(String fieldName, SortField.Type sortType, Object value, DocValueFormat format) {\n try {\n switch (sortType) {\n case DOC:\n@@ -179,7 +181,7 @@ private static Object convertValueFromSortType(String fieldName, SortField.Type\n \n case STRING_VAL:\n case STRING:\n- return new BytesRef(value.toString());\n+ return format.parseBytesRef(value.toString());\n \n default:\n throw new IllegalArgumentException(\"Comparator type [\" + sortType.name() + \"] for field [\" + fieldName", "filename": "core/src/main/java/org/elasticsearch/search/searchafter/SearchAfterBuilder.java", "status": "modified" }, { "diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.query.QueryShardException;\n+import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.MultiValueMode;\n \n import java.io.IOException;\n@@ -55,8 +56,10 @@ public class FieldSortBuilder extends SortBuilder<FieldSortBuilder> {\n * special field name to sort by index order\n */\n public static final String DOC_FIELD_NAME = \"_doc\";\n- private static final SortField SORT_DOC = new SortField(null, SortField.Type.DOC);\n- private static final SortField SORT_DOC_REVERSE = new SortField(null, SortField.Type.DOC, true);\n+ private static final SortFieldAndFormat SORT_DOC = new SortFieldAndFormat(\n+ new SortField(null, SortField.Type.DOC), DocValueFormat.RAW);\n+ private static final SortFieldAndFormat SORT_DOC_REVERSE = new SortFieldAndFormat(\n+ new SortField(null, SortField.Type.DOC, true), DocValueFormat.RAW);\n \n private final String fieldName;\n \n@@ -246,7 +249,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n }\n \n @Override\n- public SortField build(QueryShardContext context) throws IOException {\n+ 
public SortFieldAndFormat build(QueryShardContext context) throws IOException {\n if (DOC_FIELD_NAME.equals(fieldName)) {\n if (order == SortOrder.DESC) {\n return SORT_DOC_REVERSE;\n@@ -281,7 +284,8 @@ public SortField build(QueryShardContext context) throws IOException {\n }\n IndexFieldData.XFieldComparatorSource fieldComparatorSource = fieldData\n .comparatorSource(missing, localSortMode, nested);\n- return new SortField(fieldType.name(), fieldComparatorSource, reverse);\n+ SortField field = new SortField(fieldType.name(), fieldComparatorSource, reverse);\n+ return new SortFieldAndFormat(field, fieldType.docValueFormat(null, null));\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/sort/FieldSortBuilder.java", "status": "modified" }, { "diff": "@@ -51,6 +51,7 @@\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.index.query.QueryShardContext;\n+import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.MultiValueMode;\n \n import java.io.IOException;\n@@ -504,7 +505,7 @@ public static GeoDistanceSortBuilder fromXContent(QueryParseContext context, Str\n }\n \n @Override\n- public SortField build(QueryShardContext context) throws IOException {\n+ public SortFieldAndFormat build(QueryShardContext context) throws IOException {\n final boolean indexCreatedBeforeV2_0 = context.indexVersionCreated().before(Version.V_2_0_0);\n // validation was not available prior to 2.x, so to support bwc percolation queries we only ignore_malformed on 2.x created indexes\n List<GeoPoint> localPoints = new ArrayList<GeoPoint>();\n@@ -585,7 +586,7 @@ protected NumericDocValues getNumericDocValues(LeafReaderContext context, String\n \n };\n \n- return new SortField(fieldName, geoDistanceComparatorSource, reverse);\n+ return new SortFieldAndFormat(new SortField(fieldName, geoDistanceComparatorSource, reverse), DocValueFormat.RAW);\n }\n \n static void parseGeoPoints(XContentParser parser, List<GeoPoint> geoPoints) throws IOException {", "filename": "core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java", "status": "modified" }, { "diff": "@@ -29,6 +29,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.index.query.QueryShardContext;\n+import org.elasticsearch.search.DocValueFormat;\n \n import java.io.IOException;\n import java.util.Objects;\n@@ -40,8 +41,10 @@ public class ScoreSortBuilder extends SortBuilder<ScoreSortBuilder> {\n \n public static final String NAME = \"_score\";\n public static final ParseField ORDER_FIELD = new ParseField(\"order\");\n- private static final SortField SORT_SCORE = new SortField(null, SortField.Type.SCORE);\n- private static final SortField SORT_SCORE_REVERSE = new SortField(null, SortField.Type.SCORE, true);\n+ private static final SortFieldAndFormat SORT_SCORE = new SortFieldAndFormat(\n+ new SortField(null, SortField.Type.SCORE), DocValueFormat.RAW);\n+ private static final SortFieldAndFormat SORT_SCORE_REVERSE = new SortFieldAndFormat(\n+ new SortField(null, SortField.Type.SCORE, true), DocValueFormat.RAW);\n \n /**\n * Build a ScoreSortBuilder default to descending sort order.\n@@ -106,7 +109,7 @@ public static ScoreSortBuilder fromXContent(QueryParseContext context, String fi\n }\n \n @Override\n- public SortField build(QueryShardContext context) {\n+ public SortFieldAndFormat build(QueryShardContext context) {\n if (order == 
SortOrder.DESC) {\n return SORT_SCORE;\n } else {", "filename": "core/src/main/java/org/elasticsearch/search/sort/ScoreSortBuilder.java", "status": "modified" }, { "diff": "@@ -52,6 +52,7 @@\n import org.elasticsearch.script.ScriptParameterParser;\n import org.elasticsearch.script.ScriptParameterParser.ScriptParameterValue;\n import org.elasticsearch.script.SearchScript;\n+import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.MultiValueMode;\n \n import java.io.IOException;\n@@ -302,7 +303,7 @@ public static ScriptSortBuilder fromXContent(QueryParseContext context, String e\n \n \n @Override\n- public SortField build(QueryShardContext context) throws IOException {\n+ public SortFieldAndFormat build(QueryShardContext context) throws IOException {\n final SearchScript searchScript = context.getScriptService().search(\n context.lookup(), script, ScriptContext.Standard.SEARCH, Collections.emptyMap(), context.getClusterState());\n \n@@ -366,7 +367,7 @@ protected void setScorer(Scorer scorer) {\n throw new QueryShardException(context, \"custom script sort type [\" + type + \"] not supported\");\n }\n \n- return new SortField(\"_script\", fieldComparatorSource, reverse);\n+ return new SortFieldAndFormat(new SortField(\"_script\", fieldComparatorSource, reverse), DocValueFormat.RAW);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/sort/ScriptSortBuilder.java", "status": "modified" }, { "diff": "@@ -0,0 +1,38 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.search.sort;\n+\n+import org.apache.lucene.search.Sort;\n+import org.elasticsearch.search.DocValueFormat;\n+\n+public final class SortAndFormats {\n+\n+ public final Sort sort;\n+ public final DocValueFormat[] formats;\n+\n+ public SortAndFormats(Sort sort, DocValueFormat[] formats) {\n+ if (sort.getSort().length != formats.length) {\n+ throw new IllegalArgumentException(\"Number of sort field mismatch: \"\n+ + sort.getSort().length + \" != \" + formats.length);\n+ }\n+ this.sort = sort;\n+ this.formats = formats;\n+ }\n+\n+}", "filename": "core/src/main/java/org/elasticsearch/search/sort/SortAndFormats.java", "status": "added" }, { "diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.query.QueryShardException;\n+import org.elasticsearch.search.DocValueFormat;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -65,9 +66,9 @@ public abstract class SortBuilder<T extends SortBuilder<T>> extends ToXContentTo\n }\n \n /**\n- * Create a @link {@link SortField} from this builder.\n+ * Create a @link {@link SortFieldAndFormat} from this builder.\n */\n- protected abstract SortField build(QueryShardContext context) throws IOException;\n+ protected abstract SortFieldAndFormat build(QueryShardContext context) throws IOException;\n \n /**\n * Set the order of sorting.\n@@ -143,10 +144,13 @@ private static void parseCompoundSortField(QueryParseContext context, List<SortB\n }\n }\n \n- public static Optional<Sort> buildSort(List<SortBuilder<?>> sortBuilders, QueryShardContext context) throws IOException {\n+ public static Optional<SortAndFormats> buildSort(List<SortBuilder<?>> sortBuilders, QueryShardContext context) throws IOException {\n List<SortField> sortFields = new ArrayList<>(sortBuilders.size());\n+ List<DocValueFormat> sortFormats = new ArrayList<>(sortBuilders.size());\n for (SortBuilder<?> builder : sortBuilders) {\n- sortFields.add(builder.build(context));\n+ SortFieldAndFormat sf = builder.build(context);\n+ sortFields.add(sf.field);\n+ sortFormats.add(sf.format);\n }\n if (!sortFields.isEmpty()) {\n // optimize if we just sort on score non reversed, we don't really\n@@ -163,7 +167,9 @@ public static Optional<Sort> buildSort(List<SortBuilder<?>> sortBuilders, QueryS\n }\n }\n if (sort) {\n- return Optional.of(new Sort(sortFields.toArray(new SortField[sortFields.size()])));\n+ return Optional.of(new SortAndFormats(\n+ new Sort(sortFields.toArray(new SortField[sortFields.size()])),\n+ sortFormats.toArray(new DocValueFormat[sortFormats.size()])));\n }\n }\n return Optional.empty();", "filename": "core/src/main/java/org/elasticsearch/search/sort/SortBuilder.java", "status": "modified" }, { "diff": "@@ -0,0 +1,36 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.search.sort;\n+\n+import java.util.Objects;\n+\n+import org.apache.lucene.search.SortField;\n+import org.elasticsearch.search.DocValueFormat;\n+\n+public final class SortFieldAndFormat {\n+\n+ public final SortField field;\n+ public final DocValueFormat format;\n+\n+ public SortFieldAndFormat(SortField field, DocValueFormat format) {\n+ this.field = Objects.requireNonNull(field);\n+ this.format = Objects.requireNonNull(format);\n+ }\n+\n+}", "filename": "core/src/main/java/org/elasticsearch/search/sort/SortFieldAndFormat.java", "status": "added" }, { "diff": "@@ -100,4 +100,5 @@ private InetAddress forgeScoped(String hostname, String address, int scopeid) th\n byte bytes[] = InetAddress.getByName(address).getAddress();\n return Inet6Address.getByAddress(hostname, bytes, scopeid);\n }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/common/network/NetworkAddressTests.java", "status": "modified" }, { "diff": "@@ -140,8 +140,8 @@ protected void doAssertLuceneQuery(HasChildQueryBuilder queryBuilder, Query quer\n InnerHitsContext.BaseInnerHits innerHits =\n searchContext.innerHits().getInnerHits().get(queryBuilder.innerHit().getName());\n assertEquals(innerHits.size(), queryBuilder.innerHit().getSize());\n- assertEquals(innerHits.sort().getSort().length, 1);\n- assertEquals(innerHits.sort().getSort()[0].getField(), STRING_FIELD_NAME_2);\n+ assertEquals(innerHits.sort().sort.getSort().length, 1);\n+ assertEquals(innerHits.sort().sort.getSort()[0].getField(), STRING_FIELD_NAME_2);\n } else {\n assertThat(searchContext.innerHits().getInnerHits().size(), equalTo(0));\n }", "filename": "core/src/test/java/org/elasticsearch/index/query/HasChildQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -121,8 +121,8 @@ protected void doAssertLuceneQuery(HasParentQueryBuilder queryBuilder, Query que\n InnerHitsContext.BaseInnerHits innerHits = searchContext.innerHits()\n .getInnerHits().get(queryBuilder.innerHit().getName());\n assertEquals(innerHits.size(), queryBuilder.innerHit().getSize());\n- assertEquals(innerHits.sort().getSort().length, 1);\n- assertEquals(innerHits.sort().getSort()[0].getField(), STRING_FIELD_NAME_2);\n+ assertEquals(innerHits.sort().sort.getSort().length, 1);\n+ assertEquals(innerHits.sort().sort.getSort()[0].getField(), STRING_FIELD_NAME_2);\n } else {\n assertThat(searchContext.innerHits().getInnerHits().size(), equalTo(0));\n }", "filename": "core/src/test/java/org/elasticsearch/index/query/HasParentQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -105,8 +105,8 @@ protected void doAssertLuceneQuery(NestedQueryBuilder queryBuilder, Query query,\n assertTrue(searchContext.innerHits().getInnerHits().containsKey(queryBuilder.innerHit().getName()));\n InnerHitsContext.BaseInnerHits innerHits = searchContext.innerHits().getInnerHits().get(queryBuilder.innerHit().getName());\n assertEquals(innerHits.size(), 
queryBuilder.innerHit().getSize());\n- assertEquals(innerHits.sort().getSort().length, 1);\n- assertEquals(innerHits.sort().getSort()[0].getField(), INT_FIELD_NAME);\n+ assertEquals(innerHits.sort().sort.getSort().length, 1);\n+ assertEquals(innerHits.sort().sort.getSort()[0].getField(), INT_FIELD_NAME);\n } else {\n assertThat(searchContext.innerHits().getInnerHits().size(), equalTo(0));\n }", "filename": "core/src/test/java/org/elasticsearch/index/query/NestedQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -19,11 +19,14 @@\n \n package org.elasticsearch.search;\n \n+import org.apache.lucene.document.InetAddressPoint;\n+import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput;\n import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.joda.Joda;\n+import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.test.ESTestCase;\n import org.joda.time.DateTimeZone;\n \n@@ -76,4 +79,65 @@ public void testSerialization() throws Exception {\n assertSame(DocValueFormat.RAW, in.readNamedWriteable(DocValueFormat.class));\n }\n \n+ public void testRawFormat() {\n+ assertEquals(\"0\", DocValueFormat.RAW.format(0));\n+ assertEquals(\"-1\", DocValueFormat.RAW.format(-1));\n+ assertEquals(\"1\", DocValueFormat.RAW.format(1));\n+\n+ assertEquals(\"0.0\", DocValueFormat.RAW.format(0d));\n+ assertEquals(\"0.5\", DocValueFormat.RAW.format(.5d));\n+ assertEquals(\"-1.0\", DocValueFormat.RAW.format(-1d));\n+\n+ assertEquals(\"abc\", DocValueFormat.RAW.format(new BytesRef(\"abc\")));\n+ }\n+\n+ public void testBooleanFormat() {\n+ assertEquals(\"false\", DocValueFormat.BOOLEAN.format(0));\n+ assertEquals(\"true\", DocValueFormat.BOOLEAN.format(1));\n+ }\n+\n+ public void testIpFormat() {\n+ assertEquals(\"192.168.1.7\",\n+ DocValueFormat.IP.format(new BytesRef(InetAddressPoint.encode(InetAddresses.forString(\"192.168.1.7\")))));\n+ assertEquals(\"::1\",\n+ DocValueFormat.IP.format(new BytesRef(InetAddressPoint.encode(InetAddresses.forString(\"::1\")))));\n+ }\n+\n+ public void testRawParse() {\n+ assertEquals(-1L, DocValueFormat.RAW.parseLong(\"-1\", randomBoolean(), null));\n+ assertEquals(1L, DocValueFormat.RAW.parseLong(\"1\", randomBoolean(), null));\n+ // not checking exception messages as they could depend on the JVM\n+ expectThrows(IllegalArgumentException.class, () -> DocValueFormat.RAW.parseLong(\"\", randomBoolean(), null));\n+ expectThrows(IllegalArgumentException.class, () -> DocValueFormat.RAW.parseLong(\"abc\", randomBoolean(), null));\n+\n+ assertEquals(-1d, DocValueFormat.RAW.parseDouble(\"-1\", randomBoolean(), null), 0d);\n+ assertEquals(1d, DocValueFormat.RAW.parseDouble(\"1\", randomBoolean(), null), 0d);\n+ assertEquals(.5, DocValueFormat.RAW.parseDouble(\"0.5\", randomBoolean(), null), 0d);\n+ // not checking exception messages as they could depend on the JVM\n+ expectThrows(IllegalArgumentException.class, () -> DocValueFormat.RAW.parseLong(\"\", randomBoolean(), null));\n+ expectThrows(IllegalArgumentException.class, () -> DocValueFormat.RAW.parseLong(\"abc\", randomBoolean(), null));\n+\n+ assertEquals(new BytesRef(\"abc\"), DocValueFormat.RAW.parseBytesRef(\"abc\"));\n+ }\n+\n+ public void testBooleanParse() {\n+ assertEquals(0L, DocValueFormat.BOOLEAN.parseLong(\"false\", randomBoolean(), null));\n+ assertEquals(1L, 
DocValueFormat.BOOLEAN.parseLong(\"true\", randomBoolean(), null));\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n+ () -> DocValueFormat.BOOLEAN.parseLong(\"\", randomBoolean(), null));\n+ assertEquals(\"Cannot parse boolean [], expected either [true] or [false]\", e.getMessage());\n+ e = expectThrows(IllegalArgumentException.class,\n+ () -> DocValueFormat.BOOLEAN.parseLong(\"0\", randomBoolean(), null));\n+ assertEquals(\"Cannot parse boolean [0], expected either [true] or [false]\", e.getMessage());\n+ e = expectThrows(IllegalArgumentException.class,\n+ () -> DocValueFormat.BOOLEAN.parseLong(\"False\", randomBoolean(), null));\n+ assertEquals(\"Cannot parse boolean [False], expected either [true] or [false]\", e.getMessage());\n+ }\n+\n+ public void testIPParse() {\n+ assertEquals(new BytesRef(InetAddressPoint.encode(InetAddresses.forString(\"192.168.1.7\"))),\n+ DocValueFormat.IP.parseBytesRef(\"192.168.1.7\"));\n+ assertEquals(new BytesRef(InetAddressPoint.encode(InetAddresses.forString(\"::1\"))),\n+ DocValueFormat.IP.parseBytesRef(\"::1\"));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/DocValueFormatTests.java", "status": "modified" }, { "diff": "@@ -25,7 +25,6 @@\n import org.elasticsearch.action.search.SearchRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.UUIDs;\n-import org.elasticsearch.common.text.Text;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.search.SearchContextException;\n import org.elasticsearch.search.SearchHit;\n@@ -189,11 +188,11 @@ public void testWithSimpleTypes() throws Exception {\n values.add(randomDouble());\n break;\n case 6:\n- values.add(new Text(randomAsciiOfLengthBetween(5, 20)));\n+ values.add(randomAsciiOfLengthBetween(5, 20));\n break;\n }\n }\n- values.add(new Text(UUIDs.randomBase64UUID()));\n+ values.add(UUIDs.randomBase64UUID());\n documents.add(values);\n }\n int reqSize = randomInt(NUM_DOCS-1);\n@@ -296,7 +295,7 @@ private void createIndexMappingsFromObjectType(String indexName, String typeName\n } else if (type == Boolean.class) {\n mappings.add(\"field\" + Integer.toString(i));\n mappings.add(\"type=boolean\");\n- } else if (types.get(i) instanceof Text) {\n+ } else if (types.get(i) instanceof String) {\n mappings.add(\"field\" + Integer.toString(i));\n mappings.add(\"type=keyword\");\n } else {", "filename": "core/src/test/java/org/elasticsearch/search/searchafter/SearchAfterIT.java", "status": "modified" } ] }
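To make the intent of the `SortAndFormats` / `DocValueFormat` plumbing in the diffs above concrete, here is a minimal, hypothetical usage sketch. The class names, the public `sort`/`formats` fields, and `DocValueFormat.RAW.format(BytesRef)` are taken from the diff itself; the field name and the standalone wrapper class are illustrative assumptions, not code from the PR.

```java
// Sketch only: pairs each SortField with a DocValueFormat, as the PR's SortAndFormats does,
// and shows how a raw BytesRef sort value is rendered back into a readable value.
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.search.DocValueFormat;
import org.elasticsearch.search.sort.SortAndFormats;

public class SortFormatSketch {
    public static void main(String[] args) {
        // Hypothetical keyword sort field; any BytesRef-producing sort would work the same way.
        SortField byKeyword = new SortField("user.keyword", SortField.Type.STRING);

        // One format per sort field; the SortAndFormats constructor enforces equal lengths.
        SortAndFormats sortAndFormats = new SortAndFormats(
            new Sort(byKeyword), new DocValueFormat[] { DocValueFormat.RAW });

        // At fetch time, the matching format converts the BytesRef sort value for the response;
        // DocValueFormat.RAW simply turns the bytes back into their UTF-8 string.
        BytesRef rawValue = new BytesRef("kimchy");
        Object rendered = sortAndFormats.formats[0].format(rawValue); // "kimchy"
        System.out.println(rendered);
    }
}
```

The same `DocValueFormat[]` travels with the shard result in `QuerySearchResult` and is used again by `SearchAfterBuilder#buildFieldDoc` to parse string sort values back into `BytesRef`, which appears to be why the old `Text`-based (type 9) serialization branches could be removed.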
{ "body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue.\n-->\n\n<!--\nIf you are filing a bug report, please remove the below feature\nrequest block and provide responses for all of the below items.\n-->\n\n**Elasticsearch version**: 2.3.3\n\n**JVM version**: 1.8.0_91\n\n**OS version**: Ubuntu 14.04\n\n**Description of the problem including expected versus actual behavior**:\n\nAt first, I'm not sure if I misread the docs and this is not expected behavior, in which case, please look at this as an issue instead. Arrays of IPs don't seem to be able to be made. Whenever one pre-defines the mapping, and tries to create one, a type mismatch happens.\n\n**Steps to reproduce**:\nI'm following the ones someone suggested I follow on [the discussion board](https://discuss.elastic.co/t/how-to-store-array-of-ip/51461/2). Credit to msimos.\n1. I created a dummy index following the steps above. My dummy index is called \"com\".\n\n```\ncurl -XPUT 'http://localhost:9200/com' -d '{\n \"mappings\": {\n \"type\": {\n \"properties\": {\n \"ips\": {\n \"type\": \"ip\"\n }\n }\n }\n }\n}'\n```\n1. I then indexed the sample ip array into the \"com\" index in the \"anytype\" type with id \"1\".\n\n```\ncurl -XPOST 'http://localhost:9200/com/anytype/1' -d '{\n \"ips\": [\"123.123.123.123\", \"10.0.0.1\"]\n}'\n```\n1. This gave me a type mismatch error:\n\n```\n{\n \"error\": {\n \"root_cause\": [{\n \"type\": \"mapper_parsing_exception\",\n \"reason\": \"failed to parse\"\n }],\n \"type\": \"mapper_parsing_exception\",\n \"reason\": \"failed to parse\",\n \"caused_by\": {\n \"type\": \"illegal_state_exception\",\n \"reason\": \"Mixing up field types: class org.elasticsearch.index.mapper.core.LongFieldMapper$LongFieldType != class org.elasticsearch.index.mapper.ip.IpFieldMapper$IpFieldType on field ips\"\n }\n },\n \"status\": 400\n}\n```\n", "comments": [ { "body": "Note, this is fixed in master\n", "created_at": "2016-06-07T13:54:15Z" }, { "body": "@clintongormley Thanks for the reply! How can I upgrade to master? I understand its not an official release but I was wondering what would be the cleanest way of upgrading to that version. I could really use not having that type mismatch. Thanks again!\n", "created_at": "2016-06-10T21:44:52Z" }, { "body": "OK I found the bug, this is unrelated to the use of arrays and is indeed fixed in master. What is happening here is that `ips` is already defined in `type` but not in `anytype`. For that cases, we have some logic that tries to borrow the mapping definitions from existing types, but it only works for types that are supported in templates currently (numbers, date and string), so it fails with ip addresses.\n\nI will open a PR soon, but you can work around the problem by defining your ip field in the `_default_` mapping so that it will be used by all types.\n", "created_at": "2016-06-13T08:25:49Z" }, { "body": "Actually this was fixed by #17882 so I just backported it.\n", "created_at": "2016-06-13T08:41:59Z" }, { "body": "This will be addressed in the upcoming 2.4.0 release.\n", "created_at": "2016-06-13T08:46:39Z" }, { "body": "@jpountz Thanks for the update! 
How do I go about defining the ip field in the _default_ mapping?\n", "created_at": "2016-06-13T15:10:07Z" }, { "body": "Is it just a PUT request with:?\n\n{\n \"_default_\": {\n \"_all\": {\n \"type\": \"ip\"\n }\n }\n}\n", "created_at": "2016-06-13T15:12:17Z" }, { "body": "That would be something like this:\n\n```\ncurl -XPUT 'http://localhost:9200/com' -d '{\n \"mappings\": {\n \"_default_\": {\n \"properties\": {\n \"ips\": {\n \"type\": \"ip\"\n }\n }\n }\n }\n}'\n```\n", "created_at": "2016-06-13T15:12:35Z" }, { "body": "thank you very much!\n", "created_at": "2016-06-13T16:21:12Z" }, { "body": "<opening new issue>\n", "created_at": "2016-06-28T12:30:19Z" } ], "number": 18740, "title": "Creating array of IPs" }
{ "body": "Boolean fields were not handled in `DocumentParser.createBuilderFromFieldType`.\nThis also improves the logic a bit by falling back to the default mapping of\nthe given type insteah of hard-coding every case and throws a better exception\nthan a NPE if no dynamic mappings could be created.\n\nCloses #17879\nCloses #18740\n", "number": 17882, "review_comments": [], "title": "Fix cross type mapping updates for `boolean` fields." }
{ "commits": [ { "message": "Fix cross type mapping updates for `boolean` fields. #17882\n\nBoolean fields were not handled in `DocumentParser.createBuilderFromFieldType`.\nThis also improves the logic a bit by falling back to the default mapping of\nthe given type insteah of hard-coding every case and throws a better exception\nthan a NPE if no dynamic mappings could be created.\n\nCloses #17879" } ], "files": [ { "diff": "@@ -22,6 +22,7 @@\n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Collections;\n+import java.util.HashMap;\n import java.util.Iterator;\n import java.util.List;\n \n@@ -593,9 +594,6 @@ private static Mapper.Builder<?,?> createBuilderFromFieldType(final ParseContext\n Mapper.Builder builder = null;\n if (fieldType instanceof StringFieldType) {\n builder = context.root().findTemplateBuilder(context, currentFieldName, \"string\", \"string\");\n- if (builder == null) {\n- builder = new StringFieldMapper.Builder(currentFieldName);\n- }\n } else if (fieldType instanceof TextFieldType) {\n builder = context.root().findTemplateBuilder(context, currentFieldName, \"text\", \"string\");\n if (builder == null) {\n@@ -604,45 +602,39 @@ private static Mapper.Builder<?,?> createBuilderFromFieldType(final ParseContext\n }\n } else if (fieldType instanceof KeywordFieldType) {\n builder = context.root().findTemplateBuilder(context, currentFieldName, \"keyword\", \"string\");\n- if (builder == null) {\n- builder = new KeywordFieldMapper.Builder(currentFieldName);\n- }\n } else {\n switch (fieldType.typeName()) {\n- case \"date\":\n+ case DateFieldMapper.CONTENT_TYPE:\n builder = context.root().findTemplateBuilder(context, currentFieldName, \"date\");\n- if (builder == null) {\n- builder = newDateBuilder(currentFieldName, null, Version.indexCreated(context.indexSettings()));\n- }\n break;\n case \"long\":\n builder = context.root().findTemplateBuilder(context, currentFieldName, \"long\");\n- if (builder == null) {\n- builder = newLongBuilder(currentFieldName, Version.indexCreated(context.indexSettings()));\n- }\n break;\n case \"double\":\n builder = context.root().findTemplateBuilder(context, currentFieldName, \"double\");\n- if (builder == null) {\n- builder = newDoubleBuilder(currentFieldName, Version.indexCreated(context.indexSettings()));\n- }\n break;\n case \"integer\":\n builder = context.root().findTemplateBuilder(context, currentFieldName, \"integer\");\n- if (builder == null) {\n- builder = newIntBuilder(currentFieldName, Version.indexCreated(context.indexSettings()));\n- }\n break;\n case \"float\":\n builder = context.root().findTemplateBuilder(context, currentFieldName, \"float\");\n- if (builder == null) {\n- builder = newFloatBuilder(currentFieldName, Version.indexCreated(context.indexSettings()));\n- }\n+ break;\n+ case BooleanFieldMapper.CONTENT_TYPE:\n+ builder = context.root().findTemplateBuilder(context, currentFieldName, \"boolean\");\n break;\n default:\n break;\n }\n }\n+ if (builder == null) {\n+ Mapper.TypeParser.ParserContext parserContext = context.docMapperParser().parserContext(currentFieldName);\n+ Mapper.TypeParser typeParser = parserContext.typeParser(fieldType.typeName());\n+ if (typeParser == null) {\n+ throw new MapperParsingException(\"Cannot generate dynamic mappings of type [\" + fieldType.typeName()\n+ + \"] for [\" + currentFieldName + \"]\");\n+ }\n+ builder = typeParser.parse(currentFieldName, new HashMap<>(), parserContext);\n+ }\n return builder;\n }\n \n@@ -654,22 +646,6 @@ private static Mapper.Builder<?,?> 
createBuilderFromFieldType(final ParseContext\n }\n }\n \n- private static Mapper.Builder<?, ?> newIntBuilder(String name, Version indexCreated) {\n- if (indexCreated.onOrAfter(Version.V_5_0_0)) {\n- return new NumberFieldMapper.Builder(name, NumberFieldMapper.NumberType.INTEGER);\n- } else {\n- return new LegacyIntegerFieldMapper.Builder(name);\n- }\n- }\n-\n- private static Mapper.Builder<?, ?> newDoubleBuilder(String name, Version indexCreated) {\n- if (indexCreated.onOrAfter(Version.V_5_0_0)) {\n- return new NumberFieldMapper.Builder(name, NumberFieldMapper.NumberType.DOUBLE);\n- } else {\n- return new LegacyDoubleFieldMapper.Builder(name);\n- }\n- }\n-\n private static Mapper.Builder<?, ?> newFloatBuilder(String name, Version indexCreated) {\n if (indexCreated.onOrAfter(Version.V_5_0_0)) {\n return new NumberFieldMapper.Builder(name, NumberFieldMapper.NumberType.FLOAT);", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java", "status": "modified" }, { "diff": "@@ -31,6 +31,8 @@\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.mapper.core.BooleanFieldMapper;\n+import org.elasticsearch.index.mapper.core.BooleanFieldMapper.BooleanFieldType;\n import org.elasticsearch.index.mapper.core.DateFieldMapper;\n import org.elasticsearch.index.mapper.core.DateFieldMapper.DateFieldType;\n import org.elasticsearch.index.mapper.core.NumberFieldMapper;\n@@ -429,7 +431,8 @@ public void testReuseExistingMappings() throws IOException, Exception {\n \"my_field3\", \"type=long,doc_values=false\",\n \"my_field4\", \"type=float,index=false\",\n \"my_field5\", \"type=double,store=true\",\n- \"my_field6\", \"type=date,doc_values=false\");\n+ \"my_field6\", \"type=date,doc_values=false\",\n+ \"my_field7\", \"type=boolean,doc_values=false\");\n \n // Even if the dynamic type of our new field is long, we already have a mapping for the same field\n // of type string so it should be mapped as a string\n@@ -442,13 +445,15 @@ public void testReuseExistingMappings() throws IOException, Exception {\n .field(\"my_field4\", 45)\n .field(\"my_field5\", 46)\n .field(\"my_field6\", 47)\n+ .field(\"my_field7\", true)\n .endObject());\n Mapper myField1Mapper = null;\n Mapper myField2Mapper = null;\n Mapper myField3Mapper = null;\n Mapper myField4Mapper = null;\n Mapper myField5Mapper = null;\n Mapper myField6Mapper = null;\n+ Mapper myField7Mapper = null;\n for (Mapper m : update) {\n switch (m.name()) {\n case \"my_field1\":\n@@ -469,6 +474,9 @@ public void testReuseExistingMappings() throws IOException, Exception {\n case \"my_field6\":\n myField6Mapper = m;\n break;\n+ case \"my_field7\":\n+ myField7Mapper = m;\n+ break;\n }\n }\n assertNotNull(myField1Mapper);\n@@ -502,6 +510,10 @@ public void testReuseExistingMappings() throws IOException, Exception {\n assertTrue(myField6Mapper instanceof DateFieldMapper);\n assertFalse(((DateFieldType) ((DateFieldMapper) myField6Mapper).fieldType()).hasDocValues());\n \n+ assertNotNull(myField7Mapper);\n+ assertTrue(myField7Mapper instanceof BooleanFieldMapper);\n+ assertFalse(((BooleanFieldType) ((BooleanFieldMapper) myField7Mapper).fieldType()).hasDocValues());\n+\n // This can't work\n try {\n parse(newMapper, indexService.mapperService().documentMapperParser(),", "filename": "core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingTests.java", "status": "modified" } ] }
{ "body": "The two write methods are defined like this:\n\n```\n public void writeBlob(String blobName, InputStream inputStream, long blobSize) throws IOException {\n final Path file = path.resolve(blobName);\n try (OutputStream outputStream = Files.newOutputStream(file)) {\n ...\n```\n\nPassing nothing to Files.newOutputStream will truncate the output file if its already there:\n\n```\nIf no options are present then this method works as if the CREATE, TRUNCATE_EXISTING, and WRITE options are present. In other words, it opens the file for writing, creating the file if it doesn't exist, or initially truncating an existing regular-file to a size of 0 if it exists. \n```\n\nCan we please pass `StandardOpenOptions.CREATE_NEW` so that silent data truncation never happens? \n", "comments": [ { "body": "> Can we please pass StandardOpenOptions.CREATE_NEW so that silent data truncation never happens?\n\n+100\n", "created_at": "2015-12-21T20:13:24Z" } ], "number": 15579, "title": "FSBlobContainer leniently truncates" }
{ "body": "This commit contains the following:\n1. Clarifies the behavior that must be adhered to by any implementors\n of the BlobContainer interface. This is done through expanded Javadocs.\n2. BlobContainer#writeBlob cannot overwrite an already existing blob.\n It will now throw an exception if trying to write to a pre-existing\n file.\n\nCloses #15579\nCloses #15580\n", "number": 17878, "review_comments": [], "title": "Fix the semantics for the BlobContainer interface" }
{ "commits": [ { "message": "Fix the semantics for the BlobContainer interface\n\nThis commit contains the following:\n 1. Clarifies the behavior that must be adhered to by any implementors\nof the BlobContainer interface. This is done through expanded Javadocs.\n 2. BlobContainer#writeBlob cannot overwrite an already existing blob.\nIt will now throw an exception if trying to write to a pre-existing\nfile.\n\nCloses #15579\nCloses #15580" } ], "files": [ { "diff": "@@ -27,60 +27,127 @@\n import java.util.Map;\n \n /**\n- *\n+ * An interface for managing a repository of blob entries, where each blob entry is just a named group of bytes.\n */\n public interface BlobContainer {\n \n+ /**\n+ * Gets the {@link BlobPath} that defines the implementation specific paths to where the blobs are contained.\n+ *\n+ * @return the BlobPath where the blobs are contained\n+ */\n BlobPath path();\n \n+ /**\n+ * Tests whether a blob with the given blob name exists in the container.\n+ *\n+ * @param blobName\n+ * The name of the blob whose existence is to be determined.\n+ * @return {@code true} if a blob exists in the {@link BlobContainer} with the given name, and {@code false} otherwise.\n+ */\n boolean blobExists(String blobName);\n \n /**\n- * Creates a new InputStream for the given blob name\n+ * Creates a new {@link InputStream} for the given blob name.\n+ *\n+ * @param blobName\n+ * The name of the blob to get an {@link InputStream} for.\n+ * @return The {@code InputStream} to read the blob.\n+ * @throws IOException if the blob does not exist or can not be read.\n */\n InputStream readBlob(String blobName) throws IOException;\n \n /**\n- * Reads blob content from the input stream and writes it to the blob store\n+ * Reads blob content from the input stream and writes it to the container in a new blob with the given name.\n+ * This method assumes the container does not already contain a blob of the same blobName. If a blob by the\n+ * same name already exists, the operation will fail and an {@link IOException} will be thrown.\n+ *\n+ * @param blobName\n+ * The name of the blob to write the contents of the input stream to.\n+ * @param inputStream\n+ * The input stream from which to retrieve the bytes to write to the blob.\n+ * @throws IOException if the input stream could not be read, a blob by the same name already exists,\n+ * or the target blob could not be written to.\n */\n- void writeBlob(String blobName, InputStream inputStream, long blobSize) throws IOException;\n+ void writeBlob(String blobName, InputStream inputStream) throws IOException;\n \n /**\n- * Writes bytes to the blob\n+ * Writes the input bytes to a new blob in the container with the given name. This method assumes the\n+ * container does not already contain a blob of the same blobName. If a blob by the same name already\n+ * exists, the operation will fail and an {@link IOException} will be thrown.\n+ *\n+ * @param blobName\n+ * The name of the blob to write the contents of the input stream to.\n+ * @param bytes\n+ * The bytes to write to the blob.\n+ * @throws IOException if a blob by the same name already exists, or the target blob could not be written to.\n */\n void writeBlob(String blobName, BytesReference bytes) throws IOException;\n \n /**\n- * Deletes a blob with giving name.\n+ * Deletes a blob with giving name, if the blob exists. 
If the blob does not exist, this method has no affect.\n *\n- * If a blob exists but cannot be deleted an exception has to be thrown.\n+ * @param blobName\n+ * The name of the blob to delete.\n+ * @throws IOException if the blob exists but could not be deleted.\n */\n void deleteBlob(String blobName) throws IOException;\n \n /**\n- * Deletes blobs with giving names.\n+ * Deletes blobs with the given names. If any subset of the names do not exist in the container, this method has no\n+ * affect for those names, and will delete the blobs for those names that do exist. If any of the blobs failed\n+ * to delete, those blobs that were processed before it and successfully deleted will remain deleted. An exception\n+ * is thrown at the first blob entry that fails to delete (TODO: is this the right behavior? Should we collect\n+ * all the failed deletes into a single IOException instead?)\n *\n- * If a blob exists but cannot be deleted an exception has to be thrown.\n+ * @param blobNames\n+ * The collection of blob names to delete from the container.\n+ * @throws IOException if any of the blobs in the collection exists but could not be deleted.\n */\n void deleteBlobs(Collection<String> blobNames) throws IOException;\n \n /**\n- * Deletes all blobs in the container that match the specified prefix.\n+ * Deletes all blobs in the container that match the specified prefix. If any of the blobs failed to delete,\n+ * those blobs that were processed before it and successfully deleted will remain deleted. An exception is\n+ * thrown at the first blob entry that fails to delete (TODO: is this the right behavior? Should we collect\n+ * all the failed deletes into a single IOException instead?)\n+ *\n+ * @param blobNamePrefix\n+ * The prefix to match against blob names in the container. Any blob whose name has the prefix will be deleted.\n+ * @throws IOException if any of the matching blobs failed to delete.\n */\n void deleteBlobsByPrefix(String blobNamePrefix) throws IOException;\n \n /**\n- * Lists all blobs in the container\n+ * Lists all blobs in the container.\n+ *\n+ * @return A map of all the blobs in the container. The keys in the map are the names of the blobs and\n+ * the values are {@link BlobMetaData}, containing basic information about each blob.\n+ * @throws IOException if there were any failures in reading from the blob container.\n */\n Map<String, BlobMetaData> listBlobs() throws IOException;\n \n /**\n- * Lists all blobs in the container that match specified prefix\n+ * Lists all blobs in the container that match the specified prefix.\n+ *\n+ * @param blobNamePrefix\n+ * The prefix to match against blob names in the container.\n+ * @return A map of the matching blobs in the container. The keys in the map are the names of the blobs\n+ * and the values are {@link BlobMetaData}, containing basic information about each blob.\n+ * @throws IOException if there were any failures in reading from the blob container.\n */\n Map<String, BlobMetaData> listBlobsByPrefix(String blobNamePrefix) throws IOException;\n \n /**\n- * Atomically renames source blob into target blob\n+ * Atomically renames the source blob into the target blob. 
If the source blob does not exist or the\n+ * target blob already exists, an exception is thrown.\n+ *\n+ * @param sourceBlobName\n+ * The blob to rename.\n+ * @param targetBlobName\n+ * The name of the blob after the renaming.\n+ * @throws IOException if the source blob does not exist, the target blob already exists,\n+ * or there were any failures in reading from the blob container.\n */\n void move(String sourceBlobName, String targetBlobName) throws IOException;\n }", "filename": "core/src/main/java/org/elasticsearch/common/blobstore/BlobContainer.java", "status": "modified" }, { "diff": "@@ -20,11 +20,17 @@\n package org.elasticsearch.common.blobstore;\n \n /**\n- *\n+ * An interface for providing basic metadata about a blob.\n */\n public interface BlobMetaData {\n \n+ /**\n+ * Gets the name of the blob.\n+ */\n String name();\n \n+ /**\n+ * Gets the size of the blob in bytes.\n+ */\n long length();\n }", "filename": "core/src/main/java/org/elasticsearch/common/blobstore/BlobMetaData.java", "status": "modified" }, { "diff": "@@ -19,14 +19,13 @@\n \n package org.elasticsearch.common.blobstore;\n \n-\n import java.util.ArrayList;\n import java.util.Collections;\n import java.util.Iterator;\n import java.util.List;\n \n /**\n- *\n+ * The list of paths where a blob can reside. The contents of the paths are dependent upon the implementation of {@link BlobContainer}.\n */\n public class BlobPath implements Iterable<String> {\n ", "filename": "core/src/main/java/org/elasticsearch/common/blobstore/BlobPath.java", "status": "modified" }, { "diff": "@@ -22,12 +22,18 @@\n import java.io.IOException;\n \n /**\n- *\n+ * An interface for storing blobs.\n */\n public interface BlobStore extends Closeable {\n \n+ /**\n+ * Get a blob container instance for storing blobs at the given {@link BlobPath}.\n+ */\n BlobContainer blobContainer(BlobPath path);\n \n+ /**\n+ * Delete the blob store at the given {@link BlobPath}.\n+ */\n void delete(BlobPath path) throws IOException;\n \n }", "filename": "core/src/main/java/org/elasticsearch/common/blobstore/BlobStore.java", "status": "modified" }, { "diff": "@@ -31,17 +31,24 @@\n import java.io.InputStream;\n import java.io.OutputStream;\n import java.nio.file.DirectoryStream;\n+import java.nio.file.FileAlreadyExistsException;\n import java.nio.file.Files;\n import java.nio.file.Path;\n import java.nio.file.StandardCopyOption;\n+import java.nio.file.StandardOpenOption;\n import java.nio.file.attribute.BasicFileAttributes;\n import java.util.HashMap;\n import java.util.Map;\n \n import static java.util.Collections.unmodifiableMap;\n \n /**\n+ * A file system based implementation of {@link org.elasticsearch.common.blobstore.BlobContainer}.\n+ * All blobs in the container are stored on a file system, the location of which is specified by the {@link BlobPath}.\n *\n+ * Note that the methods in this implementation of {@link org.elasticsearch.common.blobstore.BlobContainer} may\n+ * additionally throw a {@link java.lang.SecurityException} if the configured {@link java.lang.SecurityManager}\n+ * does not permit read and/or write access to the underlying files.\n */\n public class FsBlobContainer extends AbstractBlobContainer {\n \n@@ -94,10 +101,10 @@ public InputStream readBlob(String name) throws IOException {\n }\n \n @Override\n- public void writeBlob(String blobName, InputStream inputStream, long blobSize) throws IOException {\n+ public void writeBlob(String blobName, InputStream inputStream) throws IOException {\n final Path file = path.resolve(blobName);\n- // 
TODO: why is this not specifying CREATE_NEW? Do we really need to be able to truncate existing files?\n- try (OutputStream outputStream = Files.newOutputStream(file)) {\n+ assert blobExists(blobName) == false; // if we want to write a blob to a pre-existing file, it should be deleted first\n+ try (OutputStream outputStream = Files.newOutputStream(file, StandardOpenOption.CREATE_NEW)) {\n Streams.copy(inputStream, outputStream, new byte[blobStore.bufferSizeInBytes()]);\n }\n IOUtils.fsync(file, false);\n@@ -109,8 +116,11 @@ public void move(String source, String target) throws IOException {\n Path sourcePath = path.resolve(source);\n Path targetPath = path.resolve(target);\n // If the target file exists then Files.move() behaviour is implementation specific\n- // the existing file might be replaced or this method fails by throwing an IOException.\n- assert !Files.exists(targetPath);\n+ // the existing file might be replaced or this method fails by throwing an IOException,\n+ // so we explicitly check here.\n+ if (Files.exists(targetPath)) {\n+ throw new FileAlreadyExistsException(targetPath.toAbsolutePath().toString());\n+ }\n Files.move(sourcePath, targetPath, StandardCopyOption.ATOMIC_MOVE);\n IOUtils.fsync(path, true);\n }", "filename": "core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobContainer.java", "status": "modified" }, { "diff": "@@ -30,7 +30,7 @@\n import java.util.Map;\n \n /**\n- *\n+ * A base abstract blob container that implements higher level container methods.\n */\n public abstract class AbstractBlobContainer implements BlobContainer {\n \n@@ -55,15 +55,15 @@ public void deleteBlobsByPrefix(final String blobNamePrefix) throws IOException\n \n @Override\n public void deleteBlobs(Collection<String> blobNames) throws IOException {\n- for(String blob: blobNames) {\n+ for (String blob: blobNames) {\n deleteBlob(blob);\n }\n }\n- \n+\n @Override\n public void writeBlob(String blobName, BytesReference bytes) throws IOException {\n try (InputStream stream = bytes.streamInput()) {\n- writeBlob(blobName, stream, bytes.length());\n+ writeBlob(blobName, stream);\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/common/blobstore/support/AbstractBlobContainer.java", "status": "modified" }, { "diff": "@@ -104,7 +104,7 @@ public InputStream readBlob(String name) throws IOException {\n }\n \n @Override\n- public void writeBlob(String blobName, InputStream inputStream, long blobSize) throws IOException {\n+ public void writeBlob(String blobName, InputStream inputStream) throws IOException {\n throw new UnsupportedOperationException(\"URL repository doesn't support this operation\");\n }\n ", "filename": "core/src/main/java/org/elasticsearch/common/blobstore/url/URLBlobContainer.java", "status": "modified" }, { "diff": "@@ -649,7 +649,7 @@ private void snapshotFile(final BlobStoreIndexShardSnapshot.FileInfo fileInfo) t\n final InputStreamIndexInput inputStreamIndexInput = new InputStreamIndexInput(indexInput, partBytes);\n InputStream inputStream = snapshotRateLimiter == null ? 
inputStreamIndexInput : new RateLimitingInputStream(inputStreamIndexInput, snapshotRateLimiter, snapshotThrottleListener);\n inputStream = new AbortableInputStream(inputStream, fileInfo.physicalName());\n- blobContainer.writeBlob(fileInfo.partName(i), inputStream, partBytes);\n+ blobContainer.writeBlob(fileInfo.partName(i), inputStream);\n }\n Store.verify(indexInput);\n snapshotStatus.addProcessedFile(fileInfo.length());", "filename": "core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardRepository.java", "status": "modified" }, { "diff": "@@ -283,6 +283,7 @@ protected void randomCorruption(BlobContainer blobContainer, String blobName) th\n int location = randomIntBetween(0, buffer.length - 1);\n buffer[location] = (byte) (buffer[location] ^ 42);\n } while (originalChecksum == checksum(buffer));\n+ blobContainer.deleteBlob(blobName); // delete original before writing new blob\n blobContainer.writeBlob(blobName, new BytesArray(buffer));\n }\n ", "filename": "core/src/test/java/org/elasticsearch/snapshots/BlobStoreFormatIT.java", "status": "modified" }, { "diff": "@@ -54,8 +54,8 @@ public InputStream readBlob(String name) throws IOException {\n }\n \n @Override\n- public void writeBlob(String blobName, InputStream inputStream, long blobSize) throws IOException {\n- delegate.writeBlob(blobName, inputStream, blobSize);\n+ public void writeBlob(String blobName, InputStream inputStream) throws IOException {\n+ delegate.writeBlob(blobName, inputStream);\n }\n \n @Override", "filename": "core/src/test/java/org/elasticsearch/snapshots/mockstore/BlobContainerWrapper.java", "status": "modified" }, { "diff": "@@ -341,9 +341,9 @@ public void writeBlob(String blobName, BytesReference bytes) throws IOException\n }\n \n @Override\n- public void writeBlob(String blobName, InputStream inputStream, long blobSize) throws IOException {\n+ public void writeBlob(String blobName, InputStream inputStream) throws IOException {\n maybeIOExceptionOrBlock(blobName);\n- super.writeBlob(blobName, inputStream, blobSize);\n+ super.writeBlob(blobName, inputStream);\n }\n }\n }", "filename": "core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepository.java", "status": "modified" }, { "diff": "@@ -111,5 +111,24 @@ public void testMoveAndList() throws IOException {\n }\n }\n \n+ public void testOverwriteFails() throws IOException {\n+ try (final BlobStore store = newBlobStore()) {\n+ final String blobName = \"foobar\";\n+ final BlobContainer container = store.blobContainer(new BlobPath());\n+ byte[] data = randomBytes(randomIntBetween(10, scaledRandomIntBetween(1024, 1 << 16)));\n+ final BytesArray bytesArray = new BytesArray(data);\n+ container.writeBlob(blobName, bytesArray);\n+ // should not be able to write to the same blob again\n+ try {\n+ container.writeBlob(blobName, bytesArray);\n+ fail(\"Cannot overwrite existing blob\");\n+ } catch (AssertionError e) {\n+ // we want to come here\n+ }\n+ container.deleteBlob(blobName);\n+ container.writeBlob(blobName, bytesArray); // deleted it, so should be able to write it again\n+ }\n+ }\n+\n protected abstract BlobStore newBlobStore() throws IOException;\n }", "filename": "core/src/test/java/org/elasticsearch/test/ESBlobStoreContainerTestCase.java", "status": "modified" }, { "diff": "@@ -85,7 +85,7 @@ public InputStream readBlob(String blobName) throws IOException {\n }\n \n @Override\n- public void writeBlob(String blobName, InputStream inputStream, long blobSize) throws IOException {\n+ public void writeBlob(String blobName, 
InputStream inputStream) throws IOException {\n try (OutputStream stream = createOutput(blobName)) {\n Streams.copy(inputStream, stream);\n }", "filename": "plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/AzureBlobContainer.java", "status": "modified" }, { "diff": "@@ -103,7 +103,7 @@ public InputStream run(FileContext fileContext) throws IOException {\n }\n \n @Override\n- public void writeBlob(String blobName, InputStream inputStream, long blobSize) throws IOException {\n+ public void writeBlob(String blobName, InputStream inputStream) throws IOException {\n store.execute(new Operation<Void>() {\n @Override\n public Void run(FileContext fileContext) throws IOException {\n@@ -154,4 +154,4 @@ public boolean accept(Path path) {\n public Map<String, BlobMetaData> listBlobs() throws IOException {\n return listBlobsByPrefix(null);\n }\n-}\n\\ No newline at end of file\n+}", "filename": "plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobContainer.java", "status": "modified" }, { "diff": "@@ -97,7 +97,7 @@ public InputStream readBlob(String blobName) throws IOException {\n }\n \n @Override\n- public void writeBlob(String blobName, InputStream inputStream, long blobSize) throws IOException {\n+ public void writeBlob(String blobName, InputStream inputStream) throws IOException {\n try (OutputStream stream = createOutput(blobName)) {\n Streams.copy(inputStream, stream);\n }", "filename": "plugins/repository-s3/src/main/java/org/elasticsearch/cloud/aws/blobstore/S3BlobContainer.java", "status": "modified" } ] }
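The `FsBlobContainer` hunk above leans on a JDK detail worth spelling out: `Files.newOutputStream` with no options creates the file or truncates an existing one, while `StandardOpenOption.CREATE_NEW` fails if the target already exists, so an overwrite can no longer happen silently. A minimal, self-contained sketch of that difference; the temp directory and blob name are invented for illustration and are not taken from the PR:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class CreateNewDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("blobs");   // stand-in for a repository path
        Path blob = dir.resolve("some-blob");            // hypothetical blob name

        // First write succeeds: CREATE_NEW atomically creates the file.
        try (OutputStream out = Files.newOutputStream(blob, StandardOpenOption.CREATE_NEW)) {
            out.write("first".getBytes(StandardCharsets.UTF_8));
        }

        // Second write fails instead of silently truncating the existing blob,
        // which is the guarantee the FsBlobContainer change is after.
        try (OutputStream out = Files.newOutputStream(blob, StandardOpenOption.CREATE_NEW)) {
            out.write("second".getBytes(StandardCharsets.UTF_8));
        } catch (FileAlreadyExistsException e) {
            System.out.println("refused to overwrite: " + e.getFile());
        }
    }
}
```

On a plain filesystem the second write prints the refusal instead of truncating the blob, which is the same "no overwrites" property the new `testOverwriteFails` test exercises through the blob container API.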
{ "body": "I am not yet a hundred percent sure if this is a bug or a feature... After yesterdays mapping refactoring there is a discrepancy in the mapping being returned between master and alpha1\n\nTo reproduce simply run the below snippet against both versions\n\n``` bash\nport=9200\ncurl -X DELETE localhost:$port/_template/my-template\ncurl -X PUT localhost:$port/_template/my-template -d '{\n \"template\": \"my-index\",\n \"mappings\": {\n \"my-type\": {\n \"dynamic_templates\": [\n {\n \"disabled_payload_fields\": {\n \"path_match\": \"transform(\\\\..+)*\\\\.payload\",\n \"match_pattern\": \"regex\",\n \"mapping\": {\n \"type\": \"object\",\n \"enabled\": false\n }\n }\n }\n ],\n \"dynamic\": false,\n \"properties\": {\n \"transform\" : {\n \"type\" : \"object\",\n \"dynamic\": true\n }\n }\n }\n }\n}'\n\ncurl -X DELETE localhost:$port/my-index\ncurl -X PUT localhost:$port/my-index/my-type/1 -d '{\n \"name\" : \"foo\",\n \"transform\" : {\n \"foo\" : {\n \"status\" : \"the-state\" ,\n \"payload\" : { \"_value\" : \"the payload\" }\n }\n }\n}'\n\ncurl localhost:$port/my-index/my-type/_mapping?pretty\n```\n\nOn master the last mapping call returns\n\n``` json\n{\n \"my-index\" : {\n \"mappings\" : {\n \"my-type\" : {\n \"dynamic\" : \"false\",\n \"dynamic_templates\" : [ {\n \"disabled_payload_fields\" : {\n \"path_match\" : \"transform(\\\\..+)*\\\\.payload\",\n \"match_pattern\" : \"regex\",\n \"mapping\" : {\n \"enabled\" : false,\n \"type\" : \"object\"\n }\n }\n } ],\n \"properties\" : {\n \"transform\" : {\n \"dynamic\" : \"true\",\n \"properties\" : {\n \"foo\" : {\n \"type\" : \"object\"\n }\n }\n }\n }\n }\n }\n }\n}\n```\n\nwhere as on alpha1 this is returned (which looks correct to me, as the mapping is supposed to be dynamic)\n\n``` json\n{\n \"my-index\" : {\n \"mappings\" : {\n \"my-type\" : {\n \"dynamic\" : \"false\",\n \"dynamic_templates\" : [ {\n \"disabled_payload_fields\" : {\n \"path_match\" : \"transform(\\\\..+)*\\\\.payload\",\n \"match_pattern\" : \"regex\",\n \"mapping\" : {\n \"enabled\" : false,\n \"type\" : \"object\"\n }\n }\n } ],\n \"properties\" : {\n \"transform\" : {\n \"dynamic\" : \"true\",\n \"properties\" : {\n \"foo\" : {\n \"dynamic\" : \"true\",\n \"properties\" : {\n \"payload\" : {\n \"type\" : \"object\",\n \"enabled\" : false\n },\n \"status\" : {\n \"type\" : \"text\",\n \"fields\" : {\n \"keyword\" : {\n \"type\" : \"keyword\",\n \"ignore_above\" : 256\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n}\n```\n", "comments": [ { "body": "I think this is related to #17759 cc @rjernst.\n", "created_at": "2016-04-19T13:29:59Z" }, { "body": "Is it that, or https://github.com/elastic/elasticsearch/issues/17644 ?\n", "created_at": "2016-04-19T17:02:35Z" }, { "body": "I think it is both. In #17759 I removed a single line of code which would have been making this \"work\" for this specific use case, but would have been broken in others. So it is correct that the root problem is #17644.\n\nI'm working on a change to rework how dynamic is looked up while parsing. \n", "created_at": "2016-04-19T17:13:55Z" } ], "number": 17854, "title": "Mappings in master yield different results compared to alpha1" }
{ "body": "This change fixes the lookup during document parsing of whether an\nobject field is dynamic to handle looking up through parent object\nmappers, and also handle when a parent object mapper was created during\ndocument parsing.\n\ncloses #17854 \ncloses #17644\n", "number": 17864, "review_comments": [], "title": "Fix dynamic check to properly handle parents" }
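The resolution order implemented by the new `dynamicOrDefault` in the `DocumentParser` diff below is easier to digest in isolation: use the object's own `dynamic` setting if present, otherwise walk up the dotted parent path, otherwise fall back to the root setting, and finally default to `true`. The sketch below re-implements only that lookup against a plain map; the class, method, and variable names are made up, and it deliberately leaves out the PR's extra case where a parent mapper missing from the map (because it was created dynamically during parsing) short-circuits to `true`:

```java
import java.util.HashMap;
import java.util.Map;

public class DynamicResolution {

    /**
     * Resolves the effective "dynamic" setting for the object at {@code path}.
     * {@code explicit} maps full object paths (e.g. "transform") to an explicitly
     * configured value; absent keys mean "not set". The lookup walks up the dotted
     * parent chain, then falls back to the root-level setting, then to true.
     */
    static boolean dynamicOrDefault(String path, Map<String, Boolean> explicit, Boolean rootDynamic) {
        String current = path;
        while (true) {
            Boolean dynamic = explicit.get(current);
            if (dynamic != null) {
                return dynamic;
            }
            int lastDot = current.lastIndexOf('.');
            if (lastDot == -1) {
                break; // reached a top-level object, delegate to the root default
            }
            current = current.substring(0, lastDot);
        }
        return rootDynamic == null ? true : rootDynamic;
    }

    public static void main(String[] args) {
        Map<String, Boolean> explicit = new HashMap<>();
        explicit.put("transform", true);   // "dynamic": true on the object
        Boolean rootDynamic = false;       // "dynamic": false on the type

        // A grandchild with nothing set inherits from "transform", not from the root.
        System.out.println(dynamicOrDefault("transform.foo.payload", explicit, rootDynamic)); // true
        // A field outside "transform" falls back to the root setting.
        System.out.println(dynamicOrDefault("name", explicit, rootDynamic));                  // false
    }
}
```

With this example data the grandchild under the dynamic `transform` object resolves to `true` while a top-level field falls back to the root's `false`, matching the behaviour the linked issues ask for.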
{ "commits": [ { "message": "Mappings: Fix dynamic check to properly handle parents\n\nThis change fixes the lookup during document parsing of whether an\nobject field is dynamic to handle looking up through parent object\nmappers, and also handle when a parent object mapper was created during\ndocument parsing.\n\ncloses #17854" } ], "files": [ { "diff": "@@ -480,7 +480,7 @@ private static ObjectMapper parseObject(final ParseContext context, ObjectMapper\n if (objectMapper != null) {\n parseObjectOrField(context, objectMapper);\n } else {\n- ObjectMapper.Dynamic dynamic = dynamicOrDefault(mapper, context.root().dynamic());\n+ ObjectMapper.Dynamic dynamic = dynamicOrDefault(mapper, context);\n if (dynamic == ObjectMapper.Dynamic.STRICT) {\n throw new StrictDynamicMappingException(mapper.fullPath(), currentFieldName);\n } else if (dynamic == ObjectMapper.Dynamic.TRUE) {\n@@ -519,7 +519,7 @@ private static void parseArray(ParseContext context, ObjectMapper parentMapper,\n }\n } else {\n \n- ObjectMapper.Dynamic dynamic = dynamicOrDefault(parentMapper, context.root().dynamic());\n+ ObjectMapper.Dynamic dynamic = dynamicOrDefault(parentMapper, context);\n if (dynamic == ObjectMapper.Dynamic.STRICT) {\n throw new StrictDynamicMappingException(parentMapper.fullPath(), arrayFieldName);\n } else if (dynamic == ObjectMapper.Dynamic.TRUE) {\n@@ -794,7 +794,7 @@ private static Mapper.Builder<?,?> createBuilderFromDynamicValue(final ParseCont\n }\n \n private static void parseDynamicValue(final ParseContext context, ObjectMapper parentMapper, String currentFieldName, XContentParser.Token token) throws IOException {\n- ObjectMapper.Dynamic dynamic = dynamicOrDefault(parentMapper, context.root().dynamic());\n+ ObjectMapper.Dynamic dynamic = dynamicOrDefault(parentMapper, context);\n if (dynamic == ObjectMapper.Dynamic.STRICT) {\n throw new StrictDynamicMappingException(parentMapper.fullPath(), currentFieldName);\n }\n@@ -867,7 +867,7 @@ private static void parseCopy(String field, ParseContext context) throws IOExcep\n mapper = context.docMapper().objectMappers().get(context.path().pathAsText(paths[i]));\n if (mapper == null) {\n // One mapping is missing, check if we are allowed to create a dynamic one.\n- ObjectMapper.Dynamic dynamic = dynamicOrDefault(parent, context.root().dynamic());\n+ ObjectMapper.Dynamic dynamic = dynamicOrDefault(parent, context);\n \n switch (dynamic) {\n case STRICT:\n@@ -899,10 +899,26 @@ private static void parseCopy(String field, ParseContext context) throws IOExcep\n }\n }\n \n- private static ObjectMapper.Dynamic dynamicOrDefault(ObjectMapper parentMapper, ObjectMapper.Dynamic dynamicDefault) {\n+ // find what the dynamic setting is given the current parse context and parent\n+ private static ObjectMapper.Dynamic dynamicOrDefault(ObjectMapper parentMapper, ParseContext context) {\n ObjectMapper.Dynamic dynamic = parentMapper.dynamic();\n+ while (dynamic == null) {\n+ int lastDotNdx = parentMapper.name().lastIndexOf('.');\n+ if (lastDotNdx == -1) {\n+ // no dot means we the parent is the root, so just delegate to the default outside the loop\n+ break;\n+ }\n+ String parentName = parentMapper.name().substring(0, lastDotNdx);\n+ parentMapper = context.docMapper().objectMappers().get(parentName);\n+ if (parentMapper == null) {\n+ // If parentMapper is ever null, it means the parent of the current mapper was dynamically created.\n+ // But in order to be created dynamically, the dynamic setting of that parent was necessarily true\n+ return ObjectMapper.Dynamic.TRUE;\n+ }\n+ 
dynamic = parentMapper.dynamic();\n+ }\n if (dynamic == null) {\n- return dynamicDefault == null ? ObjectMapper.Dynamic.TRUE : dynamicDefault;\n+ return context.root().dynamic() == null ? ObjectMapper.Dynamic.TRUE : context.root().dynamic();\n }\n return dynamic;\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java", "status": "modified" }, { "diff": "@@ -98,6 +98,65 @@ public void testDotsWithExistingMapper() throws Exception {\n assertEquals(\"789\", values[2]);\n }\n \n+ public void testPropagateDynamicWithExistingMapper() throws Exception {\n+ DocumentMapperParser mapperParser = createIndex(\"test\").mapperService().documentMapperParser();\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .field(\"dynamic\", false)\n+ .startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"object\")\n+ .field(\"dynamic\", true)\n+ .startObject(\"properties\")\n+ .endObject().endObject().endObject().endObject().string();\n+ DocumentMapper mapper = mapperParser.parse(\"type\", new CompressedXContent(mapping));\n+ BytesReference bytes = XContentFactory.jsonBuilder()\n+ .startObject().startObject(\"foo\")\n+ .field(\"bar\", \"something\")\n+ .endObject().endObject().bytes();\n+ ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", bytes);\n+ assertNotNull(doc.dynamicMappingsUpdate());\n+ assertNotNull(doc.rootDoc().getField(\"foo.bar\"));\n+ }\n+\n+ public void testPropagateDynamicWithDynamicMapper() throws Exception {\n+ DocumentMapperParser mapperParser = createIndex(\"test\").mapperService().documentMapperParser();\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .field(\"dynamic\", false)\n+ .startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"object\")\n+ .field(\"dynamic\", true)\n+ .startObject(\"properties\")\n+ .endObject().endObject().endObject().endObject().string();\n+ DocumentMapper mapper = mapperParser.parse(\"type\", new CompressedXContent(mapping));\n+ BytesReference bytes = XContentFactory.jsonBuilder()\n+ .startObject().startObject(\"foo\").startObject(\"bar\")\n+ .field(\"baz\", \"something\")\n+ .endObject().endObject().endObject().bytes();\n+ ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", bytes);\n+ assertNotNull(doc.dynamicMappingsUpdate());\n+ assertNotNull(doc.rootDoc().getField(\"foo.bar.baz\"));\n+ }\n+\n+ public void testDynamicRootFallback() throws Exception {\n+ DocumentMapperParser mapperParser = createIndex(\"test\").mapperService().documentMapperParser();\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .field(\"dynamic\", false)\n+ .startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"object\")\n+ .startObject(\"properties\")\n+ .endObject().endObject().endObject().endObject().string();\n+ DocumentMapper mapper = mapperParser.parse(\"type\", new CompressedXContent(mapping));\n+ BytesReference bytes = XContentFactory.jsonBuilder()\n+ .startObject().startObject(\"foo\")\n+ .field(\"bar\", \"something\")\n+ .endObject().endObject().bytes();\n+ ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", bytes);\n+ assertNull(doc.dynamicMappingsUpdate());\n+ assertNull(doc.rootDoc().getField(\"foo.bar\"));\n+ }\n+\n DocumentMapper createDummyMapping(MapperService mapperService) throws Exception {\n String mapping = jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n .startObject(\"y\").field(\"type\", 
\"object\").endObject()", "filename": "core/src/test/java/org/elasticsearch/index/mapper/DocumentParserTests.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**:\n1.x, 2.x and master\n\n**Description of the problem including expected versus actual behavior**:\nCreate an index and specify `dynamic: false` (or `strict`) for the only type it holds. Create a subfield of type object and specify `dynamic: true` for it. When creating a new object field under the dynamic one, it gets a different dynamic default depending on whether it was added via put mapping api (apparently taken from the main dynamic behaviour of the type) or dynamically created through index api (`true`, maybe ok because it was created dynamically). I would expect the new field to get the same default regardless of how it got added in the first place. I am not sure if its default value should be taken from the type of from its ancestor field.\n\n**Steps to reproduce**:\n- Create the index\n\n```\ncurl -XPUT localhost:9200/index1 -d '{\n \"mappings\": {\n \"strict_type\": {\n \"dynamic\": \"strict\",\n \"properties\": {\n \"dynamic_field\": {\n \"dynamic\": \"true\",\n \"type\": \"object\"\n }\n }\n }\n }\n}'\n```\n- Add a new field under `dynamic_field` using the put mapping api:\n\n```\ncurl -XPOST localhost:9200/index1/strict_type/_mapping -d '{\n \"properties\": {\n \"dynamic_field\": {\n \"properties\": {\n \"subobject\": {\n \"type\" : \"object\"\n }\n }\n }\n }\n}\n'\n```\n- Retrieve the mapping and verify that `subobject` gets the default dynamic behaviour of the type.\n\n```\ncurl localhost:9200/_mapping?pretty\n\n{\n \"index1\" : {\n \"mappings\" : {\n \"strict_type\" : {\n \"dynamic\" : \"strict\",\n \"properties\" : {\n \"dynamic_field\" : {\n \"dynamic\" : \"true\",\n \"properties\" : {\n \"subobject\" : {\n \"type\" : \"object\"\n }\n }\n }\n }\n }\n }\n }\n}\n```\n- Also try and index a document with a new field under `subobject`, it gets rejected due strict mapping:\n\n```\ncurl -XPUT localhost:9200/index1/strict_type/1 -d '{\n \"dynamic_field\" : {\n \"subobject\" : {\n \"field2\" : 123 \n }\n }\n}\n'\n\n{\"error\":\"StrictDynamicMappingException[mapping set to strict, dynamic introduction of [field2] within [dynamic_field.subobject] is not allowed]\",\"status\":400}\n```\n- Index a new document containing a new subobject field under `dynamic_field`:\n\n```\ncurl -XPUT localhost:9200/index1/strict_type/1 -d '{\n \"dynamic_field\" : {\n \"subobject2\" : {\n }\n }\n}\n'\n```\n- Retrieve the mapping and verify that `subobject2` has `dynamic` set to `true`, omitting its `dynamic` behaviour\n\n```\ncurl localhost:9200/_mapping?pretty\n\n{\n \"index1\" : {\n \"mappings\" : {\n \"strict_type\" : {\n \"dynamic\" : \"strict\",\n \"properties\" : {\n \"dynamic_field\" : {\n \"dynamic\" : \"true\",\n \"properties\" : {\n \"subobject\" : {\n \"type\" : \"object\"\n },\n \"subobject2\" : {\n \"type\" : \"object\",\n \"dynamic\" : \"true\"\n }\n }\n }\n }\n }\n }\n }\n}\n```\n", "comments": [ { "body": "I suspect this is done on purpose since 99% of the time, if you add a dynamic object to your mappings, you will have a field below it.\n\nThat said I agree it is confusing that the default depends on the root as opposed to the parent.\n", "created_at": "2016-04-11T14:30:00Z" }, { "body": "However the sub-object gets added, if `dynamic` is not explicitly specified at the sub-object level, it should inherit from its direct parent.\n", "created_at": "2016-04-13T09:12:12Z" }, { "body": "This raises another problem... The `dynamic` setting is updatable. If we update the setting for a parent, the children should inherit the new setting. 
I think this will not work today as the inherited dynamic setting is resolved when the field is added.\n", "created_at": "2016-04-13T09:20:26Z" }, { "body": "> The dynamic setting is updatable. If we update the setting for a parent, the children should inherit the new setting. I think this will not work today as the inherited dynamic setting is resolved when the field is added.\n\nI think that could work actually. The way that the dynamic setting is resolved is the following:\n1. if `dynamic` is set on the object, then use it\n2. otherwise if `dynamic` is set on the root object, then use it\n3. otherwise use the default which is `true`\n\nSo if we replace (2) with \"otherwise recursively check if `dynamic` is set on the parent object, and use it\" and dynamically add objects with `null` values for `dynamic` then things should work?\n", "created_at": "2016-04-14T13:13:18Z" }, { "body": "Just tested this on master, it is fixed. No inconsistency anymore depending on how the field gets added. We look at the parent and take the `dynamic` value from it. Closing.\n", "created_at": "2016-04-20T16:21:25Z" } ], "number": 17644, "title": "Inconsistent dynamic default depending on how a field gets added" }
{ "body": "This change fixes the lookup during document parsing of whether an\nobject field is dynamic to handle looking up through parent object\nmappers, and also handle when a parent object mapper was created during\ndocument parsing.\n\ncloses #17854 \ncloses #17644\n", "number": 17864, "review_comments": [], "title": "Fix dynamic check to properly handle parents" }
{ "commits": [ { "message": "Mappings: Fix dynamic check to properly handle parents\n\nThis change fixes the lookup during document parsing of whether an\nobject field is dynamic to handle looking up through parent object\nmappers, and also handle when a parent object mapper was created during\ndocument parsing.\n\ncloses #17854" } ], "files": [ { "diff": "@@ -480,7 +480,7 @@ private static ObjectMapper parseObject(final ParseContext context, ObjectMapper\n if (objectMapper != null) {\n parseObjectOrField(context, objectMapper);\n } else {\n- ObjectMapper.Dynamic dynamic = dynamicOrDefault(mapper, context.root().dynamic());\n+ ObjectMapper.Dynamic dynamic = dynamicOrDefault(mapper, context);\n if (dynamic == ObjectMapper.Dynamic.STRICT) {\n throw new StrictDynamicMappingException(mapper.fullPath(), currentFieldName);\n } else if (dynamic == ObjectMapper.Dynamic.TRUE) {\n@@ -519,7 +519,7 @@ private static void parseArray(ParseContext context, ObjectMapper parentMapper,\n }\n } else {\n \n- ObjectMapper.Dynamic dynamic = dynamicOrDefault(parentMapper, context.root().dynamic());\n+ ObjectMapper.Dynamic dynamic = dynamicOrDefault(parentMapper, context);\n if (dynamic == ObjectMapper.Dynamic.STRICT) {\n throw new StrictDynamicMappingException(parentMapper.fullPath(), arrayFieldName);\n } else if (dynamic == ObjectMapper.Dynamic.TRUE) {\n@@ -794,7 +794,7 @@ private static Mapper.Builder<?,?> createBuilderFromDynamicValue(final ParseCont\n }\n \n private static void parseDynamicValue(final ParseContext context, ObjectMapper parentMapper, String currentFieldName, XContentParser.Token token) throws IOException {\n- ObjectMapper.Dynamic dynamic = dynamicOrDefault(parentMapper, context.root().dynamic());\n+ ObjectMapper.Dynamic dynamic = dynamicOrDefault(parentMapper, context);\n if (dynamic == ObjectMapper.Dynamic.STRICT) {\n throw new StrictDynamicMappingException(parentMapper.fullPath(), currentFieldName);\n }\n@@ -867,7 +867,7 @@ private static void parseCopy(String field, ParseContext context) throws IOExcep\n mapper = context.docMapper().objectMappers().get(context.path().pathAsText(paths[i]));\n if (mapper == null) {\n // One mapping is missing, check if we are allowed to create a dynamic one.\n- ObjectMapper.Dynamic dynamic = dynamicOrDefault(parent, context.root().dynamic());\n+ ObjectMapper.Dynamic dynamic = dynamicOrDefault(parent, context);\n \n switch (dynamic) {\n case STRICT:\n@@ -899,10 +899,26 @@ private static void parseCopy(String field, ParseContext context) throws IOExcep\n }\n }\n \n- private static ObjectMapper.Dynamic dynamicOrDefault(ObjectMapper parentMapper, ObjectMapper.Dynamic dynamicDefault) {\n+ // find what the dynamic setting is given the current parse context and parent\n+ private static ObjectMapper.Dynamic dynamicOrDefault(ObjectMapper parentMapper, ParseContext context) {\n ObjectMapper.Dynamic dynamic = parentMapper.dynamic();\n+ while (dynamic == null) {\n+ int lastDotNdx = parentMapper.name().lastIndexOf('.');\n+ if (lastDotNdx == -1) {\n+ // no dot means we the parent is the root, so just delegate to the default outside the loop\n+ break;\n+ }\n+ String parentName = parentMapper.name().substring(0, lastDotNdx);\n+ parentMapper = context.docMapper().objectMappers().get(parentName);\n+ if (parentMapper == null) {\n+ // If parentMapper is ever null, it means the parent of the current mapper was dynamically created.\n+ // But in order to be created dynamically, the dynamic setting of that parent was necessarily true\n+ return ObjectMapper.Dynamic.TRUE;\n+ }\n+ 
dynamic = parentMapper.dynamic();\n+ }\n if (dynamic == null) {\n- return dynamicDefault == null ? ObjectMapper.Dynamic.TRUE : dynamicDefault;\n+ return context.root().dynamic() == null ? ObjectMapper.Dynamic.TRUE : context.root().dynamic();\n }\n return dynamic;\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java", "status": "modified" }, { "diff": "@@ -98,6 +98,65 @@ public void testDotsWithExistingMapper() throws Exception {\n assertEquals(\"789\", values[2]);\n }\n \n+ public void testPropagateDynamicWithExistingMapper() throws Exception {\n+ DocumentMapperParser mapperParser = createIndex(\"test\").mapperService().documentMapperParser();\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .field(\"dynamic\", false)\n+ .startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"object\")\n+ .field(\"dynamic\", true)\n+ .startObject(\"properties\")\n+ .endObject().endObject().endObject().endObject().string();\n+ DocumentMapper mapper = mapperParser.parse(\"type\", new CompressedXContent(mapping));\n+ BytesReference bytes = XContentFactory.jsonBuilder()\n+ .startObject().startObject(\"foo\")\n+ .field(\"bar\", \"something\")\n+ .endObject().endObject().bytes();\n+ ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", bytes);\n+ assertNotNull(doc.dynamicMappingsUpdate());\n+ assertNotNull(doc.rootDoc().getField(\"foo.bar\"));\n+ }\n+\n+ public void testPropagateDynamicWithDynamicMapper() throws Exception {\n+ DocumentMapperParser mapperParser = createIndex(\"test\").mapperService().documentMapperParser();\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .field(\"dynamic\", false)\n+ .startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"object\")\n+ .field(\"dynamic\", true)\n+ .startObject(\"properties\")\n+ .endObject().endObject().endObject().endObject().string();\n+ DocumentMapper mapper = mapperParser.parse(\"type\", new CompressedXContent(mapping));\n+ BytesReference bytes = XContentFactory.jsonBuilder()\n+ .startObject().startObject(\"foo\").startObject(\"bar\")\n+ .field(\"baz\", \"something\")\n+ .endObject().endObject().endObject().bytes();\n+ ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", bytes);\n+ assertNotNull(doc.dynamicMappingsUpdate());\n+ assertNotNull(doc.rootDoc().getField(\"foo.bar.baz\"));\n+ }\n+\n+ public void testDynamicRootFallback() throws Exception {\n+ DocumentMapperParser mapperParser = createIndex(\"test\").mapperService().documentMapperParser();\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .field(\"dynamic\", false)\n+ .startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"object\")\n+ .startObject(\"properties\")\n+ .endObject().endObject().endObject().endObject().string();\n+ DocumentMapper mapper = mapperParser.parse(\"type\", new CompressedXContent(mapping));\n+ BytesReference bytes = XContentFactory.jsonBuilder()\n+ .startObject().startObject(\"foo\")\n+ .field(\"bar\", \"something\")\n+ .endObject().endObject().bytes();\n+ ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", bytes);\n+ assertNull(doc.dynamicMappingsUpdate());\n+ assertNull(doc.rootDoc().getField(\"foo.bar\"));\n+ }\n+\n DocumentMapper createDummyMapping(MapperService mapperService) throws Exception {\n String mapping = jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n .startObject(\"y\").field(\"type\", 
\"object\").endObject()", "filename": "core/src/test/java/org/elasticsearch/index/mapper/DocumentParserTests.java", "status": "modified" } ] }
{ "body": "If I turn off my wi-fi connection and run elasticsearch, tried with 1.7, 2.3 and 5.0 and it is always the same, the process prints out the following to start with:\n\n```\nlog4j:WARN No appenders could be found for logger (common).\nlog4j:WARN Please initialize the log4j system properly.\nlog4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.\n```\n\nThe `logging.yml` is the default one. After that everything looks ok, usual logging etc.. I guess this is due to the order of initialization of our logging, we may be ok with this initial warning but maybe there is something that we want to do about it.\n", "comments": [ { "body": "This one is fun. The trouble starts in the innocuous looking [line 199 `Bootstrap.java`](https://github.com/elastic/elasticsearch/blob/6941966b1654106ed9249cd01c32bfa2e06b452f/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java#L199):\n\n``` java\n199: if (Strings.hasLength(pidFile)) {\n```\n\nThis triggers class initialization for `Strings` which causes the field `TIME_UUID_GENERATOR` to be initialized on [line 64](https://github.com/elastic/elasticsearch/blob/6941966b1654106ed9249cd01c32bfa2e06b452f/core/src/main/java/org/elasticsearch/common/Strings.java#L64):\n\n``` java\n64: private static final UUIDGenerator TIME_UUID_GENERATOR = new TimeBasedUUIDGenerator();\n```\n\nThis in turn triggers class initialization for `TimeBasedUUIDGenerator` which causes the field `secureMungedAddressed` to be initialized on [line 38](https://github.com/elastic/elasticsearch/blob/6941966b1654106ed9249cd01c32bfa2e06b452f/core/src/main/java/org/elasticsearch/common/TimeBasedUUIDGenerator.java#L38):\n\n``` java\n38: private static final byte[] secureMungedAddress = MacAddressProvider.getSecureMungedAddress();\n```\n\nNow the coup de grâce. The method [`MacAddressProvider#getSecureMungedAddress`](https://github.com/elastic/elasticsearch/blob/6941966b1654106ed9249cd01c32bfa2e06b452f/core/src/main/java/org/elasticsearch/common/MacAddressProvider.java#L64) tries to get a secure MAC address for use in generating the time-based UUIDs. If all the network interfaces are disabled, then there is no MAC address to use so the check on [line 73](https://github.com/elastic/elasticsearch/blob/6941966b1654106ed9249cd01c32bfa2e06b452f/core/src/main/java/org/elasticsearch/common/MacAddressProvider.java#L73) passes:\n\n``` java\n73: if (!isValidAddress(address)) {\n```\n\nwhich leads to the fatal logging statement on [line 74](https://github.com/elastic/elasticsearch/blob/6941966b1654106ed9249cd01c32bfa2e06b452f/core/src/main/java/org/elasticsearch/common/MacAddressProvider.java#L74):\n\n``` java\n74: logger.warn(\"Unable to get a valid mac address, will use a dummy address\");\n```\n\nThis line being fatal because it's invoked _before_ logging has been initialized.\n\nTwo lessons:\n1. `static` is evil, don't do `static`\n2. logging in static methods using a statically-initialized static logger (instead of an instance of `logger` from an `AbstractComponent`) is prone to this problem\n\nI opened #17837.\n", "created_at": "2016-04-18T23:59:03Z" }, { "body": "> This one is fun.\n\nSounds like it, thanks a lot for digging and fixing @jasontedor \n", "created_at": "2016-04-19T17:20:24Z" } ], "number": 17819, "title": "No appenders could be found for logger (common) printed out" }
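The initialization chain dissected in the comment above comes down to a general Java rule: the first active use of a class runs its static field initializers, and anything those initializers call (logging included) executes before any explicit setup code. The toy classes below are invented stand-ins rather than Elasticsearch code, but they reproduce the same ordering:

```java
public class StaticInitOrder {

    static class Logging {
        static boolean configured = false;
        static void warn(String msg) {
            // Stand-in for a real logger; before configuration this is where log4j
            // would print its "No appenders could be found" complaint.
            System.out.println((configured ? "[log] " : "[UNCONFIGURED] ") + msg);
        }
    }

    static class MacAddress {
        static byte[] get() {
            Logging.warn("unable to get a valid mac address, using a dummy address");
            return new byte[6];
        }
    }

    static class UuidSource {
        // Runs as soon as UuidSource is initialized, dragging MacAddress along with it.
        static final byte[] ADDRESS = MacAddress.get();
    }

    static class StringsLike {
        // Referencing any member of StringsLike initializes UuidSource as well.
        static final byte[] GENERATOR_SEED = UuidSource.ADDRESS;

        static boolean hasLength(String s) {
            return s != null && !s.isEmpty();
        }
    }

    public static void main(String[] args) {
        // This innocuous-looking call triggers the whole static chain before
        // logging has been configured, the same shape as the bug report above.
        StringsLike.hasLength("pidfile");
        Logging.configured = true;
        Logging.warn("logging is now configured");
    }
}
```

Running it prints the unconfigured warning before the configured one, the same shape as the log4j `No appenders could be found` complaint in the report.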
{ "body": "This commit refactors the UUID-generating methods out of `Strings` into\ntheir own class. The primary motive for this refactoring is to avoid a\nchain of class initializers from loading this class earlier than\nnecessary. This was discovered when it was noticed that starting\nElasticsearch without any active network interfaces leads to some\nlogging statements being executed before logging had been\ninitailized. Thus:\n- these UUID methods have no place being on `Strings`\n- removing them reduces spooky action-at-distance loading of this class\n- removed the troublesome, logging statements from `MacAddressProvider`,\n logging using statically-initialized instances of `ESLogger` are prone\n to this problem\n\nCloses #17819\n", "number": 17837, "review_comments": [ { "body": "Is it ok to have no feedback in this case? I guess given the initialization chain we had we weren't getting any anyway but maybe it'd be nice?\n", "created_at": "2016-04-19T01:20:15Z" } ], "title": "Refactor UUID-generating methods out of Strings" }
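The diff that follows is large but mostly mechanical; the caller-visible change is that the three UUID helpers move from `Strings` to the new `UUIDs` holder with unchanged signatures. A hedged usage sketch, assuming the class lands as shown in the diff and that the Elasticsearch core jar is on the classpath:

```java
import java.util.Random;

import org.elasticsearch.common.UUIDs;

public class UuidUsage {
    public static void main(String[] args) {
        // Time-based, Flake-like id, preferred as a Lucene primary key.
        String docId = UUIDs.base64UUID();

        // Random (RFC 4122 version 4 style) ids, with or without a caller-supplied Random.
        String clusterUuid = UUIDs.randomBase64UUID();
        String reproducible = UUIDs.randomBase64UUID(new Random(42L));

        System.out.println(docId + " " + clusterUuid + " " + reproducible);
    }
}
```

The companion change also strips the static logger out of `MacAddressProvider`, so even if `UUIDs` does get initialized early nothing attempts to log before logging has been configured.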
{ "commits": [ { "message": "Refactor UUID-generating methods out of Strings\n\nThis commit refactors the UUID-generating methods out of Strings into\ntheir own class. The primary motive for this refactoring is to avoid a\nchain of class initializers from loading this class earlier than\nnecessary. This was discovered when it was noticed that starting\nElasticsearch without any active network interfaces leads to some\nlogging statements being executed before logging had been\ninitailized. Thus:\n - these UUID methods have no place being on Strings\n - removing them reduces spooky action-at-distance loading of this class\n - removed the troublesome, logging statements from MacAddressProvider,\n logging using statically-initialized instances of ESLogger are prone\n to this problem" } ], "files": [ { "diff": "@@ -20,18 +20,16 @@\n package org.elasticsearch.action.index;\n \n import org.elasticsearch.ElasticsearchGenerationException;\n-import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.DocumentRequest;\n import org.elasticsearch.action.RoutingMissingException;\n import org.elasticsearch.action.TimestampParsingException;\n import org.elasticsearch.action.support.replication.ReplicationRequest;\n import org.elasticsearch.client.Requests;\n-import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.common.Nullable;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -42,8 +40,6 @@\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentType;\n-import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.IndexNotFoundException;\n import org.elasticsearch.index.VersionType;\n import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;\n \n@@ -613,7 +609,7 @@ public void process(MetaData metaData, @Nullable MappingMetaData mappingMd, bool\n // generate id if not already provided and id generation is allowed\n if (allowIdGeneration) {\n if (id == null) {\n- id(Strings.base64UUID());\n+ id(UUIDs.base64UUID());\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/index/IndexRequest.java", "status": "modified" }, { "diff": "@@ -41,6 +41,7 @@\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n@@ -693,7 +694,7 @@ public Builder fromDiff(boolean fromDiff) {\n \n public ClusterState build() {\n if (UNKNOWN_UUID.equals(uuid)) {\n- uuid = Strings.randomBase64UUID();\n+ uuid = UUIDs.randomBase64UUID();\n }\n return new ClusterState(clusterName, version, uuid, metaData, routingTable, nodes, blocks, customs.build(), fromDiff);\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/ClusterState.java", "status": "modified" }, { "diff": "@@ -33,7 +33,7 @@\n import org.elasticsearch.cluster.service.ClusterService;\n import 
org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.ParseFieldMatcher;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.collect.HppcMaps;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -973,7 +973,7 @@ public Builder clusterUUID(String clusterUUID) {\n \n public Builder generateClusterUuidIfNeeded() {\n if (clusterUUID.equals(\"_na_\")) {\n- clusterUUID = Strings.randomBase64UUID();\n+ clusterUUID = UUIDs.randomBase64UUID();\n }\n return this;\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java", "status": "modified" }, { "diff": "@@ -41,6 +41,7 @@\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.ValidationException;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.compress.CompressedXContent;\n@@ -301,7 +302,7 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n indexSettingsBuilder.put(SETTING_CREATION_DATE, new DateTime(DateTimeZone.UTC).getMillis());\n }\n \n- indexSettingsBuilder.put(SETTING_INDEX_UUID, Strings.randomBase64UUID());\n+ indexSettingsBuilder.put(SETTING_INDEX_UUID, UUIDs.randomBase64UUID());\n \n Settings actualIndexSettings = indexSettingsBuilder.build();\n ", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java", "status": "modified" }, { "diff": "@@ -20,9 +20,8 @@\n package org.elasticsearch.cluster.node;\n \n import org.elasticsearch.Version;\n-import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.Randomness;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Setting;\n@@ -57,7 +56,7 @@ public DiscoveryNodeService(Settings settings, Version version) {\n \n public static String generateNodeId(Settings settings) {\n Random random = Randomness.get(settings, NODE_ID_SEED_SETTING);\n- return Strings.randomBase64UUID(random);\n+ return UUIDs.randomBase64UUID(random);\n }\n \n public DiscoveryNodeService addCustomAttributeProvider(CustomAttributesProvider customAttributesProvider) {", "filename": "core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodeService.java", "status": "modified" }, { "diff": "@@ -23,7 +23,7 @@\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.ParseFieldMatcherSupplier;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.ObjectParser;\n@@ -96,7 +96,7 @@ private AllocationId(String id, String relocationId) {\n * Creates a new allocation id for initializing allocation.\n */\n public static AllocationId newInitializing() {\n- return new AllocationId(Strings.randomBase64UUID(), null);\n+ return new AllocationId(UUIDs.randomBase64UUID(), null);\n }\n \n /**\n@@ -121,7 +121,7 @@ public static AllocationId newTargetRelocation(AllocationId allocationId) {\n */\n public static AllocationId 
newRelocation(AllocationId allocationId) {\n assert allocationId.getRelocationId() == null;\n- return new AllocationId(allocationId.getId(), Strings.randomBase64UUID());\n+ return new AllocationId(allocationId.getId(), UUIDs.randomBase64UUID());\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/AllocationId.java", "status": "modified" }, { "diff": "@@ -19,18 +19,12 @@\n \n package org.elasticsearch.common;\n \n-import org.elasticsearch.common.logging.ESLogger;\n-import org.elasticsearch.common.logging.Loggers;\n-\n import java.net.NetworkInterface;\n import java.net.SocketException;\n import java.util.Enumeration;\n \n-\n public class MacAddressProvider {\n \n- private static final ESLogger logger = Loggers.getLogger(MacAddressProvider.class);\n-\n private static byte[] getMacAddress() throws SocketException {\n Enumeration<NetworkInterface> en = NetworkInterface.getNetworkInterfaces();\n if (en != null) {\n@@ -66,12 +60,10 @@ public static byte[] getSecureMungedAddress() {\n try {\n address = getMacAddress();\n } catch (Throwable t) {\n- logger.warn(\"Unable to get mac address, will use a dummy address\", t);\n // address will be set below\n }\n \n if (!isValidAddress(address)) {\n- logger.warn(\"Unable to get a valid mac address, will use a dummy address\");\n address = constructDummyMulticastAddress();\n }\n ", "filename": "core/src/main/java/org/elasticsearch/common/MacAddressProvider.java", "status": "modified" }, { "diff": "@@ -60,9 +60,6 @@ public class Strings {\n \n private static final String CURRENT_PATH = \".\";\n \n- private static final RandomBasedUUIDGenerator RANDOM_UUID_GENERATOR = new RandomBasedUUIDGenerator();\n- private static final UUIDGenerator TIME_UUID_GENERATOR = new TimeBasedUUIDGenerator();\n-\n public static void spaceify(int spaces, String from, StringBuilder to) throws Exception {\n try (BufferedReader reader = new BufferedReader(new FastStringReader(from))) {\n String line;\n@@ -1060,24 +1057,6 @@ public static boolean isAllOrWildcard(String[] data) {\n data.length == 1 && (\"_all\".equals(data[0]) || \"*\".equals(data[0]));\n }\n \n- /** Returns a Base64 encoded version of a Version 4.0 compatible UUID as defined here: http://www.ietf.org/rfc/rfc4122.txt, using a\n- * private {@code SecureRandom} instance */\n- public static String randomBase64UUID() {\n- return RANDOM_UUID_GENERATOR.getBase64UUID();\n- }\n-\n- /** Returns a Base64 encoded version of a Version 4.0 compatible UUID as defined here: http://www.ietf.org/rfc/rfc4122.txt, using the\n- * provided {@code Random} instance */\n- public static String randomBase64UUID(Random random) {\n- return RANDOM_UUID_GENERATOR.getBase64UUID(random);\n- }\n-\n- /** Generates a time-based UUID (similar to Flake IDs), which is preferred when generating an ID to be indexed into a Lucene index as\n- * primary key. The id is opaque and the implementation is free to change at any time! 
*/\n- public static String base64UUID() {\n- return TIME_UUID_GENERATOR.getBase64UUID();\n- }\n-\n /**\n * Return a {@link String} that is the json representation of the provided\n * {@link ToXContent}.", "filename": "core/src/main/java/org/elasticsearch/common/Strings.java", "status": "modified" }, { "diff": "@@ -35,10 +35,10 @@ class TimeBasedUUIDGenerator implements UUIDGenerator {\n // Used to ensure clock moves forward:\n private long lastTimestamp;\n \n- private static final byte[] secureMungedAddress = MacAddressProvider.getSecureMungedAddress();\n+ private static final byte[] SECURE_MUNGED_ADDRESS = MacAddressProvider.getSecureMungedAddress();\n \n static {\n- assert secureMungedAddress.length == 6;\n+ assert SECURE_MUNGED_ADDRESS.length == 6;\n }\n \n /** Puts the lower numberOfLongBytes from l into the array, starting index pos. */\n@@ -73,12 +73,12 @@ public String getBase64UUID() {\n putLong(uuidBytes, timestamp, 0, 6);\n \n // MAC address adds 6 bytes:\n- System.arraycopy(secureMungedAddress, 0, uuidBytes, 6, secureMungedAddress.length);\n+ System.arraycopy(SECURE_MUNGED_ADDRESS, 0, uuidBytes, 6, SECURE_MUNGED_ADDRESS.length);\n \n // Sequence number adds 3 bytes:\n putLong(uuidBytes, sequenceId, 12, 3);\n \n- assert 9 + secureMungedAddress.length == uuidBytes.length;\n+ assert 9 + SECURE_MUNGED_ADDRESS.length == uuidBytes.length;\n \n byte[] encoded;\n try {", "filename": "core/src/main/java/org/elasticsearch/common/TimeBasedUUIDGenerator.java", "status": "modified" }, { "diff": "@@ -0,0 +1,47 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common;\n+\n+import java.util.Random;\n+\n+public class UUIDs {\n+\n+ private static final RandomBasedUUIDGenerator RANDOM_UUID_GENERATOR = new RandomBasedUUIDGenerator();\n+ private static final UUIDGenerator TIME_UUID_GENERATOR = new TimeBasedUUIDGenerator();\n+\n+ /** Generates a time-based UUID (similar to Flake IDs), which is preferred when generating an ID to be indexed into a Lucene index as\n+ * primary key. The id is opaque and the implementation is free to change at any time! 
*/\n+ public static String base64UUID() {\n+ return TIME_UUID_GENERATOR.getBase64UUID();\n+ }\n+\n+ /** Returns a Base64 encoded version of a Version 4.0 compatible UUID as defined here: http://www.ietf.org/rfc/rfc4122.txt, using the\n+ * provided {@code Random} instance */\n+ public static String randomBase64UUID(Random random) {\n+ return RANDOM_UUID_GENERATOR.getBase64UUID(random);\n+ }\n+\n+ /** Returns a Base64 encoded version of a Version 4.0 compatible UUID as defined here: http://www.ietf.org/rfc/rfc4122.txt, using a\n+ * private {@code SecureRandom} instance */\n+ public static String randomBase64UUID() {\n+ return RANDOM_UUID_GENERATOR.getBase64UUID();\n+ }\n+\n+}", "filename": "core/src/main/java/org/elasticsearch/common/UUIDs.java", "status": "added" }, { "diff": "@@ -47,7 +47,7 @@\n import org.apache.lucene.util.Version;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ExceptionsHelper;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.Streams;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n@@ -1363,7 +1363,7 @@ public void deleteQuiet(String... files) {\n public void markStoreCorrupted(IOException exception) throws IOException {\n ensureOpen();\n if (!isMarkedCorrupted()) {\n- String uuid = CORRUPTED + Strings.randomBase64UUID();\n+ String uuid = CORRUPTED + UUIDs.randomBase64UUID();\n try (IndexOutput output = this.directory().createOutput(uuid, IOContext.DEFAULT)) {\n CodecUtil.writeHeader(output, CODEC, VERSION);\n BytesStreamOutput out = new BytesStreamOutput();", "filename": "core/src/main/java/org/elasticsearch/index/store/Store.java", "status": "modified" }, { "diff": "@@ -26,7 +26,7 @@\n import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.RamUsageEstimator;\n import org.elasticsearch.ElasticsearchException;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.bytes.ReleasablePagedBytesReference;\n@@ -148,7 +148,7 @@ public Translog(TranslogConfig config, TranslogGeneration translogGeneration) th\n super(config.getShardId(), config.getIndexSettings());\n this.config = config;\n if (translogGeneration == null || translogGeneration.translogUUID == null) { // legacy case\n- translogUUID = Strings.randomBase64UUID();\n+ translogUUID = UUIDs.randomBase64UUID();\n } else {\n translogUUID = translogGeneration.translogUUID;\n }", "filename": "core/src/main/java/org/elasticsearch/index/translog/Translog.java", "status": "modified" }, { "diff": "@@ -31,7 +31,7 @@\n import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.service.ClusterService;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -212,7 +212,7 @@ public void onResponse(InFlightOpsResponse response) {\n actionListener.onResponse(new ShardsSyncedFlushResult(shardId, totalShards, \"[\" + inflight + \"] ongoing operations on primary\"));\n } else {\n // 3. 
now send the sync request to all the shards\n- String syncId = Strings.base64UUID();\n+ String syncId = UUIDs.base64UUID();\n sendSyncRequests(syncId, activeShards, state, commitIds, shardId, totalShards, actionListener);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/indices/flush/SyncedFlushService.java", "status": "modified" }, { "diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.cluster.metadata.SnapshotId;\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.blobstore.BlobContainer;\n import org.elasticsearch.common.blobstore.BlobMetaData;\n import org.elasticsearch.common.blobstore.BlobPath;\n@@ -634,7 +635,7 @@ public String startVerification() {\n // It's readonly - so there is not much we can do here to verify it\n return null;\n } else {\n- String seed = Strings.randomBase64UUID();\n+ String seed = UUIDs.randomBase64UUID();\n byte[] testBytes = Strings.toUTF8Bytes(seed);\n BlobContainer testContainer = blobStore().blobContainer(basePath().add(testBlobPrefix(seed)));\n String blobName = \"master.dat\";", "filename": "core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java", "status": "modified" }, { "diff": "@@ -50,7 +50,7 @@\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Nullable;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.component.AbstractComponent;\n@@ -255,7 +255,7 @@ public ClusterState execute(ClusterState currentState) {\n createIndexService.validateIndexName(renamedIndexName, currentState);\n createIndexService.validateIndexSettings(renamedIndexName, snapshotIndexMetaData.getSettings());\n IndexMetaData.Builder indexMdBuilder = IndexMetaData.builder(snapshotIndexMetaData).state(IndexMetaData.State.OPEN).index(renamedIndexName);\n- indexMdBuilder.settings(Settings.builder().put(snapshotIndexMetaData.getSettings()).put(IndexMetaData.SETTING_INDEX_UUID, Strings.randomBase64UUID()));\n+ indexMdBuilder.settings(Settings.builder().put(snapshotIndexMetaData.getSettings()).put(IndexMetaData.SETTING_INDEX_UUID, UUIDs.randomBase64UUID()));\n if (!request.includeAliases() && !snapshotIndexMetaData.getAliases().isEmpty()) {\n // Remove all aliases - they shouldn't be restored\n indexMdBuilder.removeAllAliases();", "filename": "core/src/main/java/org/elasticsearch/snapshots/RestoreService.java", "status": "modified" }, { "diff": "@@ -38,6 +38,7 @@\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.network.NetworkModule;\n@@ -118,7 +119,7 @@ public static Settings processSettings(Settings settings) {\n // nothing is going to be discovered, since no master will be elected\n sb.put(DiscoverySettings.INITIAL_STATE_TIMEOUT_SETTING.getKey(), 0);\n if (sb.get(\"cluster.name\") == null) {\n- sb.put(\"cluster.name\", \"tribe_\" + Strings.randomBase64UUID()); // make sure it won't join other tribe nodes in the same JVM\n+ sb.put(\"cluster.name\", 
\"tribe_\" + UUIDs.randomBase64UUID()); // make sure it won't join other tribe nodes in the same JVM\n }\n sb.put(TransportMasterNodeReadAction.FORCE_LOCAL_SETTING.getKey(), true);\n return sb.build();", "filename": "core/src/main/java/org/elasticsearch/tribe/TribeService.java", "status": "modified" }, { "diff": "@@ -22,7 +22,7 @@\n import org.apache.lucene.util.CollectionUtil;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.collect.ImmutableOpenIntMap;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n@@ -57,8 +57,8 @@ public void testBasicSerialization() throws Exception {\n DiscoveryNode node2 = new DiscoveryNode(\"node2\", DummyTransportAddress.INSTANCE, emptyMap(), emptySet(), Version.CURRENT);\n List<IndicesShardStoresResponse.StoreStatus> storeStatusList = new ArrayList<>();\n storeStatusList.add(new IndicesShardStoresResponse.StoreStatus(node1, 3, null, IndicesShardStoresResponse.StoreStatus.AllocationStatus.PRIMARY, null));\n- storeStatusList.add(new IndicesShardStoresResponse.StoreStatus(node2, ShardStateMetaData.NO_VERSION, Strings.randomBase64UUID(), IndicesShardStoresResponse.StoreStatus.AllocationStatus.REPLICA, null));\n- storeStatusList.add(new IndicesShardStoresResponse.StoreStatus(node1, ShardStateMetaData.NO_VERSION, Strings.randomBase64UUID(), IndicesShardStoresResponse.StoreStatus.AllocationStatus.UNUSED, new IOException(\"corrupted\")));\n+ storeStatusList.add(new IndicesShardStoresResponse.StoreStatus(node2, ShardStateMetaData.NO_VERSION, UUIDs.randomBase64UUID(), IndicesShardStoresResponse.StoreStatus.AllocationStatus.REPLICA, null));\n+ storeStatusList.add(new IndicesShardStoresResponse.StoreStatus(node1, ShardStateMetaData.NO_VERSION, UUIDs.randomBase64UUID(), IndicesShardStoresResponse.StoreStatus.AllocationStatus.UNUSED, new IOException(\"corrupted\")));\n storeStatuses.put(0, storeStatusList);\n storeStatuses.put(1, storeStatusList);\n ImmutableOpenIntMap<List<IndicesShardStoresResponse.StoreStatus>> storesMap = storeStatuses.build();\n@@ -124,14 +124,14 @@ public void testBasicSerialization() throws Exception {\n public void testStoreStatusOrdering() throws Exception {\n DiscoveryNode node1 = new DiscoveryNode(\"node1\", DummyTransportAddress.INSTANCE, emptyMap(), emptySet(), Version.CURRENT);\n List<IndicesShardStoresResponse.StoreStatus> orderedStoreStatuses = new ArrayList<>();\n- orderedStoreStatuses.add(new IndicesShardStoresResponse.StoreStatus(node1, ShardStateMetaData.NO_VERSION, Strings.randomBase64UUID(), IndicesShardStoresResponse.StoreStatus.AllocationStatus.PRIMARY, null));\n- orderedStoreStatuses.add(new IndicesShardStoresResponse.StoreStatus(node1, ShardStateMetaData.NO_VERSION, Strings.randomBase64UUID(), IndicesShardStoresResponse.StoreStatus.AllocationStatus.REPLICA, null));\n- orderedStoreStatuses.add(new IndicesShardStoresResponse.StoreStatus(node1, ShardStateMetaData.NO_VERSION, Strings.randomBase64UUID(), IndicesShardStoresResponse.StoreStatus.AllocationStatus.UNUSED, null));\n+ orderedStoreStatuses.add(new IndicesShardStoresResponse.StoreStatus(node1, ShardStateMetaData.NO_VERSION, UUIDs.randomBase64UUID(), IndicesShardStoresResponse.StoreStatus.AllocationStatus.PRIMARY, null));\n+ orderedStoreStatuses.add(new IndicesShardStoresResponse.StoreStatus(node1, ShardStateMetaData.NO_VERSION, UUIDs.randomBase64UUID(), 
IndicesShardStoresResponse.StoreStatus.AllocationStatus.REPLICA, null));\n+ orderedStoreStatuses.add(new IndicesShardStoresResponse.StoreStatus(node1, ShardStateMetaData.NO_VERSION, UUIDs.randomBase64UUID(), IndicesShardStoresResponse.StoreStatus.AllocationStatus.UNUSED, null));\n orderedStoreStatuses.add(new IndicesShardStoresResponse.StoreStatus(node1, 2, null, IndicesShardStoresResponse.StoreStatus.AllocationStatus.PRIMARY, null));\n orderedStoreStatuses.add(new IndicesShardStoresResponse.StoreStatus(node1, 1, null, IndicesShardStoresResponse.StoreStatus.AllocationStatus.PRIMARY, null));\n orderedStoreStatuses.add(new IndicesShardStoresResponse.StoreStatus(node1, 1, null, IndicesShardStoresResponse.StoreStatus.AllocationStatus.REPLICA, null));\n orderedStoreStatuses.add(new IndicesShardStoresResponse.StoreStatus(node1, 1, null, IndicesShardStoresResponse.StoreStatus.AllocationStatus.UNUSED, null));\n- orderedStoreStatuses.add(new IndicesShardStoresResponse.StoreStatus(node1, ShardStateMetaData.NO_VERSION, Strings.randomBase64UUID(), IndicesShardStoresResponse.StoreStatus.AllocationStatus.REPLICA, new IOException(\"corrupted\")));\n+ orderedStoreStatuses.add(new IndicesShardStoresResponse.StoreStatus(node1, ShardStateMetaData.NO_VERSION, UUIDs.randomBase64UUID(), IndicesShardStoresResponse.StoreStatus.AllocationStatus.REPLICA, new IOException(\"corrupted\")));\n orderedStoreStatuses.add(new IndicesShardStoresResponse.StoreStatus(node1, 3, null, IndicesShardStoresResponse.StoreStatus.AllocationStatus.REPLICA, new IOException(\"corrupted\")));\n \n List<IndicesShardStoresResponse.StoreStatus> storeStatuses = new ArrayList<>(orderedStoreStatuses);", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoreResponseTests.java", "status": "modified" }, { "diff": "@@ -26,7 +26,7 @@\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.RoutingTable;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.DummyTransportAddress;\n import org.elasticsearch.test.ESTestCase;\n@@ -49,7 +49,7 @@ public class ClusterChangedEventTests extends ESTestCase {\n private static final ClusterName TEST_CLUSTER_NAME = new ClusterName(\"test\");\n private static final int INDICES_CHANGE_NUM_TESTS = 5;\n private static final String NODE_ID_PREFIX = \"node_\";\n- private static final String INITIAL_CLUSTER_ID = Strings.randomBase64UUID();\n+ private static final String INITIAL_CLUSTER_ID = UUIDs.randomBase64UUID();\n // the initial indices which every cluster state test starts out with\n private static final List<String> initialIndices = Arrays.asList(\"idx1\", \"idx2\", \"idx3\");\n // index settings\n@@ -249,12 +249,12 @@ private static ClusterState nextState(final ClusterState previousState, final bo\n final List<String> addedIndices, final List<String> deletedIndices,\n final int numNodesToRemove) {\n final ClusterState.Builder builder = ClusterState.builder(previousState);\n- builder.stateUUID(Strings.randomBase64UUID());\n+ builder.stateUUID(UUIDs.randomBase64UUID());\n final MetaData.Builder metaBuilder = MetaData.builder(previousState.metaData());\n if (changeClusterUUID || addedIndices.size() > 0 || deletedIndices.size() > 0) {\n // there is some change in metadata cluster state\n if (changeClusterUUID) {\n- 
metaBuilder.clusterUUID(Strings.randomBase64UUID());\n+ metaBuilder.clusterUUID(UUIDs.randomBase64UUID());\n }\n for (String index : addedIndices) {\n metaBuilder.put(createIndexMetadata(index), true);", "filename": "core/src/test/java/org/elasticsearch/cluster/ClusterChangedEventTests.java", "status": "modified" }, { "diff": "@@ -38,7 +38,7 @@\n import org.elasticsearch.cluster.routing.ShardRoutingState;\n import org.elasticsearch.cluster.routing.TestShardRouting;\n import org.elasticsearch.cluster.routing.UnassignedInfo;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -518,7 +518,7 @@ public IndexMetaData randomChange(IndexMetaData part) {\n }\n break;\n case 2:\n- builder.settings(Settings.builder().put(part.getSettings()).put(IndexMetaData.SETTING_INDEX_UUID, Strings.randomBase64UUID()));\n+ builder.settings(Settings.builder().put(part.getSettings()).put(IndexMetaData.SETTING_INDEX_UUID, UUIDs.randomBase64UUID()));\n break;\n default:\n throw new IllegalArgumentException(\"Shouldn't be here\");\n@@ -672,6 +672,6 @@ public ClusterState.Custom randomChange(ClusterState.Custom part) {\n * Generates a random name that starts with the given prefix\n */\n private String randomName(String prefix) {\n- return prefix + Strings.randomBase64UUID(random());\n+ return prefix + UUIDs.randomBase64UUID(random());\n }\n }", "filename": "core/src/test/java/org/elasticsearch/cluster/ClusterStateDiffIT.java", "status": "modified" }, { "diff": "@@ -26,7 +26,7 @@\n import org.elasticsearch.client.Requests;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -133,7 +133,7 @@ public void testLargeClusterStatePublishing() throws Exception {\n int counter = 0;\n int numberOfFields = 0;\n while (true) {\n- mapping.startObject(Strings.randomBase64UUID()).field(\"type\", \"text\").endObject();\n+ mapping.startObject(UUIDs.randomBase64UUID()).field(\"type\", \"text\").endObject();\n counter += 10; // each field is about 10 bytes, assuming compression in place\n numberOfFields++;\n if (counter > estimatedBytesSize) {", "filename": "core/src/test/java/org/elasticsearch/cluster/SimpleClusterStateIT.java", "status": "modified" }, { "diff": "@@ -26,7 +26,7 @@\n import org.elasticsearch.bwcompat.OldIndexBackwardsCompatibilityIT;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.routing.AllocationId;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.io.FileSystemUtils;\n import org.elasticsearch.common.settings.Settings;\n@@ -70,7 +70,7 @@ public void testUpgradeCustomDataPath() throws IOException {\n .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean())\n .put(Environment.PATH_SHARED_DATA_SETTING.getKey(), customPath.toAbsolutePath().toString()).build();\n try (NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n- final Index index = new Index(randomAsciiOfLength(10), 
Strings.randomBase64UUID());\n+ final Index index = new Index(randomAsciiOfLength(10), UUIDs.randomBase64UUID());\n Settings settings = Settings.builder()\n .put(nodeSettings)\n .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n@@ -99,7 +99,7 @@ public void testPartialUpgradeCustomDataPath() throws IOException {\n .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean())\n .put(Environment.PATH_SHARED_DATA_SETTING.getKey(), customPath.toAbsolutePath().toString()).build();\n try (NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n- final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ final Index index = new Index(randomAsciiOfLength(10), UUIDs.randomBase64UUID());\n Settings settings = Settings.builder()\n .put(nodeSettings)\n .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n@@ -138,7 +138,7 @@ public void testUpgrade() throws IOException {\n final Settings nodeSettings = Settings.builder()\n .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean()).build();\n try (NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n- final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ final Index index = new Index(randomAsciiOfLength(10), UUIDs.randomBase64UUID());\n Settings settings = Settings.builder()\n .put(nodeSettings)\n .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n@@ -163,7 +163,7 @@ public void testUpgradeIndices() throws IOException {\n try (NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n Map<IndexSettings, Tuple<Integer, Integer>> indexSettingsMap = new HashMap<>();\n for (int i = 0; i < randomIntBetween(2, 5); i++) {\n- final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ final Index index = new Index(randomAsciiOfLength(10), UUIDs.randomBase64UUID());\n Settings settings = Settings.builder()\n .put(nodeSettings)\n .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n@@ -247,7 +247,7 @@ public void testUpgradeRealIndex() throws IOException, URISyntaxException {\n }\n \n public void testNeedsUpgrade() throws IOException {\n- final Index index = new Index(\"foo\", Strings.randomBase64UUID());\n+ final Index index = new Index(\"foo\", UUIDs.randomBase64UUID());\n IndexMetaData indexState = IndexMetaData.builder(index.getName())\n .settings(Settings.builder()\n .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())", "filename": "core/src/test/java/org/elasticsearch/common/util/IndexFolderUpgraderTests.java", "status": "modified" }, { "diff": "@@ -35,7 +35,7 @@\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders;\n import org.elasticsearch.common.Nullable;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.index.shard.ShardId;\n@@ -184,8 +184,8 @@ public void testFoundAllocationAndAllocating() {\n * Tests that when there was a node that previously had the primary, it will be allocated to that same node again.\n */\n public void testPreferAllocatingPreviousPrimary() {\n- String primaryAllocId = Strings.randomBase64UUID();\n- String replicaAllocId = Strings.randomBase64UUID();\n+ String primaryAllocId = UUIDs.randomBase64UUID();\n+ String replicaAllocId = UUIDs.randomBase64UUID();\n RoutingAllocation allocation = 
routingAllocationWithOnePrimaryNoReplicas(yesAllocationDeciders(), false, randomFrom(Version.V_2_0_0, Version.CURRENT), primaryAllocId, replicaAllocId);\n boolean node1HasPrimaryShard = randomBoolean();\n testAllocator.addData(node1, ShardStateMetaData.NO_VERSION, node1HasPrimaryShard ? primaryAllocId : replicaAllocId, node1HasPrimaryShard);", "filename": "core/src/test/java/org/elasticsearch/gateway/PrimaryShardAllocatorTests.java", "status": "modified" }, { "diff": "@@ -20,7 +20,7 @@\n package org.elasticsearch.index;\n \n import org.elasticsearch.cluster.ClusterState;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.test.ESTestCase;\n \n import static org.apache.lucene.util.TestUtil.randomSimpleString;\n@@ -33,7 +33,7 @@ public void testToString() {\n assertEquals(\"[name]\", new Index(\"name\", ClusterState.UNKNOWN_UUID).toString());\n \n Index random = new Index(randomSimpleString(random(), 1, 100),\n- usually() ? Strings.randomBase64UUID(random()) : ClusterState.UNKNOWN_UUID);\n+ usually() ? UUIDs.randomBase64UUID(random()) : ClusterState.UNKNOWN_UUID);\n assertThat(random.toString(), containsString(random.getName()));\n if (ClusterState.UNKNOWN_UUID.equals(random.getUUID())) {\n assertThat(random.toString(), not(containsString(random.getUUID())));", "filename": "core/src/test/java/org/elasticsearch/index/IndexTests.java", "status": "modified" }, { "diff": "@@ -20,10 +20,6 @@\n \n import org.apache.lucene.analysis.MockAnalyzer;\n import org.apache.lucene.codecs.CodecUtil;\n-import org.apache.lucene.codecs.FilterCodec;\n-import org.apache.lucene.codecs.SegmentInfoFormat;\n-import org.apache.lucene.codecs.lucene50.Lucene50SegmentInfoFormat;\n-import org.apache.lucene.codecs.lucene54.Lucene54Codec;\n import org.apache.lucene.document.Document;\n import org.apache.lucene.document.Field;\n import org.apache.lucene.document.SortedDocValuesField;\n@@ -40,7 +36,6 @@\n import org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy;\n import org.apache.lucene.index.NoDeletionPolicy;\n import org.apache.lucene.index.NoMergePolicy;\n-import org.apache.lucene.index.SegmentInfo;\n import org.apache.lucene.index.SegmentInfos;\n import org.apache.lucene.index.SnapshotDeletionPolicy;\n import org.apache.lucene.index.Term;\n@@ -59,7 +54,7 @@\n import org.apache.lucene.util.Version;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.io.stream.InputStreamStreamInput;\n import org.elasticsearch.common.io.stream.OutputStreamStreamOutput;\n import org.elasticsearch.common.lucene.Lucene;\n@@ -91,11 +86,8 @@\n import java.util.List;\n import java.util.Map;\n import java.util.Random;\n-import java.util.Set;\n import java.util.concurrent.atomic.AtomicInteger;\n-import java.util.zip.Adler32;\n \n-import static java.util.Collections.emptyMap;\n import static java.util.Collections.unmodifiableMap;\n import static org.elasticsearch.test.VersionUtils.randomVersion;\n import static org.hamcrest.Matchers.empty;\n@@ -1080,7 +1072,7 @@ public Directory newDirectory() throws IOException {\n Store store = new Store(shardId, INDEX_SETTINGS, directoryService, new DummyShardLock(shardId));\n \n CorruptIndexException exception = new CorruptIndexException(\"foo\", \"bar\");\n- String uuid = Store.CORRUPTED + Strings.randomBase64UUID();\n+ String uuid = Store.CORRUPTED + 
UUIDs.randomBase64UUID();\n try (IndexOutput output = dir.createOutput(uuid, IOContext.DEFAULT)) {\n CodecUtil.writeHeader(output, Store.CODEC, Store.VERSION_STACK_TRACE);\n output.writeString(ExceptionsHelper.detailedMessage(exception, true, 0));", "filename": "core/src/test/java/org/elasticsearch/index/store/StoreTests.java", "status": "modified" }, { "diff": "@@ -22,7 +22,7 @@\n import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.service.ClusterService;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.Engine;\n@@ -54,7 +54,7 @@ public void testModificationPreventsFlushing() throws InterruptedException {\n Map<String, Engine.CommitId> commitIds = SyncedFlushUtil.sendPreSyncRequests(flushService, activeShards, state, shardId);\n assertEquals(\"exactly one commit id\", 1, commitIds.size());\n client().prepareIndex(\"test\", \"test\", \"2\").setSource(\"{}\").get();\n- String syncId = Strings.base64UUID();\n+ String syncId = UUIDs.base64UUID();\n SyncedFlushUtil.LatchedListener<ShardsSyncedFlushResult> listener = new SyncedFlushUtil.LatchedListener<>();\n flushService.sendSyncRequests(syncId, activeShards, state, commitIds, shardId, shardRoutingTable.size(), listener);\n listener.latch.await();\n@@ -174,7 +174,7 @@ public void testFailAfterIntermediateCommit() throws InterruptedException {\n client().prepareIndex(\"test\", \"test\", \"2\").setSource(\"{}\").get();\n }\n client().admin().indices().prepareFlush(\"test\").setForce(true).get();\n- String syncId = Strings.base64UUID();\n+ String syncId = UUIDs.base64UUID();\n final SyncedFlushUtil.LatchedListener<ShardsSyncedFlushResult> listener = new SyncedFlushUtil.LatchedListener();\n flushService.sendSyncRequests(syncId, activeShards, state, commitIds, shardId, shardRoutingTable.size(), listener);\n listener.latch.await();\n@@ -204,7 +204,7 @@ public void testFailWhenCommitIsMissing() throws InterruptedException {\n Map<String, Engine.CommitId> commitIds = SyncedFlushUtil.sendPreSyncRequests(flushService, activeShards, state, shardId);\n assertEquals(\"exactly one commit id\", 1, commitIds.size());\n commitIds.clear(); // wipe it...\n- String syncId = Strings.base64UUID();\n+ String syncId = UUIDs.base64UUID();\n SyncedFlushUtil.LatchedListener<ShardsSyncedFlushResult> listener = new SyncedFlushUtil.LatchedListener();\n flushService.sendSyncRequests(syncId, activeShards, state, commitIds, shardId, shardRoutingTable.size(), listener);\n listener.latch.await();", "filename": "core/src/test/java/org/elasticsearch/indices/flush/SyncedFlushSingleNodeTests.java", "status": "modified" }, { "diff": "@@ -24,7 +24,7 @@\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.text.Text;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.search.SearchContextException;\n@@ -193,7 +193,7 @@ public void testWithSimpleTypes() throws Exception {\n break;\n }\n }\n- values.add(new Text(Strings.randomBase64UUID()));\n+ values.add(new Text(UUIDs.randomBase64UUID()));\n documents.add(values);\n }\n int 
reqSize = randomInt(NUM_DOCS-1);", "filename": "core/src/test/java/org/elasticsearch/search/searchafter/SearchAfterIT.java", "status": "modified" }, { "diff": "@@ -31,7 +31,7 @@\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.common.Priority;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.network.NetworkModule;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.TransportAddress;\n@@ -95,7 +95,7 @@ public Settings transportClientSettings() {\n \n };\n cluster2 = new InternalTestCluster(InternalTestCluster.configuredNodeMode(), randomLong(), createTempDir(), 2, 2,\n- Strings.randomBase64UUID(random()), nodeConfigurationSource, 0, false, SECOND_CLUSTER_NODE_PREFIX, Collections.emptyList(), Function.identity());\n+ UUIDs.randomBase64UUID(random()), nodeConfigurationSource, 0, false, SECOND_CLUSTER_NODE_PREFIX, Collections.emptyList(), Function.identity());\n \n cluster2.beforeTest(random(), 0.1);\n cluster2.ensureAtLeastNumDataNodes(2);", "filename": "core/src/test/java/org/elasticsearch/tribe/TribeIT.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 2.2.0\n\n**JVM version**: 1.8.0_40\n\n**OS version**: OSX and Ubuntu 15.04\n\n**Description of the problem including expected versus actual behavior**:\nUnless I'm completely and utterly blind (which is a distinct possibility), it seems that both indexing and query slowlogs no longer include the name of the index that the operation was running against. This makes them almost entirely useless for us (since we have several hundred indices in our cluster).\n\n**Steps to reproduce**:\n1. Generate some queries / indexing requests that triggers a slowlog.\n2. Look at the slowlog\n3. Scratch your head and wonder why the `type` is present, but not the `index`.\n\n**Provide logs (if relevant)**:\n\nAn example slowlog from 2.2.0:\n\n```\n[2016-03-09 16:26:15,225][TRACE][index.indexing.slowlog.index] took[4.4ms], took_millis[4], type[test], id[AVNaDew0JKS5u4ProrcX], routing[] , source[{\"name\":\"vlucas/phpdotenv\",\"version\":\"1.0.3\",\"type\":\"library\",\"description\":\"Loads environment variables from `.env` to `getenv()`, `$_ENV` and `$_SERVER` automagically.\",\"keywords\":[\"env\",\"dotenv\",\"environment\"],\"homepage\":\"http://github.com/vlucas/phpdotenv\",\"license\":\"BSD\",\"authors\":[{\"name\":\"Vance Lucas\",\"email\":\"vance@vancelucas.com\",\"homepage\":\"http://www.vancelucas.com\"}],\"require\":{\"php\":\">=5.3.2\"},\"require-dev\":{\"phpunit/phpunit\":\"*\"},\"autoload\":{\"psr-0\":{\"Dotenv\":\"src/\"}}}]\n```\n\nAn example slowlog from 1.7.5:\n\n```\n[2016-03-09 16:31:42,346][WARN ][index.indexing.slowlog.index] [Prowler] [slowtest][1] took[95.7ms], took_millis[95], type[test], id[AVNaEumoVIxyp_lAGADK], routing[], source[{\"name\":\"vlucas/phpdotenv\",\"version\":\"1.0.3\",\"type\":\"library\",\"description\":\"Loads environment variables from `.env` to `getenv()`, `$_ENV` and `$_SERVER` automagically.\",\"keywords\":[\"env\",\"dotenv\",\"environment\"],\"homepage\":\"http://github.com/vlucas/phpdotenv\",\"license\":\"BSD\",\"authors\":[{\"name\":\"Vance Lucas\",\"email\":\"vance@vancelucas.com\",\"homepage\":\"http://www.vancelucas.com\"}],\"require\":{\"php\":\">=5.3.2\"},\"require-dev\":{\"phpunit/phpunit\":\"*\"},\"autoload\":{\"psr-0\":{\"Dotenv\":\"src/\"}}}]\n```\n", "comments": [ { "body": "I am looking into this - thanks for reporting\n", "created_at": "2016-03-09T08:33:48Z" }, { "body": "this will be in the upcoming 2.2.1 release - thanks for reporting\n", "created_at": "2016-03-09T09:12:51Z" }, { "body": ":+1: thanks for the über fast response! :)\n", "created_at": "2016-03-09T11:52:21Z" }, { "body": "@s1monw we need to re-open this one, 2.2.1 still has no index name logged in _query_ slowlogs. By the looks of things your commit fixed indexing slowlogs, but I see no mention of query slowlogs.\n", "created_at": "2016-04-06T04:47:04Z" }, { "body": "Need to make the same change for query slowlogs.\n", "created_at": "2016-04-06T11:45:16Z" } ], "number": 17025, "title": "Slowlogs no longer include index name" }
{ "body": "This commits adds the index name as part of the logging message.\nCloses #17025\n", "number": 17818, "review_comments": [ { "body": "Just checking - doesn't SearchContext already give you access to index via searchContext.indexShard().shardId()?\nI haven't tried running it but looking at SearchContext code it looks like there's already a path to this info?\n", "created_at": "2016-04-18T10:48:19Z" } ], "title": "Add missing index name to search slow log." }
{ "commits": [ { "message": "Add missing index name to search slow log.\nThis commits adds the index name as part of the logging message.\nCloses #17025" } ], "files": [ { "diff": "@@ -33,7 +33,7 @@\n /**\n */\n public final class SearchSlowLog implements SearchOperationListener {\n-\n+ private final Index index;\n private boolean reformat;\n \n private long queryWarnThreshold;\n@@ -87,6 +87,8 @@ public SearchSlowLog(IndexSettings indexSettings) {\n this.queryLogger = Loggers.getLogger(INDEX_SEARCH_SLOWLOG_PREFIX + \".query\");\n this.fetchLogger = Loggers.getLogger(INDEX_SEARCH_SLOWLOG_PREFIX + \".fetch\");\n \n+ this.index = indexSettings.getIndex();\n+\n indexSettings.getScopedSettings().addSettingsUpdateConsumer(INDEX_SEARCH_SLOWLOG_REFORMAT, this::setReformat);\n this.reformat = indexSettings.getValue(INDEX_SEARCH_SLOWLOG_REFORMAT);\n \n@@ -120,43 +122,46 @@ private void setLevel(SlowLogLevel level) {\n @Override\n public void onQueryPhase(SearchContext context, long tookInNanos) {\n if (queryWarnThreshold >= 0 && tookInNanos > queryWarnThreshold) {\n- queryLogger.warn(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n+ queryLogger.warn(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n } else if (queryInfoThreshold >= 0 && tookInNanos > queryInfoThreshold) {\n- queryLogger.info(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n+ queryLogger.info(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n } else if (queryDebugThreshold >= 0 && tookInNanos > queryDebugThreshold) {\n- queryLogger.debug(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n+ queryLogger.debug(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n } else if (queryTraceThreshold >= 0 && tookInNanos > queryTraceThreshold) {\n- queryLogger.trace(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n+ queryLogger.trace(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n }\n }\n \n @Override\n public void onFetchPhase(SearchContext context, long tookInNanos) {\n if (fetchWarnThreshold >= 0 && tookInNanos > fetchWarnThreshold) {\n- fetchLogger.warn(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n+ fetchLogger.warn(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n } else if (fetchInfoThreshold >= 0 && tookInNanos > fetchInfoThreshold) {\n- fetchLogger.info(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n+ fetchLogger.info(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n } else if (fetchDebugThreshold >= 0 && tookInNanos > fetchDebugThreshold) {\n- fetchLogger.debug(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n+ fetchLogger.debug(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n } else if (fetchTraceThreshold >= 0 && tookInNanos > fetchTraceThreshold) {\n- fetchLogger.trace(\"{}\", new SlowLogSearchContextPrinter(context, tookInNanos, reformat));\n+ fetchLogger.trace(\"{}\", new SlowLogSearchContextPrinter(index, context, tookInNanos, reformat));\n }\n }\n \n- private static class SlowLogSearchContextPrinter {\n+ static final class SlowLogSearchContextPrinter {\n private final SearchContext context;\n+ private final Index index;\n private final long tookInNanos;\n private final boolean reformat;\n \n- public 
SlowLogSearchContextPrinter(SearchContext context, long tookInNanos, boolean reformat) {\n+ public SlowLogSearchContextPrinter(Index index, SearchContext context, long tookInNanos, boolean reformat) {\n this.context = context;\n+ this.index = index;\n this.tookInNanos = tookInNanos;\n this.reformat = reformat;\n }\n \n @Override\n public String toString() {\n StringBuilder sb = new StringBuilder();\n+ sb.append(index).append(\" \");\n sb.append(\"took[\").append(TimeValue.timeValueNanos(tookInNanos)).append(\"], took_millis[\").append(TimeUnit.NANOSECONDS.toMillis(tookInNanos)).append(\"], \");\n if (context.getQueryShardContext().getTypes() == null) {\n sb.append(\"types[], \");", "filename": "core/src/main/java/org/elasticsearch/index/SearchSlowLog.java", "status": "modified" }, { "diff": "@@ -20,13 +20,126 @@\n package org.elasticsearch.index;\n \n import org.elasticsearch.Version;\n+import org.elasticsearch.action.search.SearchType;\n+import org.elasticsearch.cache.recycler.PageCacheRecycler;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.test.ESTestCase;\n-\n+import org.elasticsearch.common.util.BigArrays;\n+import org.elasticsearch.index.query.QueryShardContext;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.script.ScriptService;\n+import org.elasticsearch.script.Template;\n+import org.elasticsearch.search.Scroll;\n+import org.elasticsearch.search.builder.SearchSourceBuilder;\n+import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.search.internal.ShardSearchRequest;\n+import org.elasticsearch.test.ESSingleNodeTestCase;\n+import org.elasticsearch.test.TestSearchContext;\n+import org.elasticsearch.threadpool.ThreadPool;\n+\n+import java.io.IOException;\n+\n+import static org.hamcrest.Matchers.startsWith;\n+\n+\n+public class SearchSlowLogTests extends ESSingleNodeTestCase {\n+ @Override\n+ protected SearchContext createSearchContext(IndexService indexService) {\n+ BigArrays bigArrays = indexService.getBigArrays();\n+ ThreadPool threadPool = indexService.getThreadPool();\n+ PageCacheRecycler pageCacheRecycler = node().injector().getInstance(PageCacheRecycler.class);\n+ ScriptService scriptService = node().injector().getInstance(ScriptService.class);\n+ return new TestSearchContext(threadPool, pageCacheRecycler, bigArrays, scriptService, indexService) {\n+ @Override\n+ public ShardSearchRequest request() {\n+ return new ShardSearchRequest() {\n+ @Override\n+ public ShardId shardId() {\n+ return null;\n+ }\n+\n+ @Override\n+ public String[] types() {\n+ return new String[0];\n+ }\n+\n+ @Override\n+ public SearchSourceBuilder source() {\n+ return null;\n+ }\n+\n+ @Override\n+ public void source(SearchSourceBuilder source) {\n+\n+ }\n+\n+ @Override\n+ public int numberOfShards() {\n+ return 0;\n+ }\n+\n+ @Override\n+ public SearchType searchType() {\n+ return null;\n+ }\n+\n+ @Override\n+ public String[] filteringAliases() {\n+ return new String[0];\n+ }\n+\n+ @Override\n+ public long nowInMillis() {\n+ return 0;\n+ }\n+\n+ @Override\n+ public Template template() {\n+ return null;\n+ }\n+\n+ @Override\n+ public Boolean requestCache() {\n+ return null;\n+ }\n+\n+ @Override\n+ public Scroll scroll() {\n+ return null;\n+ }\n+\n+ @Override\n+ public void setProfile(boolean profile) {\n+\n+ }\n+\n+ @Override\n+ public boolean isProfile() {\n+ 
return false;\n+ }\n+\n+ @Override\n+ public BytesReference cacheKey() throws IOException {\n+ return null;\n+ }\n+\n+ @Override\n+ public void rewrite(QueryShardContext context) throws IOException {\n+ }\n+ };\n+ }\n+ };\n+ }\n \n-public class SearchSlowLogTests extends ESTestCase {\n+ public void testSlowLogSearchContextPrinterToLog() throws IOException {\n+ IndexService index = createIndex(\"foo\");\n+ // Turning off document logging doesn't log source[]\n+ SearchContext searchContext = createSearchContext(index);\n+ SearchSlowLog.SlowLogSearchContextPrinter p = new SearchSlowLog.SlowLogSearchContextPrinter(index.index(), searchContext, 10, true);\n+ assertThat(p.toString(), startsWith(index.index().toString()));\n+ }\n \n public void testReformatSetting() {\n IndexMetaData metaData = newIndexMeta(\"index\", Settings.builder()", "filename": "core/src/test/java/org/elasticsearch/index/SearchSlowLogTests.java", "status": "modified" } ] }
{ "body": "I have an index `dlstats` that has a single shard with one replica, in a two-node cluster:\n\n```\nindex shard prirep state docs store ip node\ndlstats 0 p STARTED 21 265.6kb 10.0.0.5 Donald Pierce\ndlstats 0 r STARTED 21 265.6kb 10.0.0.5 Peepers\n```\n\nWhen looking at the (blank) termvectors for a doc, I get alternating `took` times. It seems to be an issue with round-robining the request. If I set `_preference:_primary` it still does it:\n\n```\n% for i in `seq 10`; do curl -s foo:secret@localhost:9200/dlstats/blob/2012-12-17/_termvectors\\?_preference:_primary\\&format=yaml | grep -F took; done\ntook: 1\ntook: 1438281967692\ntook: 1\ntook: 1438281967750\ntook: 1\ntook: 1438281967802\ntook: 1\ntook: 1438281967840\ntook: 1\ntook: 1438281967875\n```\n\nBut if I actually turn replicas off, it does not:\n\n```\nindex shard prirep state docs store ip node\ndlstats 0 p STARTED 21 265.6kb 10.0.0.5 Donald Pierce\n```\n\n```\n% for i in `seq 10`; do curl -s foo:secret@localhost:9200/dlstats/blob/2012-12-17/_termvectors\\?_preference:_primary\\&format=yaml | grep -F took; done\ntook: 1\ntook: 1\ntook: 1\ntook: 1\ntook: 1\ntook: 1\ntook: 1\ntook: 1\ntook: 1\ntook: 1\n```\n\nAdding a replica back, does the same thing again:\n\n```\n% for i in `seq 10`; do curl -s foo:secret@localhost:9200/dlstats/blob/2012-12-17/_termvectors\\?_preference:_primary\\&format=yaml | grep -F took; done\ntook: 1\ntook: 1438282196274\ntook: 1\ntook: 1438282196304\ntook: 1\ntook: 1438282196339\ntook: 1\ntook: 1438282196362\ntook: 1\ntook: 1438282196402\n```\n\nThen if I add another node and another replica, the funny `took` seems to happen per replica:\n\n```\n% for i in `seq 10`; do curl -s foo:secret@localhost:9200/dlstats/blob/2012-12-17/_termvectors\\?_preference:_primary\\&format=yaml | grep -F took; done\ntook: 1438282274784\ntook: 1438282274796\ntook: 1\ntook: 1438282274828\ntook: 1438282274841\ntook: 1\ntook: 1438282274873\ntook: 1438282274885\ntook: 1\ntook: 1438282274912\n```\n\nSo maybe there is a bug there and in `*TermVectors*` handling of `_preference`?\n\nES version:\n\n```\n{\n \"name\" : \"Donald Pierce\",\n \"cluster_name\" : \"elasticsearch\",\n \"version\" : {\n \"number\" : \"2.0.0-beta1\",\n \"build_hash\" : \"c315d54c2a6695301512fecdccb8f7de22e3ccfe\",\n \"build_timestamp\" : \"2015-07-16T19:42:13Z\",\n \"build_snapshot\" : true,\n \"lucene_version\" : \"5.2.1\"\n },\n \"tagline\" : \"You Know, for Search\"\n}\n```\n", "comments": [ { "body": "Closed by https://github.com/elastic/elasticsearch/pull/17817\n", "created_at": "2016-05-12T11:57:45Z" }, { "body": "Hi! \nWe are having the same problem in the 2.3.3 version. Is there any posibility of porting #17817 to the 2.x branch?\n", "created_at": "2016-10-07T09:04:04Z" }, { "body": "As 2.x reached end of life, and we are not going to backport this improvement, I am closing this issue.", "created_at": "2018-03-21T17:52:40Z" } ], "number": 12565, "title": "_termvectors `took` parameter alternating between small and large number" }
{ "body": "Fix for #12565\n", "number": 17817, "review_comments": [ { "body": "It's better to use a relative time source like `System#nanoTime` to calculate the length of time instead of an absolute time source (which is subject to crazy things like NTP adjustments, DST modifications, or just end-user changes).\n", "created_at": "2016-04-18T10:28:01Z" }, { "body": "The `Math#max` is unnecessary now as `System#nanoTime` will not go backwards.\n", "created_at": "2016-04-19T02:26:02Z" }, { "body": "This test doesn't protect against insidious bugs. For example, if we modify:\n\n``` java\ntermVectorsResponse.setTookInMillis(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startTime));\n```\n\nto\n\n``` java\ntermVectorsResponse.setTookInMillis((System.nanoTime() - startTime) / 10000000L);\n```\n\nthen we've introduced a unit conversion bug but the test will not capture that because the test will still pass.\n", "created_at": "2016-04-19T15:25:14Z" }, { "body": "Can we make these values random?\n", "created_at": "2016-04-20T18:36:27Z" } ], "title": "Fix calculation of took time of term vectors request" }
{ "commits": [ { "message": "Fix calculation of took time of term vectors request" }, { "message": "Replace currentTimeMillis with nanoTime" }, { "message": "Remove Math#max" }, { "message": "Improving test for took time" }, { "message": "Random start and end time" } ], "files": [ { "diff": "@@ -133,8 +133,6 @@ public void writeTo(StreamOutput out) throws IOException {\n private EnumSet<Flag> flagsEnum = EnumSet.of(Flag.Positions, Flag.Offsets, Flag.Payloads,\n Flag.FieldStatistics);\n \n- long startTime;\n-\n public TermVectorsRequest() {\n }\n \n@@ -174,7 +172,6 @@ public TermVectorsRequest(TermVectorsRequest other) {\n this.realtime = other.realtime();\n this.version = other.version();\n this.versionType = VersionType.fromValue(other.versionType().getValue());\n- this.startTime = other.startTime();\n this.filterSettings = other.filterSettings();\n }\n \n@@ -463,10 +460,6 @@ private void setFlag(Flag flag, boolean set) {\n }\n }\n \n- public long startTime() {\n- return this.startTime;\n- }\n-\n @Override\n public ActionRequestValidationException validate() {\n ActionRequestValidationException validationException = super.validateNonNullIndex();", "filename": "core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequest.java", "status": "modified" }, { "diff": "@@ -326,8 +326,8 @@ private void buildFieldStatistics(XContentBuilder builder, Terms curTerms) throw\n }\n }\n \n- public void updateTookInMillis(long startTime) {\n- this.tookInMillis = Math.max(1, System.currentTimeMillis() - startTime);\n+ public void setTookInMillis(long tookInMillis) {\n+ this.tookInMillis = tookInMillis;\n }\n \n public TimeValue getTook() {\n@@ -337,7 +337,7 @@ public TimeValue getTook() {\n public long getTookInMillis() {\n return tookInMillis;\n }\n- \n+\n private void buildScore(XContentBuilder builder, BoostAttribute boostAtt) throws IOException {\n if (hasScores) {\n builder.field(FieldStrings.SCORE, boostAtt.getBoost());\n@@ -347,7 +347,7 @@ private void buildScore(XContentBuilder builder, BoostAttribute boostAtt) throws\n public boolean isExists() {\n return exists;\n }\n- \n+\n public void setExists(boolean exists) {\n this.exists = exists;\n }", "filename": "core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsResponse.java", "status": "modified" }, { "diff": "@@ -64,7 +64,6 @@ protected void doExecute(final MultiTermVectorsRequest request, final ActionList\n Map<ShardId, MultiTermVectorsShardRequest> shardRequests = new HashMap<>();\n for (int i = 0; i < request.requests.size(); i++) {\n TermVectorsRequest termVectorsRequest = request.requests.get(i);\n- termVectorsRequest.startTime = System.currentTimeMillis();\n termVectorsRequest.routing(clusterState.metaData().resolveIndexRouting(termVectorsRequest.parent(), termVectorsRequest.routing(), termVectorsRequest.index()));\n if (!clusterState.metaData().hasConcreteIndex(termVectorsRequest.index())) {\n responses.set(i, new MultiTermVectorsItemResponse(null, new MultiTermVectorsResponse.Failure(termVectorsRequest.index(),", "filename": "core/src/main/java/org/elasticsearch/action/termvectors/TransportMultiTermVectorsAction.java", "status": "modified" }, { "diff": "@@ -82,7 +82,6 @@ protected MultiTermVectorsShardResponse shardOperation(MultiTermVectorsShardRequ\n TermVectorsRequest termVectorsRequest = request.requests.get(i);\n try {\n TermVectorsResponse termVectorsResponse = TermVectorsService.getTermVectors(indexShard, termVectorsRequest);\n- termVectorsResponse.updateTookInMillis(termVectorsRequest.startTime());\n 
response.add(request.locations.get(i), termVectorsResponse);\n } catch (Throwable t) {\n if (TransportActions.isShardNotAvailableException(t)) {", "filename": "core/src/main/java/org/elasticsearch/action/termvectors/TransportShardMultiTermsVectorAction.java", "status": "modified" }, { "diff": "@@ -44,12 +44,6 @@ public class TransportTermVectorsAction extends TransportSingleShardAction<TermV\n \n private final IndicesService indicesService;\n \n- @Override\n- protected void doExecute(TermVectorsRequest request, ActionListener<TermVectorsResponse> listener) {\n- request.startTime = System.currentTimeMillis();\n- super.doExecute(request, listener);\n- }\n-\n @Inject\n public TransportTermVectorsAction(Settings settings, ClusterService clusterService, TransportService transportService,\n IndicesService indicesService, ThreadPool threadPool, ActionFilters actionFilters,\n@@ -85,9 +79,7 @@ protected void resolveRequest(ClusterState state, InternalRequest request) {\n protected TermVectorsResponse shardOperation(TermVectorsRequest request, ShardId shardId) {\n IndexService indexService = indicesService.indexServiceSafe(shardId.getIndex());\n IndexShard indexShard = indexService.getShard(shardId.id());\n- TermVectorsResponse response = TermVectorsService.getTermVectors(indexShard, request);\n- response.updateTookInMillis(request.startTime());\n- return response;\n+ return TermVectorsService.getTermVectors(indexShard, request);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/termvectors/TransportTermVectorsAction.java", "status": "modified" }, { "diff": "@@ -60,6 +60,8 @@\n import java.util.Map;\n import java.util.Set;\n import java.util.TreeMap;\n+import java.util.concurrent.TimeUnit;\n+import java.util.function.LongSupplier;\n \n import static org.elasticsearch.index.mapper.SourceToParse.source;\n \n@@ -72,6 +74,11 @@ public class TermVectorsService {\n private TermVectorsService() {}\n \n public static TermVectorsResponse getTermVectors(IndexShard indexShard, TermVectorsRequest request) {\n+ return getTermVectors(indexShard, request, System::nanoTime);\n+ }\n+\n+ static TermVectorsResponse getTermVectors(IndexShard indexShard, TermVectorsRequest request, LongSupplier nanoTimeSupplier) {\n+ final long startTime = nanoTimeSupplier.getAsLong();\n final TermVectorsResponse termVectorsResponse = new TermVectorsResponse(indexShard.shardId().getIndex().getName(), request.type(), request.id());\n final Term uidTerm = new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(request.type(), request.id()));\n \n@@ -141,6 +148,7 @@ else if (docIdAndVersion != null) {\n // write term vectors\n termVectorsResponse.setFields(termVectorsByField, request.selectedFields(), request.getFlags(), topLevelFields, dfs, termVectorsFilter);\n }\n+ termVectorsResponse.setTookInMillis(TimeUnit.NANOSECONDS.toMillis(nanoTimeSupplier.getAsLong() - startTime));\n } catch (Throwable ex) {\n throw new ElasticsearchException(\"failed to execute term vector request\", ex);\n } finally {", "filename": "core/src/main/java/org/elasticsearch/index/termvectors/TermVectorsService.java", "status": "modified" }, { "diff": "@@ -0,0 +1,73 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.termvectors;\n+\n+import org.elasticsearch.action.termvectors.TermVectorsRequest;\n+import org.elasticsearch.action.termvectors.TermVectorsResponse;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.test.ESSingleNodeTestCase;\n+\n+import java.util.List;\n+import java.util.concurrent.TimeUnit;\n+import java.util.stream.Stream;\n+\n+import static java.lang.Math.abs;\n+import static java.util.stream.Collectors.toList;\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.notNullValue;\n+\n+public class TermVectorsServiceTests extends ESSingleNodeTestCase {\n+\n+ public void testTook() throws Exception {\n+ XContentBuilder mapping = jsonBuilder()\n+ .startObject()\n+ .startObject(\"type1\")\n+ .startObject(\"properties\")\n+ .startObject(\"field\")\n+ .field(\"type\", \"text\")\n+ .field(\"term_vector\", \"with_positions_offsets_payloads\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ createIndex(\"test\", Settings.EMPTY, \"type1\", mapping);\n+ ensureGreen();\n+\n+ client().prepareIndex(\"test\", \"type1\", \"0\").setSource(\"field\", \"foo bar\").setRefresh(true).execute().get();\n+\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexService test = indicesService.indexService(resolveIndex(\"test\"));\n+ IndexShard shard = test.getShardOrNull(0);\n+ assertThat(shard, notNullValue());\n+\n+ List<Long> longs = Stream.of(abs(randomLong()), abs(randomLong())).sorted().collect(toList());\n+\n+ TermVectorsRequest request = new TermVectorsRequest(\"test\", \"type1\", \"0\");\n+ TermVectorsResponse response = TermVectorsService.getTermVectors(shard, request, longs.iterator()::next);\n+\n+ assertThat(response, notNullValue());\n+ assertThat(response.getTookInMillis(), equalTo(TimeUnit.NANOSECONDS.toMillis(longs.get(1) - longs.get(0))));\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/index/termvectors/TermVectorsServiceTests.java", "status": "added" } ] }
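The test above feeds a pre-sorted pair of longs through `longs.iterator()::next` as the time source, which is what makes the `took` assertion exact. A minimal, runnable distillation of that injectable-clock pattern, with illustrative names only: production callers pass `System::nanoTime`, tests pass a canned sequence.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.concurrent.TimeUnit;
import java.util.function.LongSupplier;

public class InjectableClockSketch {

    // Production code would pass System::nanoTime; tests pass a fixed sequence of readings.
    static long timedMillis(Runnable work, LongSupplier nanoClock) {
        long start = nanoClock.getAsLong();
        work.run();
        return TimeUnit.NANOSECONDS.toMillis(nanoClock.getAsLong() - start);
    }

    public static void main(String[] args) {
        long real = timedMillis(() -> { /* no-op */ }, System::nanoTime);

        Iterator<Long> canned = Arrays.asList(3_000_000L, 10_000_000L).iterator();
        long deterministic = timedMillis(() -> { /* no-op */ }, canned::next);

        System.out.println(real);          // whatever the no-op took, usually 0
        System.out.println(deterministic); // always 7
    }
}
```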
{ "body": "I'm trying to get `inner_hits` to work on an a two level deep `has_child` query, but the grandchild hits just seem to return an empty array. If I specify the inner hits at the top level I get the results I want, but its rather too slow. Using inner hits on the has child query seems much faster, but doesn't return the grandchild hits.\n\nBelow is an example query:\n\n``` json\n{\n \"from\" : 0,\n \"size\" : 25,\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"filtered\" : {\n \"query\" : {\n \"multi_match\" : {\n \"query\" : \"asia\",\n \"fields\" : [ \"_search\" ],\n \"operator\" : \"AND\",\n \"analyzer\" : \"library_synonyms\",\n \"fuzziness\" : \"1\"\n }\n },\n \"filter\" : {\n \"and\" : {\n \"filters\" : [ {\n \"terms\" : {\n \"range\" : [ \"Global\" ]\n }\n } ]\n }\n }\n }\n },\n \"child_type\" : \"document-ref\",\n \"inner_hits\" : {\n \"name\" : \"document-ref\"\n }\n }\n },\n \"child_type\" : \"class\",\n \"inner_hits\" : {\n \"size\" : 1000,\n \"_source\" : false,\n \"fielddata_fields\" : [ \"class\" ],\n \"name\" : \"class\"\n }\n }\n },\n \"fielddata_fields\" : [ \"name\" ]\n}\n```\n\nThe `document-ref` inner hits just always returns an empty array. Should this work (and, if so, any ideas why it isn't?), or is it beyond the means of what inner hits can currently do?\n", "comments": [ { "body": "I've created a simpler test case for this, and it seems a little clearer that this doesn't currently work.\n\n**Add mappings:**\n\n```\ncurl -XPOST 'http://localhost:9200/grandchildren' -d '{\n \"mappings\" : {\n \"parent\" : {\n \"properties\" : {\n \"parent-name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\"\n }\n }\n },\n \"child\" : {\n \"_parent\" : {\n \"type\" : \"parent\"\n },\n \"_routing\" : {\n \"required\" : true\n },\n \"properties\" : {\n \"parent-name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\"\n }\n }\n },\n \"grandchild\" : {\n \"_parent\" : {\n \"type\" : \"child\"\n },\n \"_routing\" : {\n \"required\" : true\n },\n \"properties\" : {\n \"grandchild-name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\"\n }\n }\n }\n }\n}'\n```\n\n**Populate:**\n\n```\ncurl -XPOST 'http://localhost:9200/grandchildren/parent/parent' -d '{ \"parent-name\" : \"Parent\" }'\ncurl -XPOST 'http://localhost:9200/grandchildren/child/child?parent=parent&routing=parent' -d '{ \"child-name\" : \"Child\" }'\ncurl -XPOST 'http://localhost:9200/grandchildren/grandchild/grandchild?parent=child&routing=parent' -d '{ \"grandchild-name\" : \"Grandchild\" }'\n```\n\n**Query:**\n\n```\ncurl -XGET 'http://localhost:9200/grandchildren/_search?pretty' -d '{\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"match_all\" : {}\n },\n \"child_type\" : \"grandchild\",\n \"inner_hits\" : {\n \"name\" : \"grandchild\"\n }\n }\n },\n \"child_type\" : \"child\",\n \"inner_hits\" : {\n \"name\" : \"child\"\n }\n }\n }\n}'\n```\n\n**Result:**\n\n```\n{\n \"took\" : 2,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"parent\",\n \"_id\" : \"parent\",\n \"_score\" : 1.0,\n \"_source\":{ \"parent-name\" : \"Parent\" },\n \"inner_hits\" : {\n \"grandchild\" : {\n \"hits\" : {\n \"total\" : 0,\n \"max_score\" : null,\n \"hits\" : [ ]\n }\n },\n \"child\" : {\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n 
\"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"child\",\n \"_id\" : \"child\",\n \"_score\" : 1.0,\n \"_source\":{ \"child-name\" : \"Child\" }\n } ]\n }\n }\n }\n } ]\n }\n}\n```\n\nNot only is the grandchild hits empty, but they're also not nested within the child hits, so aren't going to give me what I want anyway. I'm not sure what the intended/expected behaviour would be here, but guess I need to try something else for now.\n", "created_at": "2015-05-12T13:52:39Z" }, { "body": "Above with inner hits at top-level:\n\n**Query:**\n\n```\ncurl -XGET 'http://localhost:9200/grandchildren/_search?pretty' -d '{\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"match_all\" : {}\n },\n \"child_type\" : \"grandchild\"\n }\n },\n \"child_type\" : \"child\"\n }\n },\n \"inner_hits\" : {\n \"child\" : {\n \"type\" : {\n \"child\" : {\n \"inner_hits\" : {\n \"child\" : {\n \"type\" : {\n \"grandchild\" : {}\n }\n }\n }\n }\n }\n }\n }\n}'\n```\n\n**Result:**\n\n```\n{\n \"took\" : 3,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"parent\",\n \"_id\" : \"parent\",\n \"_score\" : 1.0,\n \"_source\":{ \"parent-name\" : \"Parent\" },\n \"inner_hits\" : {\n \"child\" : {\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"child\",\n \"_id\" : \"child\",\n \"_score\" : 1.0,\n \"_source\":{ \"child-name\" : \"Child\" },\n \"inner_hits\" : {\n \"child\" : {\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"grandchild\",\n \"_id\" : \"grandchild\",\n \"_score\" : 1.0,\n \"_source\":{ \"grandchild-name\" : \"Grandchild\" }\n } ]\n }\n }\n }\n } ]\n }\n }\n }\n } ]\n }\n}\n```\n\nHere the grandchild inner hit is correctly found, and nested\n", "created_at": "2015-05-12T14:03:05Z" }, { "body": "[I'm using 1.5.2]\n", "created_at": "2015-05-12T14:12:57Z" }, { "body": "I managed to track down why this wasn't working. When parsing 'has child' queries, inner hits were always being added to the `parseContext`, and never as child inner hits to the parent inner hits.\n\nI've put together a very quick and dirty fix for this (https://github.com/lukens/elasticsearch/commit/fb22d622e7b24074b5f4fe3e26cffd4c38cff75b) which has got it working for my case, but I don't think is suitable for inclusion in a release.\n\nIssues I see with my with my fix:\n1. It's just messy, it doesn't really fix the issue, but just cleans up after it. Children still add their inner hits to the `parseContext`, the parent just removes them again afterwards, and adds them as children to its own inner hits.\n2. The parent removes them again afterwards by mutating a Map it gets from the current `SearchContext`'s `InnerHitsContext`. This is obviously bad and messy, and would be broken if `InnerHitsContext` was changed to return a copy of the map, rather than the map itself. Nasty dependencies between classes.\n3. I think a child can specify inner hits even if a parent doesn't. These would currently get lost. I'm not sure what the behaviour should be here, it should probably be considered an invalid query.\n4. If this was done properly, descendants at different levels could add inner hits with the same name, whereas currently this could cause issues. 
Maybe you shouldn't be able to have the same name at different levels, but if implemented correctly, there should be no need to enforce this. \n\nI considered submitting as a pull request, but felt it was far too rough and ready.\n", "created_at": "2015-05-13T12:44:35Z" }, { "body": "@lukens If you want nested inner hits then you need to use the top level inner hits, the inner_hits on a query doesn't support nesting. The fact that your grandparents inner hits is empty is clearly a bug, thanks for bringing this up!\n", "created_at": "2015-05-13T16:41:55Z" }, { "body": "Hi, it's grandchild, rather than grandparent, that isn't working. Though you also seem to be suggesting it shouldn't be. Either way, the change I committed shows that it can work, my change just isn't a very nice way to make it work.\n", "created_at": "2015-05-14T09:30:04Z" }, { "body": "Ah, or are you saying it should work, but just shouldn't be nested? I'm not really sure what the point of it would be if it wasn't nested, though that may just be because it doesn't fit my use case, and I can't think of a use case where it would be useful.\n\nI think it would be good if nesting did work, as that would still allow either use case, really.\n\nThe problem with top level inner_hits is that I have to apply the query once in the has_child query, and then again in the inner_hits query, which makes everything slower than it would otherwise need to be.\n", "created_at": "2015-05-14T09:36:15Z" }, { "body": "@lukens yes, I meant grandchild. The reason it is a bug is, because the inner_hits in your response shouldn't be empty.\n\nThe top level inner hits and inner hits defined on a query internally to ES is the same thing and either way of defining inner hits will yield the same performance in terms of query time. The nested inner hits support in the query dsl was left out to reduce complexity and most of the times there is just a single level relationship. Obviously that means for your use case that you need to use top level inner hits. \n\nMaybe the inner hits support in the query dsl should support multi level relationships too, but I think the parsing logic shouldn't be get super complex because this. I need to think more about this. Like you said if it the grandchild isn't nested its hits in the response, then it isn't very helpful.\n\nThe only overhead of top level inner hits is that queries are defined twice, so the request body gets larger. If you're concerned with that, you can consider using search templates, so that you don't you reduce the amount of data send to your cluster.\n", "created_at": "2015-05-14T10:20:30Z" }, { "body": "Isn't there also an overhead in running the query twice with the top level inner_hits, or does Elasticsearch do something clever so that it only gets run once? (and, if so, does it need to be specified again in the inner hits?)\n\nI've not yet come across search templates, are these compatible with highly dynamic searches?\n\nI'd like try and spend a bit more time on getting nesting working in a less hacky manner, but am under enormous pressure just to get a project completed at the moment. For the case when grandchildren aren't nested in the query, I guess the simple solution is to also not nest them in the response. 
I don't think this would overcomplicate the parsing too much.\n\nI have a fairly nice way to handle all this in my mind, just not the time to implement it at the moment.\n", "created_at": "2015-05-14T10:46:55Z" }, { "body": "OK, switching to top level hits doesn't seem to have affected performance, so I can work with that for now. It had seemed much slower before, but once I'd actually got inner_hits on the query working, that ended up just as slow, until I tweaked some other things.\n\nThe \"fix\" I've committed above is probably only really of any use as a reference if you want to go with nesting, as it doesn't solve the problem when not nested. I expect there's something in the inner workings of inner_hits that currently relies on them being nested.\n", "created_at": "2015-05-14T11:12:21Z" }, { "body": "> Isn't there also an overhead in running the query twice with the top level inner_hits, or does Elasticsearch do something clever so that it only gets run once? (and, if so, does it need to be specified again in the inner hits?)\n\ninner_hits runs as part of the fetch phase and always executes an additional search to fetch the inner hits. The search being executed is cheap. It only runs an a single shard and just runs a query that fetches top child docs that matches with `_parent:[parent_id]` (all docs associated with parent `parent_id`) and the inner query defined in the `has_child` query. This is a query that ES (actually Lucene) can execute relatively quickly. This mini search is executed for each hit being returned.\n\n> I've not yet come across search templates, are these compatible with highly dynamic searches?\n\nYes, the dynamic part of the search request can be templated.\n\n> The \"fix\" I've committed above is probably only really of any use as a reference if you want to go with nesting, as it doesn't solve the problem when not nested. I expect there's something in the inner workings of inner_hits that currently relies on them being nested.\n\nYes, the inner hits features relies on the fact that grandchild and child are nested. When using the top-level inner hits notation this works out, but when using inner_hits as part of the query dsl this doesn't work out, because grandchild inner hit definition isn't nested under the child inner hit definition.\n\nIn order to fix this properly the query dsl parsing logic should just support nested inner hits. I think the format doesn't need to change in order to support this. Just because the fact the two `has_child` queries are nested should be enough for automatically nest the two inner hit definitions.\n", "created_at": "2015-05-15T09:55:48Z" }, { "body": "Allso seeing this issue, in my case multi-level nested documents (as in https://github.com/elastic/elasticsearch/issues/13064). Would be great to get a solution to this.\n", "created_at": "2015-09-01T23:28:25Z" }, { "body": "Will this issue be fixed at 2.x? Really looking forward to see the nested inner hits in query dsl.\n", "created_at": "2015-11-02T11:08:31Z" }, { "body": "When will this be fixed, its a blocker for us !! 
\n", "created_at": "2015-11-26T13:49:40Z" }, { "body": "+1\n", "created_at": "2015-12-20T13:30:05Z" }, { "body": "+1\n", "created_at": "2016-01-09T11:29:34Z" }, { "body": "@martijnvg just pinging you about this as a reminder\n", "created_at": "2016-01-18T19:16:53Z" }, { "body": "+1\n", "created_at": "2016-02-01T13:21:45Z" }, { "body": "+1\n", "created_at": "2016-03-01T09:54:42Z" }, { "body": "Hi Everyone,\r\nwhere is document data ", "created_at": "2017-04-27T05:37:57Z" }, { "body": "It looks like this issue is still unsolved, at least in elasticsearch 7.1", "created_at": "2020-02-28T03:06:16Z" } ], "number": 11118, "title": "has_child and inner_hits for grandchild hit doesn't work" }
{ "body": "Also Fixed a limitation that prevent from hierarchical inner hits be defined in query dsl.\n\nCloses #11118\n", "number": 17816, "review_comments": [ { "body": "Not sure we need this. For instance we don't allow to remove sorts on a SearchRequestBuilder.\n", "created_at": "2016-04-26T07:39:47Z" }, { "body": "Could we avoid modifying the object that is provided?\n", "created_at": "2016-04-26T07:40:11Z" }, { "body": "similar concern about modifying the value that is set\n", "created_at": "2016-04-26T07:41:05Z" }, { "body": "All these instanceof calls look fragile to me. I'm wondering that query builders could have an API to expose their sum query builders instead?\n", "created_at": "2016-04-26T08:26:30Z" }, { "body": "Hmm thinking about it more I think I think the instanceof way would be fine if we had a way to fail if inner hits are defined below an unsupported query?\n", "created_at": "2016-04-26T08:31:58Z" }, { "body": "true, this would make the result clearer, instead of silently not inlining the child inner hits and error would be thrown.\n", "created_at": "2016-04-26T08:56:27Z" }, { "body": "here too I think we should avoid modifying query builders that have been provided by the user\n", "created_at": "2016-04-26T08:58:43Z" }, { "body": "what about filter clauses?\n", "created_at": "2016-04-26T15:02:20Z" }, { "body": "these are provided as constructor argument.\n", "created_at": "2016-04-26T15:03:32Z" }, { "body": "ohhhhhh! :)\n", "created_at": "2016-04-26T15:04:50Z" }, { "body": "It seem to me that it would be possible to not call `setParentChildType` here, but to make a copy in `extractInnerHitBuilders` and call `setParentChildType` there on the copy? This way we would never modify a setter argument?\n", "created_at": "2016-04-28T09:16:29Z" }, { "body": "and we could also make the `setNestedPath` and `setParentChildType` methods pkg-private?\n", "created_at": "2016-04-28T09:17:36Z" }, { "body": "can you throw an exception in the else clause, eg. \"All queries must extend AbstractQueryBuilder but ...\"\n", "created_at": "2016-04-28T09:18:27Z" }, { "body": "do we need to deep copy some objects in that list?\n", "created_at": "2016-04-28T09:21:51Z" }, { "body": "also can we have a test for this method? Something that creates a random InnerHitBuilder, makes a copy, ensure that equals returns true, then modifies the copy and makes sure equals return false?\n", "created_at": "2016-04-28T09:23:48Z" }, { "body": "s/extract/extracts/\n", "created_at": "2016-04-29T07:53:19Z" }, { "body": "are the NESTED_PATH_FIELD and PARENT_CHILD_TYPE_FIELD fields still useful? I would have expected them to be useless now that the top level syntax is gone?\n", "created_at": "2016-04-29T08:02:02Z" }, { "body": "can we get rid of setParentChildType and setNestedPath now that they seem to be set through the constructore above?\n", "created_at": "2016-04-29T08:04:20Z" }, { "body": "It is used in the `build(...)` method to decide what sub search context implementation to build.\n", "created_at": "2016-04-29T08:06:21Z" } ], "title": "Drop top level inner hits in favour of inner hits defined in the query dsl" }
{ "commits": [ { "message": "Drop top level inner hits in favour of inner hits defined in the query dsl.\n\nFix a limitation that prevent from hierarchical inner hits be defined in query dsl.\n\nRemoved the nested_path, parent_child_type and query options from inner hits dsl. These options are only set by ES\nupon parsing the has_child, has_parent and nested queries are using their respective query builders.\n\nThese options are still used internally, when these options are set a new private copy is created based on the\nprovided InnerHitBuilder and configuring either nested_path or parent_child_type and the inner query of the query builder\nbeing used.\n\nCloses #11118" } ], "files": [ { "diff": "@@ -31,7 +31,6 @@\n import org.elasticsearch.search.aggregations.AggregatorBuilder;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilder;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n-import org.elasticsearch.index.query.support.InnerHitsBuilder;\n import org.elasticsearch.search.highlight.HighlightBuilder;\n import org.elasticsearch.search.rescore.RescoreBuilder;\n import org.elasticsearch.search.sort.SortBuilder;\n@@ -400,11 +399,6 @@ public SearchRequestBuilder suggest(SuggestBuilder suggestBuilder) {\n return this;\n }\n \n- public SearchRequestBuilder innerHits(InnerHitsBuilder innerHitsBuilder) {\n- sourceBuilder().innerHits(innerHitsBuilder);\n- return this;\n- }\n-\n /**\n * Clears all rescorers on the builder and sets the first one. To use multiple rescore windows use\n * {@link #addRescorer(org.elasticsearch.search.rescore.RescoreBuilder, int)}.", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java", "status": "modified" }, { "diff": "@@ -36,6 +36,7 @@\n import java.util.ArrayList;\n import java.util.Collection;\n import java.util.List;\n+import java.util.Map;\n import java.util.Objects;\n \n /**\n@@ -273,6 +274,15 @@ protected QueryBuilder<?> doRewrite(QueryRewriteContext queryShardContext) throw\n return this;\n }\n \n+ /**\n+ * For internal usage only!\n+ *\n+ * Extracts the inner hits from the query tree.\n+ * While it extracts inner hits, child inner hits are inlined into the inner hit builder they belong to.\n+ */\n+ protected void extractInnerHitBuilders(Map<String, InnerHitBuilder> innerHits) {\n+ }\n+\n // Like Objects.requireNotNull(...) 
but instead throws a IllegalArgumentException\n protected static <T> T requireValue(T value, String message) {\n if (value == null) {", "filename": "core/src/main/java/org/elasticsearch/index/query/AbstractQueryBuilder.java", "status": "modified" }, { "diff": "@@ -35,6 +35,7 @@\n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.List;\n+import java.util.Map;\n import java.util.Objects;\n import java.util.function.Consumer;\n \n@@ -495,6 +496,17 @@ protected QueryBuilder<?> doRewrite(QueryRewriteContext queryRewriteContext) thr\n return this;\n }\n \n+ @Override\n+ protected void extractInnerHitBuilders(Map<String, InnerHitBuilder> innerHits) {\n+ List<QueryBuilder<?>> clauses = new ArrayList<>(filter());\n+ clauses.addAll(must());\n+ clauses.addAll(should());\n+ // no need to include must_not (since there will be no hits for it)\n+ for (QueryBuilder<?> clause : clauses) {\n+ InnerHitBuilder.extractInnerHits(clause, innerHits);\n+ }\n+ }\n+\n private static boolean rewriteClauses(QueryRewriteContext queryRewriteContext, List<QueryBuilder<?>> builders,\n Consumer<QueryBuilder<?>> consumer) throws IOException {\n boolean changed = false;", "filename": "core/src/main/java/org/elasticsearch/index/query/BoolQueryBuilder.java", "status": "modified" }, { "diff": "@@ -29,6 +29,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n \n import java.io.IOException;\n+import java.util.Map;\n import java.util.Objects;\n \n /**\n@@ -235,4 +236,10 @@ protected QueryBuilder<?> doRewrite(QueryRewriteContext queryRewriteContext) thr\n }\n return this;\n }\n+\n+ @Override\n+ protected void extractInnerHitBuilders(Map<String, InnerHitBuilder> innerHits) {\n+ InnerHitBuilder.extractInnerHits(positiveQuery, innerHits);\n+ InnerHitBuilder.extractInnerHits(negativeQuery, innerHits);\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/BoostingQueryBuilder.java", "status": "modified" }, { "diff": "@@ -29,6 +29,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n \n import java.io.IOException;\n+import java.util.Map;\n import java.util.Objects;\n \n /**\n@@ -169,4 +170,9 @@ protected QueryBuilder<?> doRewrite(QueryRewriteContext queryRewriteContext) thr\n }\n return this;\n }\n+\n+ @Override\n+ protected void extractInnerHitBuilders(Map<String, InnerHitBuilder> innerHits) {\n+ InnerHitBuilder.extractInnerHits(filterBuilder, innerHits);\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/ConstantScoreQueryBuilder.java", "status": "modified" }, { "diff": "@@ -38,10 +38,10 @@\n import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n-import org.elasticsearch.index.query.support.InnerHitBuilder;\n \n import java.io.IOException;\n import java.util.Locale;\n+import java.util.Map;\n import java.util.Objects;\n \n /**\n@@ -151,9 +151,7 @@ public InnerHitBuilder innerHit() {\n }\n \n public HasChildQueryBuilder innerHit(InnerHitBuilder innerHit) {\n- innerHit.setParentChildType(type);\n- innerHit.setQuery(query);\n- this.innerHitBuilder = innerHit;\n+ this.innerHitBuilder = new InnerHitBuilder(Objects.requireNonNull(innerHit), query, type);\n return this;\n }\n \n@@ -274,8 +272,11 @@ public static HasChildQueryBuilder fromXContent(QueryParseContext parseContext)\n }\n }\n }\n- HasChildQueryBuilder hasChildQueryBuilder = new HasChildQueryBuilder(childType, iqb, minChildren, maxChildren,\n- 
scoreMode, innerHitBuilder);\n+ HasChildQueryBuilder hasChildQueryBuilder = new HasChildQueryBuilder(childType, iqb, scoreMode);\n+ if (innerHitBuilder != null) {\n+ hasChildQueryBuilder.innerHit(innerHitBuilder);\n+ }\n+ hasChildQueryBuilder.minMaxChildren(minChildren, maxChildren);\n hasChildQueryBuilder.queryName(queryName);\n hasChildQueryBuilder.boost(boost);\n hasChildQueryBuilder.ignoreUnmapped(ignoreUnmapped);\n@@ -337,10 +338,6 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n if (parentFieldMapper.active() == false) {\n throw new QueryShardException(context, \"[\" + NAME + \"] _parent field has no parent type configured\");\n }\n- if (innerHitBuilder != null) {\n- context.addInnerHit(innerHitBuilder);\n- }\n-\n String parentType = parentFieldMapper.type();\n DocumentMapper parentDocMapper = context.getMapperService().documentMapper(parentType);\n if (parentDocMapper == null) {\n@@ -477,4 +474,11 @@ protected QueryBuilder<?> doRewrite(QueryRewriteContext queryRewriteContext) thr\n }\n return this;\n }\n+\n+ @Override\n+ protected void extractInnerHitBuilders(Map<String, InnerHitBuilder> innerHits) {\n+ if (innerHitBuilder != null) {\n+ innerHitBuilder.inlineInnerHits(innerHits);\n+ }\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/HasChildQueryBuilder.java", "status": "modified" }, { "diff": "@@ -33,10 +33,10 @@\n import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n-import org.elasticsearch.index.query.support.InnerHitBuilder;\n \n import java.io.IOException;\n import java.util.HashSet;\n+import java.util.Map;\n import java.util.Objects;\n import java.util.Set;\n \n@@ -127,9 +127,7 @@ public InnerHitBuilder innerHit() {\n }\n \n public HasParentQueryBuilder innerHit(InnerHitBuilder innerHit) {\n- innerHit.setParentChildType(type);\n- innerHit.setQuery(query);\n- this.innerHit = innerHit;\n+ this.innerHit = new InnerHitBuilder(innerHit, query, type);\n return this;\n }\n \n@@ -175,10 +173,6 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n }\n }\n \n- if (innerHit != null) {\n- context.addInnerHit(innerHit);\n- }\n-\n Set<String> childTypes = new HashSet<>();\n ParentChildIndexFieldData parentChildIndexFieldData = null;\n for (DocumentMapper documentMapper : context.getMapperService().docMappers(false)) {\n@@ -282,8 +276,14 @@ public static HasParentQueryBuilder fromXContent(QueryParseContext parseContext)\n }\n }\n }\n- return new HasParentQueryBuilder(parentType, iqb, score, innerHits).ignoreUnmapped(ignoreUnmapped).queryName(queryName)\n+ HasParentQueryBuilder queryBuilder = new HasParentQueryBuilder(parentType, iqb, score)\n+ .ignoreUnmapped(ignoreUnmapped)\n+ .queryName(queryName)\n .boost(boost);\n+ if (innerHits != null) {\n+ queryBuilder.innerHit(innerHits);\n+ }\n+ return queryBuilder;\n }\n \n @Override\n@@ -313,4 +313,11 @@ protected QueryBuilder<?> doRewrite(QueryRewriteContext queryShardContext) throw\n }\n return this;\n }\n+\n+ @Override\n+ protected void extractInnerHitBuilders(Map<String, InnerHitBuilder> innerHits) {\n+ if (innerHit!= null) {\n+ innerHit.inlineInnerHits(innerHits);\n+ }\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/HasParentQueryBuilder.java", "status": "modified" }, { "diff": "@@ -32,9 +32,9 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import 
org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n-import org.elasticsearch.index.query.support.InnerHitBuilder;\n \n import java.io.IOException;\n+import java.util.Map;\n import java.util.Objects;\n \n public class NestedQueryBuilder extends AbstractQueryBuilder<NestedQueryBuilder> {\n@@ -109,9 +109,7 @@ public InnerHitBuilder innerHit() {\n }\n \n public NestedQueryBuilder innerHit(InnerHitBuilder innerHit) {\n- innerHit.setNestedPath(path);\n- innerHit.setQuery(query);\n- this.innerHitBuilder = innerHit;\n+ this.innerHitBuilder = new InnerHitBuilder(innerHit, path, query);\n return this;\n }\n \n@@ -196,8 +194,14 @@ public static NestedQueryBuilder fromXContent(QueryParseContext parseContext) th\n }\n }\n }\n- return new NestedQueryBuilder(path, query, scoreMode, innerHitBuilder).ignoreUnmapped(ignoreUnmapped).queryName(queryName)\n+ NestedQueryBuilder queryBuilder = new NestedQueryBuilder(path, query, scoreMode)\n+ .ignoreUnmapped(ignoreUnmapped)\n+ .queryName(queryName)\n .boost(boost);\n+ if (innerHitBuilder != null) {\n+ queryBuilder.innerHit(innerHitBuilder);\n+ }\n+ return queryBuilder;\n }\n \n @Override\n@@ -236,9 +240,6 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n final Query childFilter;\n final Query innerQuery;\n ObjectMapper objectMapper = context.nestedScope().getObjectMapper();\n- if (innerHitBuilder != null) {\n- context.addInnerHit(innerHitBuilder);\n- }\n if (objectMapper == null) {\n parentFilter = context.bitsetFilter(Queries.newNonNestedFilter());\n } else {\n@@ -265,4 +266,11 @@ protected QueryBuilder<?> doRewrite(QueryRewriteContext queryRewriteContext) thr\n }\n return this;\n }\n+\n+ @Override\n+ protected void extractInnerHitBuilders(Map<String, InnerHitBuilder> innerHits) {\n+ if (innerHitBuilder != null) {\n+ innerHitBuilder.inlineInnerHits(innerHits);\n+ }\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/NestedQueryBuilder.java", "status": "modified" }, { "diff": "@@ -57,12 +57,10 @@\n import org.elasticsearch.index.mapper.core.TextFieldMapper;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n import org.elasticsearch.index.percolator.PercolatorQueryCache;\n-import org.elasticsearch.index.query.support.InnerHitBuilder;\n import org.elasticsearch.index.query.support.NestedScope;\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.indices.query.IndicesQueriesRegistry;\n import org.elasticsearch.script.ScriptService;\n-import org.elasticsearch.search.fetch.innerhits.InnerHitsContext;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.lookup.SearchLookup;\n \n@@ -185,16 +183,6 @@ public boolean isFilter() {\n return isFilter;\n }\n \n- public void addInnerHit(InnerHitBuilder innerHitBuilder) throws IOException {\n- SearchContext sc = SearchContext.current();\n- if (sc == null) {\n- throw new QueryShardException(this, \"inner_hits unsupported\");\n- }\n-\n- InnerHitsContext innerHitsContext = sc.innerHits();\n- innerHitsContext.addInnerHitDefinition(innerHitBuilder.buildInline(sc, this));\n- }\n-\n public Collection<String> simpleMatchToIndexNames(String pattern) {\n return mapperService.simpleMatchToIndexNames(pattern);\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java", "status": "modified" }, { "diff": "@@ -42,12 +42,14 @@\n import org.elasticsearch.index.query.QueryParseContext;\n import 
org.elasticsearch.index.query.QueryRewriteContext;\n import org.elasticsearch.index.query.QueryShardContext;\n+import org.elasticsearch.index.query.InnerHitBuilder;\n \n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.List;\n import java.util.Locale;\n+import java.util.Map;\n import java.util.Objects;\n \n /**\n@@ -429,8 +431,15 @@ protected QueryBuilder<?> doRewrite(QueryRewriteContext queryRewriteContext) thr\n return this;\n }\n \n+\n+\n+ @Override\n+ protected void extractInnerHitBuilders(Map<String, InnerHitBuilder> innerHits) {\n+ InnerHitBuilder.extractInnerHits(query(), innerHits);\n+ }\n+\n public static FunctionScoreQueryBuilder fromXContent(ParseFieldRegistry<ScoreFunctionParser<?>> scoreFunctionsRegistry,\n- QueryParseContext parseContext) throws IOException {\n+ QueryParseContext parseContext) throws IOException {\n XContentParser parser = parseContext.parser();\n \n QueryBuilder<?> query = null;", "filename": "core/src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryBuilder.java", "status": "modified" }, { "diff": "@@ -62,7 +62,7 @@\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.index.query.QueryShardContext;\n-import org.elasticsearch.index.query.support.InnerHitBuilder;\n+import org.elasticsearch.index.query.InnerHitBuilder;\n import org.elasticsearch.index.search.stats.StatsGroupsParseElement;\n import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.IndexShard;\n@@ -88,7 +88,6 @@\n import org.elasticsearch.search.fetch.fielddata.FieldDataFieldsContext;\n import org.elasticsearch.search.fetch.fielddata.FieldDataFieldsContext.FieldDataField;\n import org.elasticsearch.search.fetch.fielddata.FieldDataFieldsFetchSubPhase;\n-import org.elasticsearch.search.fetch.innerhits.InnerHitsContext;\n import org.elasticsearch.search.fetch.script.ScriptFieldsContext.ScriptField;\n import org.elasticsearch.search.highlight.HighlightBuilder;\n import org.elasticsearch.search.internal.DefaultSearchContext;\n@@ -679,12 +678,24 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc\n context.queryBoost(indexBoost);\n }\n }\n+ Map<String, InnerHitBuilder> innerHitBuilders = new HashMap<>();\n if (source.query() != null) {\n+ InnerHitBuilder.extractInnerHits(source.query(), innerHitBuilders);\n context.parsedQuery(queryShardContext.toQuery(source.query()));\n }\n if (source.postFilter() != null) {\n+ InnerHitBuilder.extractInnerHits(source.postFilter(), innerHitBuilders);\n context.parsedPostFilter(queryShardContext.toQuery(source.postFilter()));\n }\n+ if (innerHitBuilders.size() > 0) {\n+ for (Map.Entry<String, InnerHitBuilder> entry : innerHitBuilders.entrySet()) {\n+ try {\n+ entry.getValue().build(context, context.innerHits());\n+ } catch (IOException e) {\n+ throw new SearchContextException(context, \"failed to build inner_hits\", e);\n+ }\n+ }\n+ }\n if (source.sorts() != null) {\n try {\n Optional<Sort> optionalSort = SortBuilder.buildSort(source.sorts(), context.getQueryShardContext());\n@@ -754,25 +765,6 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc\n throw new SearchContextException(context, \"failed to create SearchContextHighlighter\", e);\n }\n }\n- if (source.innerHits() != null) {\n- for (Map.Entry<String, InnerHitBuilder> entry : source.innerHits().getInnerHitsBuilders().entrySet()) {\n- try {\n- // This is 
the same logic in QueryShardContext#toQuery() where we reset also twice.\n- // Personally I think a reset at the end is sufficient, but I kept the logic consistent with this method.\n-\n- // The reason we need to invoke reset at all here is because inner hits may modify the QueryShardContext#nestedScope,\n- // so we need to reset at the end.\n- queryShardContext.reset();\n- InnerHitBuilder innerHitBuilder = entry.getValue();\n- InnerHitsContext innerHitsContext = context.innerHits();\n- innerHitBuilder.buildTopLevel(context, queryShardContext, innerHitsContext);\n- } catch (IOException e) {\n- throw new SearchContextException(context, \"failed to create InnerHitsContext\", e);\n- } finally {\n- queryShardContext.reset();\n- }\n- }\n- }\n if (source.scriptFields() != null) {\n for (org.elasticsearch.search.builder.SearchSourceBuilder.ScriptField field : source.scriptFields()) {\n SearchScript searchScript = context.scriptService().search(context.lookup(), field.script(), ScriptContext.Standard.SEARCH,", "filename": "core/src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -40,7 +40,6 @@\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.index.query.QueryShardContext;\n-import org.elasticsearch.index.query.support.InnerHitsBuilder;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.search.aggregations.AggregatorBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n@@ -93,7 +92,6 @@ public final class SearchSourceBuilder extends ToXContentToBytes implements Writ\n public static final ParseField INDICES_BOOST_FIELD = new ParseField(\"indices_boost\");\n public static final ParseField AGGREGATIONS_FIELD = new ParseField(\"aggregations\", \"aggs\");\n public static final ParseField HIGHLIGHT_FIELD = new ParseField(\"highlight\");\n- public static final ParseField INNER_HITS_FIELD = new ParseField(\"inner_hits\");\n public static final ParseField SUGGEST_FIELD = new ParseField(\"suggest\");\n public static final ParseField RESCORE_FIELD = new ParseField(\"rescore\");\n public static final ParseField STATS_FIELD = new ParseField(\"stats\");\n@@ -156,8 +154,6 @@ public static HighlightBuilder highlight() {\n \n private SuggestBuilder suggestBuilder;\n \n- private InnerHitsBuilder innerHitsBuilder;\n-\n private List<RescoreBuilder<?>> rescoreBuilders;\n \n private ObjectFloatHashMap<String> indexBoost = null;\n@@ -205,14 +201,11 @@ public SearchSourceBuilder(StreamInput in) throws IOException {\n boolean hasIndexBoost = in.readBoolean();\n if (hasIndexBoost) {\n int size = in.readVInt();\n- indexBoost = new ObjectFloatHashMap<String>(size);\n+ indexBoost = new ObjectFloatHashMap<>(size);\n for (int i = 0; i < size; i++) {\n indexBoost.put(in.readString(), in.readFloat());\n }\n }\n- if (in.readBoolean()) {\n- innerHitsBuilder = new InnerHitsBuilder(in);\n- }\n if (in.readBoolean()) {\n minScore = in.readFloat();\n }\n@@ -303,11 +296,6 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeFloat(indexBoost.get(key.value));\n }\n }\n- boolean hasInnerHitsBuilder = innerHitsBuilder != null;\n- out.writeBoolean(hasInnerHitsBuilder);\n- if (hasInnerHitsBuilder) {\n- innerHitsBuilder.writeTo(out);\n- }\n boolean hasMinScore = minScore != null;\n out.writeBoolean(hasMinScore);\n if (hasMinScore) {\n@@ -653,15 +641,6 @@ public HighlightBuilder highlighter() {\n return highlightBuilder;\n }\n \n- public 
SearchSourceBuilder innerHits(InnerHitsBuilder innerHitsBuilder) {\n- this.innerHitsBuilder = innerHitsBuilder;\n- return this;\n- }\n-\n- public InnerHitsBuilder innerHits() {\n- return innerHitsBuilder;\n- }\n-\n public SearchSourceBuilder suggest(SuggestBuilder suggestBuilder) {\n this.suggestBuilder = suggestBuilder;\n return this;\n@@ -957,7 +936,6 @@ private SearchSourceBuilder shallowCopy(QueryBuilder<?> queryBuilder, QueryBuild\n rewrittenBuilder.from = from;\n rewrittenBuilder.highlightBuilder = highlightBuilder;\n rewrittenBuilder.indexBoost = indexBoost;\n- rewrittenBuilder.innerHitsBuilder = innerHitsBuilder;\n rewrittenBuilder.minScore = minScore;\n rewrittenBuilder.postQueryBuilder = postQueryBuilder;\n rewrittenBuilder.profile = profile;\n@@ -1051,8 +1029,6 @@ public void parseXContent(QueryParseContext context, AggregatorParsers aggParser\n aggregations = aggParsers.parseAggregators(context);\n } else if (context.getParseFieldMatcher().match(currentFieldName, HIGHLIGHT_FIELD)) {\n highlightBuilder = HighlightBuilder.fromXContent(context);\n- } else if (context.getParseFieldMatcher().match(currentFieldName, INNER_HITS_FIELD)) {\n- innerHitsBuilder = InnerHitsBuilder.fromXContent(context);\n } else if (context.getParseFieldMatcher().match(currentFieldName, SUGGEST_FIELD)) {\n suggestBuilder = SuggestBuilder.fromXContent(context, suggesters);\n } else if (context.getParseFieldMatcher().match(currentFieldName, SORT_FIELD)) {\n@@ -1235,10 +1211,6 @@ public void innerToXContent(XContentBuilder builder, Params params) throws IOExc\n builder.field(HIGHLIGHT_FIELD.getPreferredName(), highlightBuilder);\n }\n \n- if (innerHitsBuilder != null) {\n- builder.field(INNER_HITS_FIELD.getPreferredName(), innerHitsBuilder, params);\n- }\n-\n if (suggestBuilder != null) {\n builder.field(SUGGEST_FIELD.getPreferredName(), suggestBuilder);\n }\n@@ -1379,7 +1351,7 @@ public boolean equals(Object obj) {\n @Override\n public int hashCode() {\n return Objects.hash(aggregations, explain, fetchSourceContext, fieldDataFields, fieldNames, from,\n- highlightBuilder, indexBoost, innerHitsBuilder, minScore, postQueryBuilder, queryBuilder, rescoreBuilders, scriptFields,\n+ highlightBuilder, indexBoost, minScore, postQueryBuilder, queryBuilder, rescoreBuilders, scriptFields,\n size, sorts, searchAfterBuilder, stats, suggestBuilder, terminateAfter, timeoutInMillis, trackScores, version, profile);\n }\n \n@@ -1400,7 +1372,6 @@ public boolean equals(Object obj) {\n && Objects.equals(from, other.from)\n && Objects.equals(highlightBuilder, other.highlightBuilder)\n && Objects.equals(indexBoost, other.indexBoost)\n- && Objects.equals(innerHitsBuilder, other.innerHitsBuilder)\n && Objects.equals(minScore, other.minScore)\n && Objects.equals(postQueryBuilder, other.postQueryBuilder)\n && Objects.equals(queryBuilder, other.queryBuilder)", "filename": "core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java", "status": "modified" }, { "diff": "@@ -42,7 +42,6 @@\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.index.mapper.internal.TypeFieldMapper;\n import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n-import org.elasticsearch.index.query.support.InnerHitBuilder;\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.script.Script.ScriptParseException;\n import org.elasticsearch.search.fetch.innerhits.InnerHitsContext;\n@@ -53,6 +52,8 @@\n \n import java.io.IOException;\n import java.util.Collections;\n+import 
java.util.HashMap;\n+import java.util.Map;\n \n import static org.hamcrest.CoreMatchers.containsString;\n import static org.hamcrest.CoreMatchers.equalTo;\n@@ -125,18 +126,24 @@ protected void doAssertLuceneQuery(HasChildQueryBuilder queryBuilder, Query quer\n assertEquals(queryBuilder.scoreMode(), lpq.getScoreMode()); // WTF is this why do we have two?\n }\n if (queryBuilder.innerHit() != null) {\n- assertNotNull(SearchContext.current());\n+ SearchContext searchContext = SearchContext.current();\n+ assertNotNull(searchContext);\n if (query != null) {\n- assertNotNull(SearchContext.current().innerHits());\n- assertEquals(1, SearchContext.current().innerHits().getInnerHits().size());\n- assertTrue(SearchContext.current().innerHits().getInnerHits().containsKey(queryBuilder.innerHit().getName()));\n+ Map<String, InnerHitBuilder> innerHitBuilders = new HashMap<>();\n+ InnerHitBuilder.extractInnerHits(queryBuilder, innerHitBuilders);\n+ for (InnerHitBuilder builder : innerHitBuilders.values()) {\n+ builder.build(searchContext, searchContext.innerHits());\n+ }\n+ assertNotNull(searchContext.innerHits());\n+ assertEquals(1, searchContext.innerHits().getInnerHits().size());\n+ assertTrue(searchContext.innerHits().getInnerHits().containsKey(queryBuilder.innerHit().getName()));\n InnerHitsContext.BaseInnerHits innerHits =\n- SearchContext.current().innerHits().getInnerHits().get(queryBuilder.innerHit().getName());\n+ searchContext.innerHits().getInnerHits().get(queryBuilder.innerHit().getName());\n assertEquals(innerHits.size(), queryBuilder.innerHit().getSize());\n assertEquals(innerHits.sort().getSort().length, 1);\n assertEquals(innerHits.sort().getSort()[0].getField(), STRING_FIELD_NAME_2);\n } else {\n- assertThat(SearchContext.current().innerHits().getInnerHits().size(), equalTo(0));\n+ assertThat(searchContext.innerHits().getInnerHits().size(), equalTo(0));\n }\n }\n }\n@@ -188,7 +195,6 @@ public void testFromJson() throws IOException {\n \" \\\"boost\\\" : 2.0,\\n\" +\n \" \\\"_name\\\" : \\\"WNzYMJKRwePuRBh\\\",\\n\" +\n \" \\\"inner_hits\\\" : {\\n\" +\n- \" \\\"type\\\" : \\\"child\\\",\\n\" +\n \" \\\"name\\\" : \\\"inner_hits_name\\\",\\n\" +\n \" \\\"from\\\" : 0,\\n\" +\n \" \\\"size\\\" : 100,\\n\" +\n@@ -199,18 +205,7 @@ public void testFromJson() throws IOException {\n \" \\\"mapped_string\\\" : {\\n\" +\n \" \\\"order\\\" : \\\"asc\\\"\\n\" +\n \" }\\n\" +\n- \" } ],\\n\" +\n- \" \\\"query\\\" : {\\n\" +\n- \" \\\"range\\\" : {\\n\" +\n- \" \\\"mapped_string\\\" : {\\n\" +\n- \" \\\"from\\\" : \\\"agJhRET\\\",\\n\" +\n- \" \\\"to\\\" : \\\"zvqIq\\\",\\n\" +\n- \" \\\"include_lower\\\" : true,\\n\" +\n- \" \\\"include_upper\\\" : true,\\n\" +\n- \" \\\"boost\\\" : 1.0\\n\" +\n- \" }\\n\" +\n- \" }\\n\" +\n- \" }\\n\" +\n+ \" } ]\\n\" +\n \" }\\n\" +\n \" }\\n\" +\n \"}\";\n@@ -223,11 +218,11 @@ public void testFromJson() throws IOException {\n assertEquals(query, queryBuilder.childType(), \"child\");\n assertEquals(query, queryBuilder.scoreMode(), ScoreMode.Avg);\n assertNotNull(query, queryBuilder.innerHit());\n- assertEquals(query, queryBuilder.innerHit(), new InnerHitBuilder().setParentChildType(\"child\")\n+ InnerHitBuilder expected = new InnerHitBuilder(new InnerHitBuilder(), queryBuilder.query(), \"child\")\n .setName(\"inner_hits_name\")\n .setSize(100)\n- .addSort(new FieldSortBuilder(\"mapped_string\").order(SortOrder.ASC))\n- .setQuery(queryBuilder.query()));\n+ .addSort(new FieldSortBuilder(\"mapped_string\").order(SortOrder.ASC));\n+ assertEquals(query, 
queryBuilder.innerHit(), expected);\n \n }\n public void testToQueryInnerQueryType() throws IOException {", "filename": "core/src/test/java/org/elasticsearch/index/query/HasChildQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.index.query;\n \n-import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n import com.fasterxml.jackson.core.JsonParseException;\n \n import org.apache.lucene.search.MatchNoDocsQuery;\n@@ -34,7 +33,6 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.MapperService;\n-import org.elasticsearch.index.query.support.InnerHitBuilder;\n import org.elasticsearch.script.Script.ScriptParseException;\n import org.elasticsearch.search.fetch.innerhits.InnerHitsContext;\n import org.elasticsearch.search.internal.SearchContext;\n@@ -43,7 +41,8 @@\n import org.junit.BeforeClass;\n \n import java.io.IOException;\n-import java.util.Arrays;\n+import java.util.HashMap;\n+import java.util.Map;\n \n import static org.hamcrest.CoreMatchers.containsString;\n import static org.hamcrest.CoreMatchers.equalTo;\n@@ -108,18 +107,24 @@ protected void doAssertLuceneQuery(HasParentQueryBuilder queryBuilder, Query que\n assertEquals(queryBuilder.score() ? ScoreMode.Max : ScoreMode.None, lpq.getScoreMode());\n }\n if (queryBuilder.innerHit() != null) {\n- assertNotNull(SearchContext.current());\n+ SearchContext searchContext = SearchContext.current();\n+ assertNotNull(searchContext);\n if (query != null) {\n- assertNotNull(SearchContext.current().innerHits());\n- assertEquals(1, SearchContext.current().innerHits().getInnerHits().size());\n- assertTrue(SearchContext.current().innerHits().getInnerHits().containsKey(queryBuilder.innerHit().getName()));\n- InnerHitsContext.BaseInnerHits innerHits = SearchContext.current().innerHits()\n+ Map<String, InnerHitBuilder> innerHitBuilders = new HashMap<>();\n+ InnerHitBuilder.extractInnerHits(queryBuilder, innerHitBuilders);\n+ for (InnerHitBuilder builder : innerHitBuilders.values()) {\n+ builder.build(searchContext, searchContext.innerHits());\n+ }\n+ assertNotNull(searchContext.innerHits());\n+ assertEquals(1, searchContext.innerHits().getInnerHits().size());\n+ assertTrue(searchContext.innerHits().getInnerHits().containsKey(queryBuilder.innerHit().getName()));\n+ InnerHitsContext.BaseInnerHits innerHits = searchContext.innerHits()\n .getInnerHits().get(queryBuilder.innerHit().getName());\n assertEquals(innerHits.size(), queryBuilder.innerHit().getSize());\n assertEquals(innerHits.sort().getSort().length, 1);\n assertEquals(innerHits.sort().getSort()[0].getField(), STRING_FIELD_NAME_2);\n } else {\n- assertThat(SearchContext.current().innerHits().getInnerHits().size(), equalTo(0));\n+ assertThat(searchContext.innerHits().getInnerHits().size(), equalTo(0));\n }\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/query/HasParentQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -21,20 +21,25 @@\n \n import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n \n+import com.fasterxml.jackson.core.JsonParseException;\n import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.join.ScoreMode;\n import org.apache.lucene.search.join.ToParentBlockJoinQuery;\n+import org.elasticsearch.ElasticsearchParseException;\n import 
org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest;\n+import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.index.mapper.MapperService;\n-import org.elasticsearch.index.query.support.InnerHitBuilder;\n+import org.elasticsearch.script.Script;\n import org.elasticsearch.search.fetch.innerhits.InnerHitsContext;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.sort.FieldSortBuilder;\n import org.elasticsearch.search.sort.SortOrder;\n \n import java.io.IOException;\n+import java.util.HashMap;\n+import java.util.Map;\n \n import static org.hamcrest.CoreMatchers.containsString;\n import static org.hamcrest.CoreMatchers.equalTo;\n@@ -66,11 +71,11 @@ public void setUp() throws Exception {\n protected NestedQueryBuilder doCreateTestQueryBuilder() {\n NestedQueryBuilder nqb = new NestedQueryBuilder(\"nested1\", RandomQueryBuilder.createQuery(random()),\n RandomPicks.randomFrom(random(), ScoreMode.values()));\n- if (SearchContext.current() != null) {\n+ if (randomBoolean()) {\n nqb.innerHit(new InnerHitBuilder()\n .setName(randomAsciiOfLengthBetween(1, 10))\n .setSize(randomIntBetween(0, 100))\n- .addSort(new FieldSortBuilder(STRING_FIELD_NAME).order(SortOrder.ASC)));\n+ .addSort(new FieldSortBuilder(INT_FIELD_NAME).order(SortOrder.ASC)));\n }\n nqb.ignoreUnmapped(randomBoolean());\n return nqb;\n@@ -87,17 +92,23 @@ protected void doAssertLuceneQuery(NestedQueryBuilder queryBuilder, Query query,\n //TODO how to assert this?\n }\n if (queryBuilder.innerHit() != null) {\n- assertNotNull(SearchContext.current());\n+ SearchContext searchContext = SearchContext.current();\n+ assertNotNull(searchContext);\n if (query != null) {\n- assertNotNull(SearchContext.current().innerHits());\n- assertEquals(1, SearchContext.current().innerHits().getInnerHits().size());\n- assertTrue(SearchContext.current().innerHits().getInnerHits().containsKey(\"inner_hits_name\"));\n- InnerHitsContext.BaseInnerHits innerHits = SearchContext.current().innerHits().getInnerHits().get(\"inner_hits_name\");\n- assertEquals(innerHits.size(), 100);\n+ Map<String, InnerHitBuilder> innerHitBuilders = new HashMap<>();\n+ InnerHitBuilder.extractInnerHits(queryBuilder, innerHitBuilders);\n+ for (InnerHitBuilder builder : innerHitBuilders.values()) {\n+ builder.build(searchContext, searchContext.innerHits());\n+ }\n+ assertNotNull(searchContext.innerHits());\n+ assertEquals(1, searchContext.innerHits().getInnerHits().size());\n+ assertTrue(searchContext.innerHits().getInnerHits().containsKey(queryBuilder.innerHit().getName()));\n+ InnerHitsContext.BaseInnerHits innerHits = searchContext.innerHits().getInnerHits().get(queryBuilder.innerHit().getName());\n+ assertEquals(innerHits.size(), queryBuilder.innerHit().getSize());\n assertEquals(innerHits.sort().getSort().length, 1);\n- assertEquals(innerHits.sort().getSort()[0].getField(), STRING_FIELD_NAME);\n+ assertEquals(innerHits.sort().getSort()[0].getField(), INT_FIELD_NAME);\n } else {\n- assertThat(SearchContext.current().innerHits().getInnerHits().size(), equalTo(0));\n+ assertThat(searchContext.innerHits().getInnerHits().size(), equalTo(0));\n }\n }\n }\n@@ -163,6 +174,36 @@ public void testFromJson() throws IOException {\n assertEquals(json, ScoreMode.Avg, parsed.scoreMode());\n }\n \n+ /**\n+ * override superclass test, because here we need to take care that mutation doesn't happen inside\n+ * `inner_hits` structure, because we don't parse them 
yet and so no exception will be triggered\n+ * for any mutation there.\n+ */\n+ @Override\n+ public void testUnknownObjectException() throws IOException {\n+ String validQuery = createTestQueryBuilder().toString();\n+ assertThat(validQuery, containsString(\"{\"));\n+ int endPosition = validQuery.indexOf(\"inner_hits\");\n+ if (endPosition == -1) {\n+ endPosition = validQuery.length() - 1;\n+ }\n+ for (int insertionPosition = 0; insertionPosition < endPosition; insertionPosition++) {\n+ if (validQuery.charAt(insertionPosition) == '{') {\n+ String testQuery = validQuery.substring(0, insertionPosition) + \"{ \\\"newField\\\" : \" +\n+ validQuery.substring(insertionPosition) + \"}\";\n+ try {\n+ parseQuery(testQuery);\n+ fail(\"some parsing exception expected for query: \" + testQuery);\n+ } catch (ParsingException | Script.ScriptParseException | ElasticsearchParseException e) {\n+ // different kinds of exception wordings depending on location\n+ // of mutation, so no simple asserts possible here\n+ } catch (JsonParseException e) {\n+ // mutation produced invalid json\n+ }\n+ }\n+ }\n+ }\n+\n public void testIgnoreUnmapped() throws IOException {\n final NestedQueryBuilder queryBuilder = new NestedQueryBuilder(\"unmapped\", new MatchAllQueryBuilder(), ScoreMode.None);\n queryBuilder.ignoreUnmapped(true);", "filename": "core/src/test/java/org/elasticsearch/index/query/NestedQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -43,7 +43,7 @@\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.index.query.QueryShardException;\n import org.elasticsearch.index.query.functionscore.WeightBuilder;\n-import org.elasticsearch.index.query.support.InnerHitBuilder;\n+import org.elasticsearch.index.query.InnerHitBuilder;\n import org.elasticsearch.search.highlight.HighlightBuilder;\n import org.elasticsearch.test.ESIntegTestCase;\n \n@@ -1827,35 +1827,6 @@ public void testGeoShapeWithMapUnmappedFieldAsString() throws Exception {\n assertThat(response1.getMatches()[0].getId().string(), equalTo(\"1\"));\n }\n \n- public void testFailNicelyWithInnerHits() throws Exception {\n- XContentBuilder mapping = XContentFactory.jsonBuilder().startObject()\n- .startObject(\"mapping\")\n- .startObject(\"properties\")\n- .startObject(\"nested\")\n- .field(\"type\", \"nested\")\n- .startObject(\"properties\")\n- .startObject(\"name\")\n- .field(\"type\", \"text\")\n- .endObject()\n- .endObject()\n- .endObject()\n- .endObject()\n- .endObject();\n-\n- assertAcked(prepareCreate(INDEX_NAME)\n- .addMapping(TYPE_NAME, \"query\", \"type=percolator\")\n- .addMapping(\"mapping\", mapping));\n- try {\n- client().prepareIndex(INDEX_NAME, TYPE_NAME, \"1\")\n- .setSource(jsonBuilder().startObject().field(\"query\", nestedQuery(\"nested\", matchQuery(\"nested.name\", \"value\"), ScoreMode.Avg).innerHit(new InnerHitBuilder())).endObject())\n- .execute().actionGet();\n- fail(\"Expected a parse error, because inner_hits isn't supported in the percolate api\");\n- } catch (Exception e) {\n- assertThat(e.getCause(), instanceOf(QueryShardException.class));\n- assertThat(e.getCause().getMessage(), containsString(\"inner_hits unsupported\"));\n- }\n- }\n-\n public void testParentChild() throws Exception {\n // We don't fail p/c queries, but those queries are unusable because only a single document can be provided in\n // the percolate api", "filename": "core/src/test/java/org/elasticsearch/percolator/PercolatorIT.java", "status": "modified" }, { "diff": "@@ -52,8 +52,6 @@\n import 
org.elasticsearch.index.query.EmptyQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.index.query.QueryParseContext;\n-import org.elasticsearch.index.query.support.InnerHitBuilderTests;\n-import org.elasticsearch.index.query.support.InnerHitsBuilder;\n import org.elasticsearch.indices.IndicesModule;\n import org.elasticsearch.indices.breaker.CircuitBreakerService;\n import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n@@ -410,14 +408,6 @@ protected final SearchSourceBuilder createSearchSourceBuilder() throws IOExcepti\n if (randomBoolean()) {\n builder.suggest(SuggestBuilderTests.randomSuggestBuilder());\n }\n- if (randomBoolean()) {\n- InnerHitsBuilder innerHitsBuilder = new InnerHitsBuilder();\n- int num = randomIntBetween(0, 3);\n- for (int i = 0; i < num; i++) {\n- innerHitsBuilder.addInnerHit(randomAsciiOfLengthBetween(5, 20), InnerHitBuilderTests.randomInnerHits());\n- }\n- builder.innerHits(innerHitsBuilder);\n- }\n if (randomBoolean()) {\n int numRescores = randomIntBetween(1, 5);\n for (int i = 0; i < numRescores; i++) {", "filename": "core/src/test/java/org/elasticsearch/search/builder/SearchSourceBuilderTests.java", "status": "modified" }, { "diff": "@@ -22,14 +22,11 @@\n import org.apache.lucene.search.join.ScoreMode;\n import org.apache.lucene.util.ArrayUtil;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n-import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.query.BoolQueryBuilder;\n-import org.elasticsearch.index.query.MatchAllQueryBuilder;\n-import org.elasticsearch.index.query.support.InnerHitBuilder;\n-import org.elasticsearch.index.query.support.InnerHitsBuilder;\n+import org.elasticsearch.index.query.InnerHitBuilder;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.script.MockScriptEngine;\n import org.elasticsearch.script.Script;\n@@ -68,8 +65,6 @@\n import static org.hamcrest.Matchers.notNullValue;\n import static org.hamcrest.Matchers.nullValue;\n \n-/**\n- */\n public class InnerHitsIT extends ESIntegTestCase {\n @Override\n protected Collection<Class<? 
extends Plugin>> nodePlugins() {\n@@ -112,105 +107,62 @@ public void testSimpleNested() throws Exception {\n .endObject()));\n indexRandom(true, requests);\n \n- InnerHitsBuilder innerHitsBuilder = new InnerHitsBuilder();\n- innerHitsBuilder.addInnerHit(\"comment\", new InnerHitBuilder()\n- .setNestedPath(\"comments\")\n- .setQuery(matchQuery(\"comments.message\", \"fox\"))\n- );\n- // Inner hits can be defined in two ways: 1) with the query 2) as separate inner_hit definition\n- SearchRequest[] searchRequests = new SearchRequest[]{\n- client().prepareSearch(\"articles\").setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\"), ScoreMode.Avg).innerHit(\n- new InnerHitBuilder().setName(\"comment\"))).request(),\n- client().prepareSearch(\"articles\").setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\"), ScoreMode.Avg))\n- .innerHits(innerHitsBuilder).request()\n- };\n- for (SearchRequest searchRequest : searchRequests) {\n- SearchResponse response = client().search(searchRequest).actionGet();\n- assertNoFailures(response);\n- assertHitCount(response, 1);\n- assertSearchHit(response, 1, hasId(\"1\"));\n- assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n- SearchHits innerHits = response.getHits().getAt(0).getInnerHits().get(\"comment\");\n- assertThat(innerHits.totalHits(), equalTo(2L));\n- assertThat(innerHits.getHits().length, equalTo(2));\n- assertThat(innerHits.getAt(0).getId(), equalTo(\"1\"));\n- assertThat(innerHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n- assertThat(innerHits.getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n- assertThat(innerHits.getAt(1).getId(), equalTo(\"1\"));\n- assertThat(innerHits.getAt(1).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n- assertThat(innerHits.getAt(1).getNestedIdentity().getOffset(), equalTo(1));\n- }\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\"), ScoreMode.Avg)\n+ .innerHit(new InnerHitBuilder().setName(\"comment\"))\n+ ).get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertSearchHit(response, 1, hasId(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n+ SearchHits innerHits = response.getHits().getAt(0).getInnerHits().get(\"comment\");\n+ assertThat(innerHits.totalHits(), equalTo(2L));\n+ assertThat(innerHits.getHits().length, equalTo(2));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"1\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(innerHits.getAt(1).getId(), equalTo(\"1\"));\n+ assertThat(innerHits.getAt(1).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(innerHits.getAt(1).getNestedIdentity().getOffset(), equalTo(1));\n \n- innerHitsBuilder = new InnerHitsBuilder();\n- innerHitsBuilder.addInnerHit(\"comment\", new InnerHitBuilder()\n- .setQuery(matchQuery(\"comments.message\", \"elephant\")).setNestedPath(\"comments\")\n- );\n- // Inner hits can be defined in two ways: 1) with the query 2) as\n- // separate inner_hit definition\n- searchRequests = new SearchRequest[] {\n- client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\"), ScoreMode.Avg))\n- .innerHits(innerHitsBuilder).request(),\n- 
client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\"), ScoreMode.Avg).innerHit(new InnerHitBuilder().setName(\"comment\"))).request(),\n- client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\"), ScoreMode.Avg).innerHit(new InnerHitBuilder().setName(\"comment\").addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC)))).request()\n- };\n- for (SearchRequest searchRequest : searchRequests) {\n- SearchResponse response = client().search(searchRequest).actionGet();\n- assertNoFailures(response);\n- assertHitCount(response, 1);\n- assertSearchHit(response, 1, hasId(\"2\"));\n- assertThat(response.getHits().getAt(0).getShard(), notNullValue());\n- assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n- SearchHits innerHits = response.getHits().getAt(0).getInnerHits().get(\"comment\");\n- assertThat(innerHits.totalHits(), equalTo(3L));\n- assertThat(innerHits.getHits().length, equalTo(3));\n- assertThat(innerHits.getAt(0).getId(), equalTo(\"2\"));\n- assertThat(innerHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n- assertThat(innerHits.getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n- assertThat(innerHits.getAt(1).getId(), equalTo(\"2\"));\n- assertThat(innerHits.getAt(1).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n- assertThat(innerHits.getAt(1).getNestedIdentity().getOffset(), equalTo(1));\n- assertThat(innerHits.getAt(2).getId(), equalTo(\"2\"));\n- assertThat(innerHits.getAt(2).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n- assertThat(innerHits.getAt(2).getNestedIdentity().getOffset(), equalTo(2));\n- }\n- InnerHitBuilder innerHit = new InnerHitBuilder();\n- innerHit.setNestedPath(\"comments\");\n- innerHit.setQuery(matchQuery(\"comments.message\", \"fox\"));\n- innerHit.setHighlightBuilder(new HighlightBuilder().field(\"comments.message\"));\n- innerHit.setExplain(true);\n- innerHit.addFieldDataField(\"comments.message\");\n- innerHit.addScriptField(\"script\", new Script(\"5\", ScriptService.ScriptType.INLINE, MockScriptEngine.NAME, Collections.emptyMap()));\n- innerHit.setSize(1);\n- innerHitsBuilder = new InnerHitsBuilder();\n- innerHitsBuilder.addInnerHit(\"comments\", innerHit);\n- searchRequests = new SearchRequest[] {\n- client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\"), ScoreMode.Avg))\n- .innerHits(innerHitsBuilder).request(),\n- client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\"), ScoreMode.Avg).innerHit(\n- new InnerHitBuilder().setHighlightBuilder(new HighlightBuilder().field(\"comments.message\"))\n- .setExplain(true)\n- .addFieldDataField(\"comments.message\")\n- .addScriptField(\"script\", new Script(\"5\", ScriptService.ScriptType.INLINE, MockScriptEngine.NAME, Collections.emptyMap()))\n- .setSize(1)\n- )).request()\n- };\n-\n- for (SearchRequest searchRequest : searchRequests) {\n- SearchResponse response = client().search(searchRequest).actionGet();\n- assertNoFailures(response);\n- SearchHits innerHits = response.getHits().getAt(0).getInnerHits().get(\"comments\");\n- assertThat(innerHits.getTotalHits(), equalTo(2L));\n- assertThat(innerHits.getHits().length, equalTo(1));\n- assertThat(innerHits.getAt(0).getHighlightFields().get(\"comments.message\").getFragments()[0].string(), equalTo(\"<em>fox</em> 
eat quick\"));\n- assertThat(innerHits.getAt(0).explanation().toString(), containsString(\"weight(comments.message:fox in\"));\n- assertThat(innerHits.getAt(0).getFields().get(\"comments.message\").getValue().toString(), equalTo(\"eat\"));\n- assertThat(innerHits.getAt(0).getFields().get(\"script\").getValue().toString(), equalTo(\"5\"));\n- }\n+ response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\"), ScoreMode.Avg)\n+ .innerHit(new InnerHitBuilder().setName(\"comment\"))\n+ ).get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertSearchHit(response, 1, hasId(\"2\"));\n+ assertThat(response.getHits().getAt(0).getShard(), notNullValue());\n+ assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n+ innerHits = response.getHits().getAt(0).getInnerHits().get(\"comment\");\n+ assertThat(innerHits.totalHits(), equalTo(3L));\n+ assertThat(innerHits.getHits().length, equalTo(3));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"2\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(innerHits.getAt(1).getId(), equalTo(\"2\"));\n+ assertThat(innerHits.getAt(1).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(innerHits.getAt(1).getNestedIdentity().getOffset(), equalTo(1));\n+ assertThat(innerHits.getAt(2).getId(), equalTo(\"2\"));\n+ assertThat(innerHits.getAt(2).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(innerHits.getAt(2).getNestedIdentity().getOffset(), equalTo(2));\n+\n+ response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\"), ScoreMode.Avg).innerHit(\n+ new InnerHitBuilder().setHighlightBuilder(new HighlightBuilder().field(\"comments.message\"))\n+ .setExplain(true)\n+ .addFieldDataField(\"comments.message\")\n+ .addScriptField(\"script\", new Script(\"5\", ScriptService.ScriptType.INLINE, MockScriptEngine.NAME, Collections.emptyMap()))\n+ .setSize(1)\n+ )).get();\n+ assertNoFailures(response);\n+ innerHits = response.getHits().getAt(0).getInnerHits().get(\"comments\");\n+ assertThat(innerHits.getTotalHits(), equalTo(2L));\n+ assertThat(innerHits.getHits().length, equalTo(1));\n+ assertThat(innerHits.getAt(0).getHighlightFields().get(\"comments.message\").getFragments()[0].string(), equalTo(\"<em>fox</em> eat quick\"));\n+ assertThat(innerHits.getAt(0).explanation().toString(), containsString(\"weight(comments.message:fox in\"));\n+ assertThat(innerHits.getAt(0).getFields().get(\"comments.message\").getValue().toString(), equalTo(\"eat\"));\n+ assertThat(innerHits.getAt(0).getFields().get(\"script\").getValue().toString(), equalTo(\"5\"));\n }\n \n public void testRandomNested() throws Exception {\n@@ -237,38 +189,16 @@ public void testRandomNested() throws Exception {\n indexRandom(true, requestBuilders);\n \n int size = randomIntBetween(0, numDocs);\n- SearchResponse searchResponse;\n- if (randomBoolean()) {\n- InnerHitsBuilder innerHitsBuilder = new InnerHitsBuilder();\n- innerHitsBuilder.addInnerHit(\"a\", new InnerHitBuilder().setNestedPath(\"field1\")\n- // Sort order is DESC, because we reverse the inner objects during indexing!\n- .addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC)).setSize(size));\n- innerHitsBuilder.addInnerHit(\"b\", new 
InnerHitBuilder().setNestedPath(\"field2\")\n- .addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC)).setSize(size));\n- searchResponse = client().prepareSearch(\"idx\")\n- .setSize(numDocs)\n- .addSort(\"_uid\", SortOrder.ASC)\n- .innerHits(innerHitsBuilder)\n- .get();\n- } else {\n- BoolQueryBuilder boolQuery = new BoolQueryBuilder();\n- if (randomBoolean()) {\n- boolQuery.should(nestedQuery(\"field1\", matchAllQuery(), ScoreMode.Avg).innerHit(new InnerHitBuilder().setName(\"a\").setSize(size)\n- .addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC))));\n- boolQuery.should(nestedQuery(\"field2\", matchAllQuery(), ScoreMode.Avg).innerHit(new InnerHitBuilder().setName(\"b\")\n- .addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC)).setSize(size)));\n- } else {\n- boolQuery.should(constantScoreQuery(nestedQuery(\"field1\", matchAllQuery(), ScoreMode.Avg).innerHit(new InnerHitBuilder().setName(\"a\")\n- .setSize(size).addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC)))));\n- boolQuery.should(constantScoreQuery(nestedQuery(\"field2\", matchAllQuery(), ScoreMode.Avg).innerHit(new InnerHitBuilder().setName(\"b\")\n- .setSize(size).addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC)))));\n- }\n- searchResponse = client().prepareSearch(\"idx\")\n- .setQuery(boolQuery)\n- .setSize(numDocs)\n- .addSort(\"_uid\", SortOrder.ASC)\n- .get();\n- }\n+ BoolQueryBuilder boolQuery = new BoolQueryBuilder();\n+ boolQuery.should(nestedQuery(\"field1\", matchAllQuery(), ScoreMode.Avg).innerHit(new InnerHitBuilder().setName(\"a\").setSize(size)\n+ .addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC))));\n+ boolQuery.should(nestedQuery(\"field2\", matchAllQuery(), ScoreMode.Avg).innerHit(new InnerHitBuilder().setName(\"b\")\n+ .addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC)).setSize(size)));\n+ SearchResponse searchResponse = client().prepareSearch(\"idx\")\n+ .setQuery(boolQuery)\n+ .setSize(numDocs)\n+ .addSort(\"_uid\", SortOrder.ASC)\n+ .get();\n \n assertNoFailures(searchResponse);\n assertHitCount(searchResponse, numDocs);\n@@ -313,102 +243,59 @@ public void testSimpleParentChild() throws Exception {\n requests.add(client().prepareIndex(\"articles\", \"comment\", \"6\").setParent(\"2\").setSource(\"message\", \"elephant scared by mice x y\"));\n indexRandom(true, requests);\n \n- InnerHitsBuilder innerHitsBuilder = new InnerHitsBuilder();\n- innerHitsBuilder.addInnerHit(\"comment\", new InnerHitBuilder().setParentChildType(\"comment\")\n- .setQuery(matchQuery(\"message\", \"fox\")));\n- SearchRequest[] searchRequests = new SearchRequest[]{\n- client().prepareSearch(\"articles\")\n- .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"fox\"), ScoreMode.None))\n- .innerHits(innerHitsBuilder)\n- .request(),\n- client().prepareSearch(\"articles\")\n- .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"fox\"), ScoreMode.None).innerHit(new InnerHitBuilder().setName(\"comment\")))\n- .request()\n- };\n- for (SearchRequest searchRequest : searchRequests) {\n- SearchResponse response = client().search(searchRequest).actionGet();\n- assertNoFailures(response);\n- assertHitCount(response, 1);\n- assertSearchHit(response, 1, hasId(\"1\"));\n- assertThat(response.getHits().getAt(0).getShard(), notNullValue());\n-\n- assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n- SearchHits innerHits = response.getHits().getAt(0).getInnerHits().get(\"comment\");\n- assertThat(innerHits.totalHits(), equalTo(2L));\n-\n- 
assertThat(innerHits.getAt(0).getId(), equalTo(\"1\"));\n- assertThat(innerHits.getAt(0).type(), equalTo(\"comment\"));\n- assertThat(innerHits.getAt(1).getId(), equalTo(\"2\"));\n- assertThat(innerHits.getAt(1).type(), equalTo(\"comment\"));\n- }\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"fox\"), ScoreMode.None).innerHit(new InnerHitBuilder()))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertSearchHit(response, 1, hasId(\"1\"));\n+ assertThat(response.getHits().getAt(0).getShard(), notNullValue());\n \n- innerHitsBuilder = new InnerHitsBuilder();\n- innerHitsBuilder.addInnerHit(\"comment\", new InnerHitBuilder().setParentChildType(\"comment\")\n- .setQuery(matchQuery(\"message\", \"elephant\")));\n- searchRequests = new SearchRequest[] {\n- client().prepareSearch(\"articles\")\n- .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"elephant\"), ScoreMode.None))\n- .innerHits(innerHitsBuilder)\n- .request(),\n- client().prepareSearch(\"articles\")\n- .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"elephant\"), ScoreMode.None).innerHit(new InnerHitBuilder()))\n- .request()\n- };\n- for (SearchRequest searchRequest : searchRequests) {\n- SearchResponse response = client().search(searchRequest).actionGet();\n- assertNoFailures(response);\n- assertHitCount(response, 1);\n- assertSearchHit(response, 1, hasId(\"2\"));\n-\n- assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n- SearchHits innerHits = response.getHits().getAt(0).getInnerHits().get(\"comment\");\n- assertThat(innerHits.totalHits(), equalTo(3L));\n-\n- assertThat(innerHits.getAt(0).getId(), equalTo(\"4\"));\n- assertThat(innerHits.getAt(0).type(), equalTo(\"comment\"));\n- assertThat(innerHits.getAt(1).getId(), equalTo(\"5\"));\n- assertThat(innerHits.getAt(1).type(), equalTo(\"comment\"));\n- assertThat(innerHits.getAt(2).getId(), equalTo(\"6\"));\n- assertThat(innerHits.getAt(2).type(), equalTo(\"comment\"));\n- }\n- InnerHitBuilder innerHit = new InnerHitBuilder();\n- innerHit.setQuery(matchQuery(\"message\", \"fox\"));\n- innerHit.setParentChildType(\"comment\");\n- innerHit.setHighlightBuilder(new HighlightBuilder().field(\"message\"));\n- innerHit.setExplain(true);\n- innerHit.addFieldDataField(\"message\");\n- innerHit.addScriptField(\"script\", new Script(\"5\", ScriptService.ScriptType.INLINE, MockScriptEngine.NAME, Collections.emptyMap()));\n- innerHit.setSize(1);\n- innerHitsBuilder = new InnerHitsBuilder();\n- innerHitsBuilder.addInnerHit(\"comment\", innerHit);\n- searchRequests = new SearchRequest[] {\n- client().prepareSearch(\"articles\")\n- .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"fox\"), ScoreMode.None))\n- .innerHits(innerHitsBuilder)\n- .request(),\n-\n- client().prepareSearch(\"articles\")\n- .setQuery(\n- hasChildQuery(\"comment\", matchQuery(\"message\", \"fox\"), ScoreMode.None).innerHit(\n- new InnerHitBuilder()\n- .addFieldDataField(\"message\")\n- .setHighlightBuilder(new HighlightBuilder().field(\"message\"))\n- .setExplain(true).setSize(1)\n- .addScriptField(\"script\", new Script(\"5\", ScriptService.ScriptType.INLINE,\n- MockScriptEngine.NAME, Collections.emptyMap()))\n- )\n- ).request() };\n-\n- for (SearchRequest searchRequest : searchRequests) {\n- SearchResponse response = client().search(searchRequest).actionGet();\n- assertNoFailures(response);\n- SearchHits innerHits = 
response.getHits().getAt(0).getInnerHits().get(\"comment\");\n- assertThat(innerHits.getHits().length, equalTo(1));\n- assertThat(innerHits.getAt(0).getHighlightFields().get(\"message\").getFragments()[0].string(), equalTo(\"<em>fox</em> eat quick\"));\n- assertThat(innerHits.getAt(0).explanation().toString(), containsString(\"weight(message:fox\"));\n- assertThat(innerHits.getAt(0).getFields().get(\"message\").getValue().toString(), equalTo(\"eat\"));\n- assertThat(innerHits.getAt(0).getFields().get(\"script\").getValue().toString(), equalTo(\"5\"));\n- }\n+ assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n+ SearchHits innerHits = response.getHits().getAt(0).getInnerHits().get(\"comment\");\n+ assertThat(innerHits.totalHits(), equalTo(2L));\n+\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"1\"));\n+ assertThat(innerHits.getAt(0).type(), equalTo(\"comment\"));\n+ assertThat(innerHits.getAt(1).getId(), equalTo(\"2\"));\n+ assertThat(innerHits.getAt(1).type(), equalTo(\"comment\"));\n+\n+ response = client().prepareSearch(\"articles\")\n+ .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"elephant\"), ScoreMode.None).innerHit(new InnerHitBuilder()))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertSearchHit(response, 1, hasId(\"2\"));\n+\n+ assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n+ innerHits = response.getHits().getAt(0).getInnerHits().get(\"comment\");\n+ assertThat(innerHits.totalHits(), equalTo(3L));\n+\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"4\"));\n+ assertThat(innerHits.getAt(0).type(), equalTo(\"comment\"));\n+ assertThat(innerHits.getAt(1).getId(), equalTo(\"5\"));\n+ assertThat(innerHits.getAt(1).type(), equalTo(\"comment\"));\n+ assertThat(innerHits.getAt(2).getId(), equalTo(\"6\"));\n+ assertThat(innerHits.getAt(2).type(), equalTo(\"comment\"));\n+\n+ response = client().prepareSearch(\"articles\")\n+ .setQuery(\n+ hasChildQuery(\"comment\", matchQuery(\"message\", \"fox\"), ScoreMode.None).innerHit(\n+ new InnerHitBuilder()\n+ .addFieldDataField(\"message\")\n+ .setHighlightBuilder(new HighlightBuilder().field(\"message\"))\n+ .setExplain(true).setSize(1)\n+ .addScriptField(\"script\", new Script(\"5\", ScriptService.ScriptType.INLINE,\n+ MockScriptEngine.NAME, Collections.emptyMap()))\n+ )\n+ ).get();\n+ assertNoFailures(response);\n+ innerHits = response.getHits().getAt(0).getInnerHits().get(\"comment\");\n+ assertThat(innerHits.getHits().length, equalTo(1));\n+ assertThat(innerHits.getAt(0).getHighlightFields().get(\"message\").getFragments()[0].string(), equalTo(\"<em>fox</em> eat quick\"));\n+ assertThat(innerHits.getAt(0).explanation().toString(), containsString(\"weight(message:fox\"));\n+ assertThat(innerHits.getAt(0).getFields().get(\"message\").getValue().toString(), equalTo(\"eat\"));\n+ assertThat(innerHits.getAt(0).getFields().get(\"script\").getValue().toString(), equalTo(\"5\"));\n }\n \n public void testRandomParentChild() throws Exception {\n@@ -442,33 +329,17 @@ public void testRandomParentChild() throws Exception {\n indexRandom(true, requestBuilders);\n \n int size = randomIntBetween(0, numDocs);\n- InnerHitsBuilder innerHitsBuilder = new InnerHitsBuilder();\n- innerHitsBuilder.addInnerHit(\"a\", new InnerHitBuilder().setParentChildType(\"child1\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size));\n- innerHitsBuilder.addInnerHit(\"b\", new InnerHitBuilder().setParentChildType(\"child2\").addSort(new 
FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size));\n- SearchResponse searchResponse;\n- if (randomBoolean()) {\n- searchResponse = client().prepareSearch(\"idx\")\n- .setSize(numDocs)\n- .setTypes(\"parent\")\n- .addSort(\"_uid\", SortOrder.ASC)\n- .innerHits(innerHitsBuilder)\n- .get();\n- } else {\n- BoolQueryBuilder boolQuery = new BoolQueryBuilder();\n- if (randomBoolean()) {\n- boolQuery.should(hasChildQuery(\"child1\", matchAllQuery(), ScoreMode.None).innerHit(new InnerHitBuilder().setName(\"a\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size)));\n- boolQuery.should(hasChildQuery(\"child2\", matchAllQuery(), ScoreMode.None).innerHit(new InnerHitBuilder().setName(\"b\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size)));\n- } else {\n- boolQuery.should(constantScoreQuery(hasChildQuery(\"child1\", matchAllQuery(), ScoreMode.None).innerHit(new InnerHitBuilder().setName(\"a\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size))));\n- boolQuery.should(constantScoreQuery(hasChildQuery(\"child2\", matchAllQuery(), ScoreMode.None).innerHit(new InnerHitBuilder().setName(\"b\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size))));\n- }\n- searchResponse = client().prepareSearch(\"idx\")\n- .setSize(numDocs)\n- .setTypes(\"parent\")\n- .addSort(\"_uid\", SortOrder.ASC)\n- .setQuery(boolQuery)\n- .get();\n- }\n+ BoolQueryBuilder boolQuery = new BoolQueryBuilder();\n+ boolQuery.should(constantScoreQuery(hasChildQuery(\"child1\", matchAllQuery(), ScoreMode.None)\n+ .innerHit(new InnerHitBuilder().setName(\"a\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size))));\n+ boolQuery.should(constantScoreQuery(hasChildQuery(\"child2\", matchAllQuery(), ScoreMode.None)\n+ .innerHit(new InnerHitBuilder().setName(\"b\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size))));\n+ SearchResponse searchResponse = client().prepareSearch(\"idx\")\n+ .setSize(numDocs)\n+ .setTypes(\"parent\")\n+ .addSort(\"_uid\", SortOrder.ASC)\n+ .setQuery(boolQuery)\n+ .get();\n \n assertNoFailures(searchResponse);\n assertHitCount(searchResponse, numDocs);\n@@ -560,19 +431,10 @@ public void testParentChildMultipleLayers() throws Exception {\n requests.add(client().prepareIndex(\"articles\", \"remark\", \"2\").setParent(\"2\").setRouting(\"2\").setSource(\"message\", \"bad\"));\n indexRandom(true, requests);\n \n- InnerHitsBuilder innerInnerHitsBuilder = new InnerHitsBuilder();\n- innerInnerHitsBuilder.addInnerHit(\"remark\", new InnerHitBuilder()\n- .setParentChildType(\"remark\")\n- .setQuery(matchQuery(\"message\", \"good\"))\n- );\n- InnerHitsBuilder innerHitsBuilder = new InnerHitsBuilder();\n- innerHitsBuilder.addInnerHit(\"comment\", new InnerHitBuilder()\n- .setParentChildType(\"comment\")\n- .setQuery(hasChildQuery(\"remark\", matchQuery(\"message\", \"good\"), ScoreMode.None))\n- .setInnerHitsBuilder(innerInnerHitsBuilder));\n SearchResponse response = client().prepareSearch(\"articles\")\n- .setQuery(hasChildQuery(\"comment\", hasChildQuery(\"remark\", matchQuery(\"message\", \"good\"), ScoreMode.None), ScoreMode.None))\n- .innerHits(innerHitsBuilder)\n+ .setQuery(hasChildQuery(\"comment\",\n+ hasChildQuery(\"remark\", matchQuery(\"message\", \"good\"), ScoreMode.None).innerHit(new InnerHitBuilder()),\n+ ScoreMode.None).innerHit(new InnerHitBuilder()))\n .get();\n \n assertNoFailures(response);\n@@ -590,18 +452,10 @@ public void 
testParentChildMultipleLayers() throws Exception {\n assertThat(innerHits.getAt(0).getId(), equalTo(\"1\"));\n assertThat(innerHits.getAt(0).type(), equalTo(\"remark\"));\n \n- innerInnerHitsBuilder = new InnerHitsBuilder();\n- innerInnerHitsBuilder.addInnerHit(\"remark\", new InnerHitBuilder()\n- .setParentChildType(\"remark\")\n- .setQuery(matchQuery(\"message\", \"bad\")));\n- innerHitsBuilder = new InnerHitsBuilder();\n- innerHitsBuilder.addInnerHit(\"comment\", new InnerHitBuilder()\n- .setParentChildType(\"comment\")\n- .setQuery(hasChildQuery(\"remark\", matchQuery(\"message\", \"bad\"), ScoreMode.None))\n- .setInnerHitsBuilder(innerInnerHitsBuilder));\n response = client().prepareSearch(\"articles\")\n- .setQuery(hasChildQuery(\"comment\", hasChildQuery(\"remark\", matchQuery(\"message\", \"bad\"), ScoreMode.None), ScoreMode.None))\n- .innerHits(innerHitsBuilder)\n+ .setQuery(hasChildQuery(\"comment\",\n+ hasChildQuery(\"remark\", matchQuery(\"message\", \"bad\"), ScoreMode.None).innerHit(new InnerHitBuilder()),\n+ ScoreMode.None).innerHit(new InnerHitBuilder()))\n .get();\n \n assertNoFailures(response);\n@@ -662,24 +516,18 @@ public void testNestedMultipleLayers() throws Exception {\n .endObject()));\n indexRandom(true, requests);\n \n- InnerHitsBuilder innerInnerHitsBuilder = new InnerHitsBuilder();\n- innerInnerHitsBuilder.addInnerHit(\"remark\", new InnerHitBuilder()\n- .setNestedPath(\"comments.remarks\")\n- .setQuery(matchQuery(\"comments.remarks.message\", \"good\")));\n- InnerHitsBuilder innerHitsBuilder = new InnerHitsBuilder();\n- innerHitsBuilder.addInnerHit(\"comment\", new InnerHitBuilder()\n- .setNestedPath(\"comments\")\n- .setQuery(nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"good\"), ScoreMode.Avg))\n- .setInnerHitsBuilder(innerInnerHitsBuilder)\n- );\n SearchResponse response = client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"good\"), ScoreMode.Avg), ScoreMode.Avg))\n- .innerHits(innerHitsBuilder).get();\n+ .setQuery(\n+ nestedQuery(\"comments\",\n+ nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"good\"), ScoreMode.Avg)\n+ .innerHit(new InnerHitBuilder().setName(\"remark\")),\n+ ScoreMode.Avg).innerHit(new InnerHitBuilder())\n+ ).get();\n assertNoFailures(response);\n assertHitCount(response, 1);\n assertSearchHit(response, 1, hasId(\"1\"));\n assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n- SearchHits innerHits = response.getHits().getAt(0).getInnerHits().get(\"comment\");\n+ SearchHits innerHits = response.getHits().getAt(0).getInnerHits().get(\"comments\");\n assertThat(innerHits.totalHits(), equalTo(1L));\n assertThat(innerHits.getHits().length, equalTo(1));\n assertThat(innerHits.getAt(0).getId(), equalTo(\"1\"));\n@@ -711,24 +559,18 @@ public void testNestedMultipleLayers() throws Exception {\n assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getField().string(), equalTo(\"remarks\"));\n assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getOffset(), equalTo(0));\n \n- innerInnerHitsBuilder = new InnerHitsBuilder();\n- innerInnerHitsBuilder.addInnerHit(\"remark\", new InnerHitBuilder()\n- .setNestedPath(\"comments.remarks\")\n- .setQuery(matchQuery(\"comments.remarks.message\", \"bad\")));\n- innerHitsBuilder = new InnerHitsBuilder();\n- innerHitsBuilder.addInnerHit(\"comment\", new InnerHitBuilder()\n- .setNestedPath(\"comments\")\n- 
.setQuery(nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"bad\"), ScoreMode.Avg))\n- .setInnerHitsBuilder(innerInnerHitsBuilder));\n response = client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"bad\"), ScoreMode.Avg), ScoreMode.Avg))\n- .innerHits(innerHitsBuilder)\n- .get();\n+ .setQuery(\n+ nestedQuery(\"comments\",\n+ nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"bad\"), ScoreMode.Avg)\n+ .innerHit(new InnerHitBuilder().setName(\"remark\")),\n+ ScoreMode.Avg).innerHit(new InnerHitBuilder())\n+ ).get();\n assertNoFailures(response);\n assertHitCount(response, 1);\n assertSearchHit(response, 1, hasId(\"2\"));\n assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n- innerHits = response.getHits().getAt(0).getInnerHits().get(\"comment\");\n+ innerHits = response.getHits().getAt(0).getInnerHits().get(\"comments\");\n assertThat(innerHits.totalHits(), equalTo(1L));\n assertThat(innerHits.getHits().length, equalTo(1));\n assertThat(innerHits.getAt(0).getId(), equalTo(\"2\"));\n@@ -863,22 +705,21 @@ public void testRoyals() throws Exception {\n requests.add(client().prepareIndex(\"royals\", \"baron\", \"baron4\").setParent(\"earl4\").setRouting(\"king\").setSource(\"{}\"));\n indexRandom(true, requests);\n \n- InnerHitsBuilder innerInnerHitsBuilder = new InnerHitsBuilder();\n- innerInnerHitsBuilder.addInnerHit(\"barons\", new InnerHitBuilder().setParentChildType(\"baron\"));\n- InnerHitsBuilder innerHitsBuilder = new InnerHitsBuilder();\n- innerHitsBuilder.addInnerHit(\"earls\", new InnerHitBuilder()\n- .setParentChildType(\"earl\")\n- .addSort(SortBuilders.fieldSort(\"_uid\").order(SortOrder.ASC))\n- .setSize(4)\n- .setInnerHitsBuilder(innerInnerHitsBuilder)\n- );\n- innerInnerHitsBuilder = new InnerHitsBuilder();\n- innerInnerHitsBuilder.addInnerHit(\"kings\", new InnerHitBuilder().setParentChildType(\"king\"));\n- innerHitsBuilder.addInnerHit(\"princes\", new InnerHitBuilder().setParentChildType(\"prince\")\n- .setInnerHitsBuilder(innerInnerHitsBuilder));\n SearchResponse response = client().prepareSearch(\"royals\")\n .setTypes(\"duke\")\n- .innerHits(innerHitsBuilder)\n+ .setQuery(boolQuery()\n+ .filter(hasParentQuery(\"prince\",\n+ hasParentQuery(\"king\", matchAllQuery(), false).innerHit(new InnerHitBuilder().setName(\"kings\")),\n+ false).innerHit(new InnerHitBuilder().setName(\"princes\"))\n+ )\n+ .filter(hasChildQuery(\"earl\",\n+ hasChildQuery(\"baron\", matchAllQuery(), ScoreMode.None).innerHit(new InnerHitBuilder().setName(\"barons\")),\n+ ScoreMode.None).innerHit(new InnerHitBuilder()\n+ .addSort(SortBuilders.fieldSort(\"_uid\").order(SortOrder.ASC))\n+ .setName(\"earls\")\n+ .setSize(4))\n+ )\n+ )\n .get();\n assertHitCount(response, 1);\n assertThat(response.getHits().getAt(0).getId(), equalTo(\"duke\"));\n@@ -1086,25 +927,4 @@ public void testDontExplode() throws Exception {\n assertHitCount(response, 1);\n }\n \n- public void testTopLevelInnerHitsWithQueryInnerHits() throws Exception {\n- // top level inner hits shouldn't overwrite query inner hits definitions\n-\n- assertAcked(prepareCreate(\"index1\").addMapping(\"child\", \"_parent\", \"type=parent\"));\n- List<IndexRequestBuilder> requests = new ArrayList<>();\n- requests.add(client().prepareIndex(\"index1\", \"parent\", \"1\").setSource(\"{}\"));\n- requests.add(client().prepareIndex(\"index1\", \"child\", 
\"2\").setParent(\"1\").setSource(\"{}\"));\n- indexRandom(true, requests);\n-\n- InnerHitsBuilder innerHitsBuilder = new InnerHitsBuilder();\n- innerHitsBuilder.addInnerHit(\"my-inner-hit\", new InnerHitBuilder().setParentChildType(\"child\"));\n- SearchResponse response = client().prepareSearch(\"index1\")\n- .setQuery(hasChildQuery(\"child\", new MatchAllQueryBuilder(), ScoreMode.None).innerHit(new InnerHitBuilder()))\n- .innerHits(innerHitsBuilder)\n- .get();\n- assertHitCount(response, 1);\n- assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(2));\n- assertThat(response.getHits().getAt(0).getInnerHits().get(\"child\").getAt(0).getId(), equalTo(\"2\"));\n- assertThat(response.getHits().getAt(0).getInnerHits().get(\"my-inner-hit\").getAt(0).getId(), equalTo(\"2\"));\n- }\n-\n }", "filename": "core/src/test/java/org/elasticsearch/search/innerhits/InnerHitsIT.java", "status": "modified" }, { "diff": "@@ -151,5 +151,6 @@ specifying the sort order with the `order` option.\n \n ==== Inner hits\n \n-* The format of top level inner hits has been changed to be more readable. All options are now set on the same level.\n- So the `path` and `type` options are specified on the same level where `query` and other options are specified.\n+* Top level inner hits syntax has been removed. Inner hits can now only be specified as part of the `nested`,\n+`has_child` and `has_parent` queries. Use cases previously only possible with top level inner hits can now be done\n+with inner hits defined inside the query dsl.", "filename": "docs/reference/migration/migrate_5_0/search.asciidoc", "status": "modified" }, { "diff": "@@ -226,78 +226,4 @@ An example of a response snippet that could be generated from the above search r\n }\n },\n ...\n---------------------------------------------------\n-\n-[[top-level-inner-hits]]\n-==== top level inner hits\n-\n-Besides defining inner hits on query and filters, inner hits can also be defined as a top level construct alongside the\n-`query` and `aggregations` definition. The main reason for using the top level inner hits definition is to let the\n-inner hits return documents that don't match with the main query. Also inner hits definitions can be nested via the\n-top level notation. Other than that, the inner hit definition inside the query should be used because that is the most\n-compact way for defining inner hits.\n-\n-The following snippet explains the basic structure of inner hits defined at the top level of the search request body:\n-\n-[source,js]\n---------------------------------------------------\n-\"inner_hits\" : {\n- \"<inner_hits_name>\" : {\n- \"<path|type>\" : {\n- \"<path-to-nested-object-field|child-or-parent-type>\" : {\n- <inner_hits_body>\n- [,\"inner_hits\" : { [<sub_inner_hits>]+ } ]?\n- }\n- }\n- }\n- [,\"<inner_hits_name_2>\" : { ... } ]*\n-}\n---------------------------------------------------\n-\n-Inside the `inner_hits` definition, first the name of the inner hit is defined then whether the inner_hit\n-is a nested by defining `path` or a parent/child based definition by defining `type`. The next object layer contains\n-the name of the nested object field if the inner_hits is nested or the parent or child type if the inner_hit definition\n-is parent/child based.\n-\n-Multiple inner hit definitions can be defined in a single request. In the `<inner_hits_body>` any option for features\n-that `inner_hits` support can be defined. 
Optionally another `inner_hits` definition can be defined in the `<inner_hits_body>`.\n-\n-An example that shows the use of nested inner hits via the top level notation:\n-\n-[source,js]\n---------------------------------------------------\n-{\n- \"query\" : {\n- \"nested\" : {\n- \"path\" : \"comments\",\n- \"query\" : {\n- \"match\" : {\"comments.message\" : \"[actual query]\"}\n- }\n- }\n- },\n- \"inner_hits\" : {\n- \"comment\" : { <1>\n- \"path\" : \"comments\", <2>\n- \"query\" : {\n- \"match\" : {\"comments.message\" : \"[different query]\"} <3>\n- }\n- }\n- }\n-}\n---------------------------------------------------\n-\n-<1> The inner hit definition with the name `comment`.\n-<2> The path option refers to the nested object field `comments`\n-<3> A query that runs to collect the nested inner documents for each search hit returned. If no query is defined all nested\n- inner documents will be included belonging to a search hit. This shows that it only make sense to the top level\n- inner hit definition if no query or a different query is specified.\n-\n-Additional options that are only available when using the top level inner hits notation:\n-\n-[horizontal]\n-`path`:: Defines the nested scope where hits will be collected from.\n-`type`:: Defines the parent or child type score where hits will be collected from.\n-`query`:: Defines the query that will run in the defined nested, parent or child scope to collect and score hits. By default all document in the scope will be matched.\n-\n-Either `path` or `type` must be defined. The `path` or `type` defines the scope from where hits are fetched and\n-used as inner hits.\n+--------------------------------------------------\n\\ No newline at end of file", "filename": "docs/reference/search/request/inner-hits.asciidoc", "status": "modified" } ] }
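The migration note above removes top-level inner hits in favour of inner hits defined on the `nested`, `has_child` and `has_parent` queries themselves. As a quick illustration of what the refactored tests in this PR do, here is a minimal Java sketch of the new form. It assumes the 5.x Java API; the builder methods (`innerHit`, `setName`, `setSize`) are taken from the test diff above, while the import locations and field names are assumptions for the example.

```java
import org.apache.lucene.search.join.ScoreMode;
import org.elasticsearch.index.query.InnerHitBuilder;
import org.elasticsearch.index.query.QueryBuilder;

import static org.elasticsearch.index.query.QueryBuilders.matchQuery;
import static org.elasticsearch.index.query.QueryBuilders.nestedQuery;

public class InnerHitsMigrationSketch {

    /**
     * Instead of a separate top-level "inner_hits" section, the inner hit
     * definition now hangs off the query that introduces the nested scope.
     */
    public static QueryBuilder nestedWithInnerHits() {
        return nestedQuery("comments", matchQuery("comments.message", "fox"), ScoreMode.Avg)
                .innerHit(new InnerHitBuilder().setName("comment").setSize(3));
    }
}
```

On the REST side the same idea applies: the `inner_hits` object is placed inside the `nested`/`has_child`/`has_parent` query body rather than at the top level of the search request.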
{ "body": "**Elasticsearch version**:\nmaster\n\n**JVM version**:\n1.8.0_51\n\n**OS version**:\nOSX\n\n**Description of the problem including expected versus actual behavior**:\nI'm getting negative numbers back for \"throttled time\" in reindex/update-by-query.\n\n**Steps to reproduce**:\n\n``` sh\n#Make a pile of docs.\nfor i in $(seq 1 10000); do curl -XPOST localhost:9200/test/test -d'{\"test\": \"test\"}'; echo; done\ncurl -XPOST localhost:9200/_refresh\n\n# Kick off a reindex and loo\ncurl -XPOST 'localhost:9200/test/_update_by_query?wait_for_completion=false'\nwhile curl -XGET 'localhost:9200/_tasks/?pretty&detailed=true'; do sleep .1; done\n```\n\nThat returned throttled times that were negative numbers. Small negative numbers, like `-56` and stuff. That doesn't make any sense.\n", "comments": [ { "body": "Note that I wasn't throttling the request at all.\n", "created_at": "2016-04-15T12:36:01Z" }, { "body": "It looks like I was wrong, it is the \"throttled_until\" times:\n\n```\n \"throttled_until_millis\" : -5\n```\n\nStill, those are wrong.\n", "created_at": "2016-04-15T20:37:23Z" } ], "number": 17783, "title": "Reindex: throttle times returning negative numbers" }
{ "body": "Just clamp the value at 0. It isn't useful to tell the user \"this\nthread should have woken 5ms ago\".\n\nCloses #17783\n", "number": 17799, "review_comments": [], "title": "Reindex should never report negative throttled_until" }
{ "commits": [ { "message": "Reindex: never report negative throttled_until\n\nJust clamp the value at 0. It isn't useful to tell the user \"this\nthread should have woken 5ms ago\".\n\nCloses #17783" } ], "files": [ { "diff": "@@ -38,6 +38,7 @@\n import java.util.concurrent.atomic.AtomicLong;\n import java.util.concurrent.atomic.AtomicReference;\n \n+import static java.lang.Math.max;\n import static java.lang.Math.round;\n import static org.elasticsearch.common.unit.TimeValue.timeValueNanos;\n \n@@ -93,7 +94,7 @@ private TimeValue throttledUntil() {\n if (delayed.future == null) {\n return timeValueNanos(0);\n }\n- return timeValueNanos(delayed.future.getDelay(TimeUnit.NANOSECONDS));\n+ return timeValueNanos(max(0, delayed.future.getDelay(TimeUnit.NANOSECONDS)));\n }\n \n /**", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java", "status": "modified" }, { "diff": "@@ -29,8 +29,11 @@\n import java.util.List;\n import java.util.concurrent.CopyOnWriteArrayList;\n import java.util.concurrent.CyclicBarrier;\n+import java.util.concurrent.Delayed;\n+import java.util.concurrent.ExecutionException;\n import java.util.concurrent.ScheduledFuture;\n import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.TimeoutException;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n import static org.elasticsearch.common.unit.TimeValue.parseTimeValue;\n@@ -206,4 +209,64 @@ public void onFailure(Throwable t) {\n }\n assertThat(errors, empty());\n }\n+\n+ public void testDelayNeverNegative() throws IOException {\n+ // Thread pool that returns a ScheduledFuture that claims to have a negative delay\n+ ThreadPool threadPool = new ThreadPool(\"test\") {\n+ public ScheduledFuture<?> schedule(TimeValue delay, String name, Runnable command) {\n+ return new ScheduledFuture<Void>() {\n+ @Override\n+ public long getDelay(TimeUnit unit) {\n+ return -1;\n+ }\n+\n+ @Override\n+ public int compareTo(Delayed o) {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public boolean cancel(boolean mayInterruptIfRunning) {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public boolean isCancelled() {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public boolean isDone() {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public Void get() throws InterruptedException, ExecutionException {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public Void get(long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException {\n+ throw new UnsupportedOperationException();\n+ }\n+ };\n+ }\n+ };\n+ try {\n+ // Have the task use the thread pool to delay a task that does nothing\n+ task.delayPrepareBulkRequest(threadPool, timeValueSeconds(0), new AbstractRunnable() {\n+ @Override\n+ protected void doRun() throws Exception {\n+ }\n+ @Override\n+ public void onFailure(Throwable t) {\n+ throw new UnsupportedOperationException();\n+ }\n+ });\n+ // Even though the future returns a negative delay we just return 0 because the time is up.\n+ assertEquals(timeValueSeconds(0), task.getStatus().getThrottledUntil());\n+ } finally {\n+ threadPool.shutdown();\n+ }\n+ }\n }", "filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/BulkByScrollTaskTests.java", "status": "modified" } ] }
{ "body": "Since we switched to points to index ip addresses, a couple things are not working anymore:\n- [x] range queries only support inclusive bounds (#17777)\n- [x] range aggregations do not work anymore (#17859)\n- [x] sorting on ip addresses fails since it tries to write binary bytes as an utf8 string when rendering sort values (#17959)\n- [x] sorting and aggregations across old and new indices do not work since the coordinating node gets longs from some shards and binary values from other shards and does not know how to reconcile them (#18593)\n- [x] terms aggregations return binary keys (#18003)\n", "comments": [], "number": 17971, "title": "Not all features work on ip fields" }
{ "body": "`ip` fields currently fail range queries when either bound is inclusive. This\ncommit makes ranges also work in the exclusive case to be consistent with other\ndata types.\n\nRelates to #17971\n", "number": 17777, "review_comments": [ { "body": "What about passing in the same value for upper and lower, and both inclusive=false. This should be an invalid range right? Do we have a check for that?\n", "created_at": "2016-04-15T15:47:39Z" } ], "title": "Add back range support to `ip` fields." }
{ "commits": [ { "message": "Add back range support to `ip` fields. #17777\n\n`ip` fields currently fail range queries when either bound is inclusive. This\ncommit makes ranges also work in the exclusive case to be consistent with other\ndata types." } ], "files": [ { "diff": "@@ -31,3 +31,5 @@ org.apache.lucene.index.IndexReader#getCombinedCoreAndDeletesKey()\n \n @defaultMessage Soon to be removed\n org.apache.lucene.document.FieldType#numericType()\n+\n+org.apache.lucene.document.InetAddressPoint#newPrefixQuery(java.lang.String, java.net.InetAddress, int) @LUCENE-7232", "filename": "buildSrc/src/main/resources/forbidden/es-all-signatures.txt", "status": "modified" }, { "diff": "@@ -0,0 +1,117 @@\n+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one or more\n+ * contributor license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright ownership.\n+ * The ASF licenses this file to You under the Apache License, Version 2.0\n+ * (the \"License\"); you may not use this file except in compliance with\n+ * the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.apache.lucene.document;\n+\n+import java.net.InetAddress;\n+import java.net.UnknownHostException;\n+import java.util.Arrays;\n+\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.util.NumericUtils;\n+import org.elasticsearch.common.SuppressForbidden;\n+\n+/**\n+ * Forked utility methods from Lucene's InetAddressPoint until LUCENE-7232 and\n+ * LUCENE-7234 are released.\n+ */\n+// TODO: remove me when we upgrade to Lucene 6.1\n+@SuppressForbidden(reason=\"uses InetAddress.getHostAddress\")\n+public final class XInetAddressPoint {\n+\n+ private XInetAddressPoint() {}\n+\n+ /** The minimum value that an ip address can hold. */\n+ public static final InetAddress MIN_VALUE;\n+ /** The maximum value that an ip address can hold. 
*/\n+ public static final InetAddress MAX_VALUE;\n+ static {\n+ MIN_VALUE = InetAddressPoint.decode(new byte[InetAddressPoint.BYTES]);\n+ byte[] maxValueBytes = new byte[InetAddressPoint.BYTES];\n+ Arrays.fill(maxValueBytes, (byte) 0xFF);\n+ MAX_VALUE = InetAddressPoint.decode(maxValueBytes);\n+ }\n+\n+ /**\n+ * Return the {@link InetAddress} that compares immediately greater than\n+ * {@code address}.\n+ * @throws ArithmeticException if the provided address is the\n+ * {@link #MAX_VALUE maximum ip address}\n+ */\n+ public static InetAddress nextUp(InetAddress address) {\n+ if (address.equals(MAX_VALUE)) {\n+ throw new ArithmeticException(\"Overflow: there is no greater InetAddress than \"\n+ + address.getHostAddress());\n+ }\n+ byte[] delta = new byte[InetAddressPoint.BYTES];\n+ delta[InetAddressPoint.BYTES-1] = 1;\n+ byte[] nextUpBytes = new byte[InetAddressPoint.BYTES];\n+ NumericUtils.add(InetAddressPoint.BYTES, 0, InetAddressPoint.encode(address), delta, nextUpBytes);\n+ return InetAddressPoint.decode(nextUpBytes);\n+ }\n+\n+ /**\n+ * Return the {@link InetAddress} that compares immediately less than\n+ * {@code address}.\n+ * @throws ArithmeticException if the provided address is the\n+ * {@link #MIN_VALUE minimum ip address}\n+ */\n+ public static InetAddress nextDown(InetAddress address) {\n+ if (address.equals(MIN_VALUE)) {\n+ throw new ArithmeticException(\"Underflow: there is no smaller InetAddress than \"\n+ + address.getHostAddress());\n+ }\n+ byte[] delta = new byte[InetAddressPoint.BYTES];\n+ delta[InetAddressPoint.BYTES-1] = 1;\n+ byte[] nextDownBytes = new byte[InetAddressPoint.BYTES];\n+ NumericUtils.subtract(InetAddressPoint.BYTES, 0, InetAddressPoint.encode(address), delta, nextDownBytes);\n+ return InetAddressPoint.decode(nextDownBytes);\n+ }\n+\n+ /** \n+ * Create a prefix query for matching a CIDR network range.\n+ *\n+ * @param field field name. must not be {@code null}.\n+ * @param value any host address\n+ * @param prefixLength the network prefix length for this address. This is also known as the subnet mask in the context of IPv4\n+ * addresses.\n+ * @throws IllegalArgumentException if {@code field} is null, or prefixLength is invalid.\n+ * @return a query matching documents with addresses contained within this network\n+ */\n+ // TODO: remove me when we upgrade to Lucene 6.0.1\n+ public static Query newPrefixQuery(String field, InetAddress value, int prefixLength) {\n+ if (value == null) {\n+ throw new IllegalArgumentException(\"InetAddress must not be null\");\n+ }\n+ if (prefixLength < 0 || prefixLength > 8 * value.getAddress().length) {\n+ throw new IllegalArgumentException(\"illegal prefixLength '\" + prefixLength\n+ + \"'. 
Must be 0-32 for IPv4 ranges, 0-128 for IPv6 ranges\");\n+ }\n+ // create the lower value by zeroing out the host portion, upper value by filling it with all ones.\n+ byte lower[] = value.getAddress();\n+ byte upper[] = value.getAddress();\n+ for (int i = prefixLength; i < 8 * lower.length; i++) {\n+ int m = 1 << (7 - (i & 7));\n+ lower[i >> 3] &= ~m;\n+ upper[i >> 3] |= m;\n+ }\n+ try {\n+ return InetAddressPoint.newRangeQuery(field, InetAddress.getByAddress(lower), InetAddress.getByAddress(upper));\n+ } catch (UnknownHostException e) {\n+ throw new AssertionError(e); // values are coming from InetAddress\n+ }\n+ }\n+}", "filename": "core/src/main/java/org/apache/lucene/document/XInetAddressPoint.java", "status": "added" }, { "diff": "@@ -23,9 +23,11 @@\n import org.apache.lucene.document.InetAddressPoint;\n import org.apache.lucene.document.SortedSetDocValuesField;\n import org.apache.lucene.document.StoredField;\n+import org.apache.lucene.document.XInetAddressPoint;\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.IndexReader;\n import org.apache.lucene.index.PointValues;\n+import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.Version;\n@@ -176,7 +178,7 @@ public Query termQuery(Object value, @Nullable QueryShardContext context) {\n if (fields.length == 2) {\n InetAddress address = InetAddresses.forString(fields[0]);\n int prefixLength = Integer.parseInt(fields[1]);\n- return InetAddressPoint.newPrefixQuery(name(), address, prefixLength);\n+ return XInetAddressPoint.newPrefixQuery(name(), address, prefixLength);\n } else {\n throw new IllegalArgumentException(\"Expected [ip/prefix] but was [\" + term + \"]\");\n }\n@@ -188,24 +190,30 @@ public Query termQuery(Object value, @Nullable QueryShardContext context) {\n \n @Override\n public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) {\n- if (includeLower == false || includeUpper == false) {\n- // TODO: should we drop range support entirely\n- throw new IllegalArgumentException(\"range on ip addresses only supports inclusive bounds\");\n- }\n InetAddress lower;\n if (lowerTerm == null) {\n- lower = InetAddressPoint.decode(new byte[16]);\n+ lower = XInetAddressPoint.MIN_VALUE;\n } else {\n lower = parse(lowerTerm);\n+ if (includeLower == false) {\n+ if (lower.equals(XInetAddressPoint.MAX_VALUE)) {\n+ return new MatchNoDocsQuery();\n+ }\n+ lower = XInetAddressPoint.nextUp(lower);\n+ }\n }\n \n InetAddress upper;\n if (upperTerm == null) {\n- byte[] bytes = new byte[16];\n- Arrays.fill(bytes, (byte) 255); \n- upper = InetAddressPoint.decode(bytes);\n+ upper = XInetAddressPoint.MAX_VALUE;\n } else {\n upper = parse(upperTerm);\n+ if (includeUpper == false) {\n+ if (upper.equals(XInetAddressPoint.MIN_VALUE)) {\n+ return new MatchNoDocsQuery();\n+ }\n+ upper = XInetAddressPoint.nextDown(upper);\n+ }\n }\n \n return InetAddressPoint.newRangeQuery(name(), lower, upper);\n@@ -215,7 +223,7 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower\n public Query fuzzyQuery(Object value, Fuzziness fuzziness, int prefixLength, int maxExpansions, boolean transpositions) {\n InetAddress base = parse(value);\n int mask = fuzziness.asInt();\n- return InetAddressPoint.newPrefixQuery(name(), base, mask);\n+ return XInetAddressPoint.newPrefixQuery(name(), base, mask);\n }\n \n @Override", "filename": 
"core/src/main/java/org/elasticsearch/index/mapper/ip/IpFieldMapper.java", "status": "modified" }, { "diff": "@@ -21,6 +21,8 @@\n import java.net.InetAddress;\n \n import org.apache.lucene.document.InetAddressPoint;\n+import org.apache.lucene.document.XInetAddressPoint;\n+import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.index.mapper.FieldTypeTestCase;\n@@ -66,10 +68,93 @@ public void testTermQuery() {\n \n ip = \"2001:db8::2:1\";\n String prefix = ip + \"/64\";\n- assertEquals(InetAddressPoint.newPrefixQuery(\"field\", InetAddresses.forString(ip), 64), ft.termQuery(prefix, null));\n+ assertEquals(XInetAddressPoint.newPrefixQuery(\"field\", InetAddresses.forString(ip), 64), ft.termQuery(prefix, null));\n \n ip = \"192.168.1.7\";\n prefix = ip + \"/16\";\n- assertEquals(InetAddressPoint.newPrefixQuery(\"field\", InetAddresses.forString(ip), 16), ft.termQuery(prefix, null));\n+ assertEquals(XInetAddressPoint.newPrefixQuery(\"field\", InetAddresses.forString(ip), 16), ft.termQuery(prefix, null));\n+ }\n+\n+ public void testRangeQuery() {\n+ MappedFieldType ft = createDefaultFieldType();\n+ ft.setName(\"field\");\n+\n+ assertEquals(\n+ InetAddressPoint.newRangeQuery(\"field\",\n+ InetAddresses.forString(\"::\"),\n+ XInetAddressPoint.MAX_VALUE),\n+ ft.rangeQuery(null, null, randomBoolean(), randomBoolean()));\n+\n+ assertEquals(\n+ InetAddressPoint.newRangeQuery(\"field\",\n+ InetAddresses.forString(\"::\"),\n+ InetAddresses.forString(\"192.168.2.0\")),\n+ ft.rangeQuery(null, \"192.168.2.0\", randomBoolean(), true));\n+\n+ assertEquals(\n+ InetAddressPoint.newRangeQuery(\"field\",\n+ InetAddresses.forString(\"::\"),\n+ InetAddresses.forString(\"192.168.1.255\")),\n+ ft.rangeQuery(null, \"192.168.2.0\", randomBoolean(), false));\n+\n+ assertEquals(\n+ InetAddressPoint.newRangeQuery(\"field\",\n+ InetAddresses.forString(\"2001:db8::\"),\n+ XInetAddressPoint.MAX_VALUE),\n+ ft.rangeQuery(\"2001:db8::\", null, true, randomBoolean()));\n+\n+ assertEquals(\n+ InetAddressPoint.newRangeQuery(\"field\",\n+ InetAddresses.forString(\"2001:db8::1\"),\n+ XInetAddressPoint.MAX_VALUE),\n+ ft.rangeQuery(\"2001:db8::\", null, false, randomBoolean()));\n+\n+ assertEquals(\n+ InetAddressPoint.newRangeQuery(\"field\",\n+ InetAddresses.forString(\"2001:db8::\"),\n+ InetAddresses.forString(\"2001:db8::ffff\")),\n+ ft.rangeQuery(\"2001:db8::\", \"2001:db8::ffff\", true, true));\n+\n+ assertEquals(\n+ InetAddressPoint.newRangeQuery(\"field\",\n+ InetAddresses.forString(\"2001:db8::1\"),\n+ InetAddresses.forString(\"2001:db8::fffe\")),\n+ ft.rangeQuery(\"2001:db8::\", \"2001:db8::ffff\", false, false));\n+\n+ assertEquals(\n+ InetAddressPoint.newRangeQuery(\"field\",\n+ InetAddresses.forString(\"2001:db8::2\"),\n+ InetAddresses.forString(\"2001:db8::\")),\n+ // same lo/hi values but inclusive=false so this won't match anything\n+ ft.rangeQuery(\"2001:db8::1\", \"2001:db8::1\", false, false));\n+\n+ // Upper bound is the min IP and is not inclusive\n+ assertEquals(new MatchNoDocsQuery(),\n+ ft.rangeQuery(\"::\", \"::\", true, false));\n+\n+ // Lower bound is the max IP and is not inclusive\n+ assertEquals(new MatchNoDocsQuery(),\n+ ft.rangeQuery(\"ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff\", \"ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff\", false, true));\n+\n+ assertEquals(\n+ InetAddressPoint.newRangeQuery(\"field\",\n+ InetAddresses.forString(\"::\"),\n+ 
InetAddresses.forString(\"::fffe:ffff:ffff\")),\n+ // same lo/hi values but inclusive=false so this won't match anything\n+ ft.rangeQuery(\"::\", \"0.0.0.0\", true, false));\n+\n+ assertEquals(\n+ InetAddressPoint.newRangeQuery(\"field\",\n+ InetAddresses.forString(\"::1:0:0:0\"),\n+ XInetAddressPoint.MAX_VALUE),\n+ // same lo/hi values but inclusive=false so this won't match anything\n+ ft.rangeQuery(\"255.255.255.255\", \"ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff\", false, true));\n+\n+ assertEquals(\n+ // lower bound is ipv4, upper bound is ipv6\n+ InetAddressPoint.newRangeQuery(\"field\",\n+ InetAddresses.forString(\"192.168.1.7\"),\n+ InetAddresses.forString(\"2001:db8::\")),\n+ ft.rangeQuery(\"::ffff:c0a8:107\", \"2001:db8::\", true, true));\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/ip/IpFieldTypeTests.java", "status": "modified" } ] }
{ "body": "using elasticsearch 1.5.2 I can index values greater than 127 for a byte field without getting any error. The values just overflow internally and the resulting numbers can be searched for.\ne.g. indexing 250 ends up as -6\nindexing 260 ends up as 4\n\n```\ncurl -XPUT localhost:9200/test -d '\n{\n \"mappings\" : {\n \"foo\" : {\n \"properties\" : {\n \"field1\" : { \"type\" : \"byte\" }\n }\n }\n }\n}'\n\ncurl -XPUT localhost:9200/test/foo/one -d '\n{\n \"field1\" : 127\n}'\n\ncurl -XPUT localhost:9200/test/foo/two -d '\n{\n \"field1\" : 250\n}'\n\ncurl -XPUT localhost:9200/test/foo/three -d '\n{\n \"field1\" : 260\n}'\n```\n\nnow searching like this...\n\n```\ncurl -XGET localhost:9200/test/_search?search_type=count -d '\n{\n \"aggs\" : {\n \"values\" : {\n \"terms\" : { \"field\" : \"field1\" }\n }\n }\n}'\n```\n\nreturns the following resultset\n\n```\n{\n \"took\": 3,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 5,\n \"successful\": 5,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 3,\n \"max_score\": 0,\n \"hits\": []\n },\n \"aggregations\": {\n \"values\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": [\n {\n \"key\": -6,\n \"doc_count\": 1\n },\n {\n \"key\": 4,\n \"doc_count\": 1\n },\n {\n \"key\": 127,\n \"doc_count\": 1\n }\n ]\n }\n }\n}\n```\n\ntrying the same with a 'short' number, you get an error message for document 'two'\n\n```\ncurl -XPUT localhost:9200/test -d '\n{\n \"mappings\" : {\n \"foo\" : {\n \"properties\" : {\n \"field1\" : { \"type\" : \"short\" }\n }\n }\n }\n}'\n\n\ncurl -XPUT localhost:9200/test/foo/one -d '\n{\n \"field1\" : 31000\n}'\n\ncurl -XPUT localhost:9200/test/foo/two -d '\n{\n \"field1\" : 33000\n}'\n```\n\n```\n{\n \"error\": \"RemoteTransportException[[dev-test-mon-es-02][inet[/192.168.110.83:9300]][indices:data/write/index]]; nested: MapperParsingException[failed to parse [field1]]; nested: JsonParseException[Numeric value (33000) out of range of Java short\\n at [Source: UNKNOWN; line: 2, column: 19]]; \",\n \"status\": 400\n}\n```\n", "comments": [ { "body": "Even with `coerce:false`, the out-of-range byte value doesn't fail, and the short shouldn't fail with `coerce:true`, but does...\n", "created_at": "2015-06-05T14:12:17Z" }, { "body": "Fields mapped as byte actually get read as short and then cast to byte. E.g. if you map the field as byte and send in the 33000 value, you get the same error (e.g. out of range for **short**), e.g.:\n\n```\nat org.elasticsearch.common.xcontent.support.AbstractXContentParser.shortValue(AbstractXContentParser.java:108)\nat org.elasticsearch.index.mapper.core.ByteFieldMapper.innerParseCreateField(ByteFieldMapper.java:289)\n```\n\nFor consistency, `AbstractXContentParser.byteValue()` method could be created that ultimately could call `JsonParser.getByteValue()`, which does include byte range check, assuming this behavior of the range check in the parser is desired. However, this is why coerce has no effect.\n\nCoerce works converting String values, but not for down-casting numeric values for short/byte fields, since the range check is in JsonParser in the latter case, e.g. `ensureNumberConversion()` is too late, the parser will have thrown the exception by then if the incoming value is numeric.\n\nJsonParser only checks range for short and byte (e.g. not for int). So an approach could be to always parse the value as int and then cast down for short/byte fields. `ensureNumberConversion()` would then ensure range enforcement as needed. 
I think this approach would be more consistent overall than relying on `JsonParser.getShortValue()`/`JsonParser.getByteValue()` for short/byte, and `ensureNumberConversion()` for int/float range check.\n\nIf you agree, I can implement this.\n\nCoerce documentations also doesn't explicitly mention the behavior of casting down, it only mentions truncating fractions, which is not exactly the same thing, strictly speaking. Maybe it should be spelled out that not only will your 10.2 be converted to 10, but also 266 will become 10 for byte fields.\n\nAnother consistency issue is that even if coerce is set and String values are accepted, fractions cause `NumberFormatException`, so while `330.12` and `\"330\"` are accepted, `\"330.12\"` is not. JsonParser explicitly checks for the `.` to decide between `long` and `double`, the same approach could be used here as well.\n", "created_at": "2015-06-16T22:40:22Z" }, { "body": "@szroland +1 this proposal makes sense to me.\n\n> If you agree, I can implement this.\n\nFeel free to ping me when you have a PR ready.\n", "created_at": "2015-06-18T13:02:00Z" }, { "body": "Looking at this a bit more I think we might want to be just a little bit careful, and make sure the semantics of coerce is fully intended. Bytes are problematic, because they are read as shorts and then casted to byte. So to illustrate the issue, let me use a short field, and the value 32768, which is `Short.MAX_VALUE` + 1, e.g. fits only into an int.\n\nCurrent behavior:\n\n| Input | Output (coerce=true) | Output (coerce=false) |\n| --- | --- | --- |\n| 42 | 42 | 42 |\n| 42.12 | 42 | IllegalArgumentException |\n| \"42\" | 42 | IllegalArgumentException |\n| \"42.12\" | NumberFormatException | IllegalArgumentException |\n| 32768 | JsonParseException | JsonParseException |\n| 32768.12 | JsonParseException | JsonParseException |\n| \"32768\" | NumberFormatException | IllegalArgumentException |\n| \"32768.12\" | NumberFormatException | IllegalArgumentException |\n| true | JsonParseException | JsonParseException |\n| \"true\" | NumberFormatException | IllegalArgumentException |\n\nThe issue is that 42.12 is accepted, but \"42.12\" is not, even though coerce also supposed to mean to accept the number as a string.\nOther issue is that overflow is sometimes an IllegalArgumentException, which can be ignored using ignore_malformed, sometimes it is JsonParseException, which can not be ignored.\nWith bytes, there is also the automatic cast, which results in even the integer of the number changing (e.g. 
from 266 to 10).\n\nSo one possible behavior is that the integer part must be in range, but when `coerce` is true the number can have a fractional part which is cut off and/or it can also be input as string.\n\n| Input | Output (coerce=true) | Output (coerce=false) |\n| --- | --- | --- |\n| 42 | 42 | 42 |\n| 42.12 | 42 | IllegalArgumentException |\n| \"42\" | 42 | IllegalArgumentException |\n| \"42.12\" | **42** | IllegalArgumentException |\n| 32768 | **IllegalArgumentException** | **IllegalArgumentException** |\n| 32768.12 | **IllegalArgumentException** | **IllegalArgumentException** |\n| \"32768\" | NumberFormatException | IllegalArgumentException |\n| \"32768.12\" | NumberFormatException | IllegalArgumentException |\n| true | **IllegalArgumentException** | **IllegalArgumentException** |\n| \"true\" | NumberFormatException | IllegalArgumentException |\n\nThe alternative is to allow the implicit casting of out-of-range values when coerce is true, not only for bytes but for other types as well, but that can be a little bit confusing e.g. turning into negative value in this case.\n\n| Input | Output (coerce=true) | Output (coerce=false) |\n| --- | --- | --- |\n| 42 | 42 | 42 |\n| 42.12 | 42 | IllegalArgumentException |\n| \"42\" | 42 | IllegalArgumentException |\n| \"42.12\" | **42** | IllegalArgumentException |\n| 32768 | :boom: **-32768** | **IllegalArgumentException** |\n| 32768.12 | :boom: **-32768** | **IllegalArgumentException** |\n| \"32768\" | :boom: **-32768** | IllegalArgumentException |\n| \"32768.12\" | :boom: **-32768** | IllegalArgumentException |\n| true | **IllegalArgumentException** | **IllegalArgumentException** |\n| \"true\" | NumberFormatException | IllegalArgumentException |\n\nReading back I'm getting mixed signals on this casting aspect of coerce from @passing and @clintongormley.\n", "created_at": "2015-06-18T22:06:30Z" }, { "body": "When is it planned to be fixed? We have a use case where we need this functionality.\n", "created_at": "2015-11-29T09:29:53Z" }, { "body": "It seems that not only byte allows decimals, I see the same behavior with short, integer, and long too.\nWith this mapping\n\n```\n{ \"mappings\": {\n \"example\": {\n \"properties\": {\n \"number_long\": { \"type\": \"long\", \"coerce\": true },\n \"number_integer\": { \"type\": \"integer\", \"coerce\": true },\n \"number_short\": { \"type\": \"short\", \"coerce\": true }}}}}\n```\n\nand document\n\n```\n{ \"number_long\": 42.5,\n \"number_integer\": 42.5,\n \"number_short\": 42.5 }\n```\n\nI am getting values that are not integers. `_search?fields=number_long,number_integer,number_short,_source`\n\n```\n\"hits\": [{\n \"_index\": \"test-integers\",\n \"_type\": \"example\",\n \"_id\": \"1\",\n \"_score\": 1,\n \"_source\": {\n \"number_long\": 42.5,\n \"number_integer\": 42.5,\n \"number_short\": 42.5 },\n \"fields\": {\n \"number_long\": [ 42.5 ],\n \"number_integer\": [ 42.5 ],\n \"number_short\": [ 42.5 ]}}]\n```\n\nBehavior is same on 1.7.4 and 2.2.0.\n\n@szroland How you managed to get 42 from 42.12 with coercion enabled?\n", "created_at": "2016-02-23T18:46:25Z" }, { "body": "@martinhynar those values are retrieved from the `_source`, which elasticsearch does not modify. 
So if the source contains 42.5 for a field value, trying to read this value from the source will always return 42.5 regardless of the mappings.\n", "created_at": "2016-02-24T05:40:17Z" }, { "body": "@jpountz Thanks for explanation.\nI updated my example locally by adding `\"store\":true` to all fields and now the values returned by search are all 42. I did not realized that store is false by default and therefore values are extracted from `_source`.\n", "created_at": "2016-02-24T07:58:17Z" } ], "number": 11513, "title": "for fields with numeric type 'byte', elasticsearch accepts values greater than 127 and indexes them wrongly" }
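The behaviour reported in this issue comes down to plain Java narrowing: the mapper reads the value with a wider type and then casts down, so out-of-range values silently wrap instead of failing. A small standalone example of the wrap-around, using the values from the issue:

```java
public class ByteOverflowSketch {
    public static void main(String[] args) {
        // A narrowing cast keeps only the low 8 bits, which is why 250 is
        // indexed as -6 and 260 as 4 when no range check happens before the cast.
        System.out.println((byte) 250);  // -6
        System.out.println((byte) 260);  // 4
        System.out.println((byte) 127);  // 127, the largest value a byte can hold
        // A short behaves the same way once the parser's range check is bypassed:
        System.out.println((short) 33000); // -32536
    }
}
```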
{ "body": "This makes all numeric fields including `date`, `ip` and `token_count` use\npoints instead of the inverted index as a lookup structure. This is expected\nto perform worse for exact queries, but faster for range queries. It also\nrequires less storage.\n\nNotes about how the change works:\n- Numeric mappers have been split into a legacy version that is essentially\n the current mapper, and a new version that uses points, eg.\n LegacyDateFieldMapper and DateFieldMapper.\n- Since new and old fields have the same names, the decision about which one\n to use is made based on the index creation version.\n- If you try to force using a legacy field on a new index or a field that uses\n points on an old index, you will get an exception.\n- IP addresses now support IPv6 via Lucene's InetAddressPoint and store them\n in SORTED_SET doc values using the same encoding (fixed length of 16 bytes\n and sortable).\n- The internal MappedFieldType that is stored by the new mappers does not have\n any of the points-related properties set. Instead, it keeps setting the index\n options when parsing the `index` property of mappings and does\n `if (fieldType.indexOptions() != IndexOptions.NONE) { // add point field }`\n when parsing documents.\n\nKnown issues that won't fix:\n- You can't use numeric fields in significant terms aggregations anymore since\n this requires document frequencies, which points do not record.\n- Term queries on numeric fields will now return constant scores instead of\n giving better scores to the rare values.\n\nKnown issues that we could work around (in follow-up PRs, this one is too large\nalready):\n- Range queries on `ip` addresses only work if both the lower and upper bounds\n are inclusive (exclusive bounds are not exposed in Lucene). We could either\n decide to implement it, or drop range support entirely and tell users to\n query subnets using the CIDR notation instead.\n- Since IP addresses now use a different representation for doc values,\n aggregations will fail when running a terms aggregation on an ip field on a\n list of indices that contains both pre-5.0 and 5.0 indices.\n- The ip range aggregation does not work on the new ip field. We need to either\n implement range aggs for SORTED_SET doc values or drop support for ip ranges\n and tell users to use filters instead. #17700\n\nCloses #16751\nCloses #17007\nCloses #11513\n", "number": 17746, "review_comments": [], "title": "Use the new points API to index numeric fields." }
{ "commits": [ { "message": "Use the new points API to index numeric fields. #17746\n\nThis makes all numeric fields including `date`, `ip` and `token_count` use\npoints instead of the inverted index as a lookup structure. This is expected\nto perform worse for exact queries, but faster for range queries. It also\nrequires less storage.\n\nNotes about how the change works:\n - Numeric mappers have been split into a legacy version that is essentially\n the current mapper, and a new version that uses points, eg.\n LegacyDateFieldMapper and DateFieldMapper.\n - Since new and old fields have the same names, the decision about which one\n to use is made based on the index creation version.\n - If you try to force using a legacy field on a new index or a field that uses\n points on an old index, you will get an exception.\n - IP addresses now support IPv6 via Lucene's InetAddressPoint and store them\n in SORTED_SET doc values using the same encoding (fixed length of 16 bytes\n and sortable).\n - The internal MappedFieldType that is stored by the new mappers does not have\n any of the points-related properties set. Instead, it keeps setting the index\n options when parsing the `index` property of mappings and does\n `if (fieldType.indexOptions() != IndexOptions.NONE) { // add point field }`\n when parsing documents.\n\nKnown issues that won't fix:\n - You can't use numeric fields in significant terms aggregations anymore since\n this requires document frequencies, which points do not record.\n - Term queries on numeric fields will now return constant scores instead of\n giving better scores to the rare values.\n\nKnown issues that we could work around (in follow-up PRs, this one is too large\nalready):\n - Range queries on `ip` addresses only work if both the lower and upper bounds\n are inclusive (exclusive bounds are not exposed in Lucene). We could either\n decide to implement it, or drop range support entirely and tell users to\n query subnets using the CIDR notation instead.\n - Since IP addresses now use a different representation for doc values,\n aggregations will fail when running a terms aggregation on an ip field on a\n list of indices that contains both pre-5.0 and 5.0 indices.\n - The ip range aggregation does not work on the new ip field. We need to either\n implement range aggs for SORTED_SET doc values or drop support for ip ranges\n and tell users to use filters instead. 
#17700\n\nCloses #16751\nCloses #17007\nCloses #11513" } ], "files": [ { "diff": "@@ -471,12 +471,12 @@\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]ParseContext.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]ParsedDocument.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]CompletionFieldMapper.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]DateFieldMapper.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]DoubleFieldMapper.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]FloatFieldMapper.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]NumberFieldMapper.java\" checks=\"LineLength\" />\n+ <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]LegacyDateFieldMapper.java\" checks=\"LineLength\" />\n+ <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]LegacyDoubleFieldMapper.java\" checks=\"LineLength\" />\n+ <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]LegacyFloatFieldMapper.java\" checks=\"LineLength\" />\n+ <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]LegacyNumberFieldMapper.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]StringFieldMapper.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]TokenCountFieldMapper.java\" checks=\"LineLength\" />\n+ <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]LegacyTokenCountFieldMapper.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]TypeParsers.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]geo[/\\\\]BaseGeoPointFieldMapper.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]geo[/\\\\]GeoPointFieldMapper.java\" checks=\"LineLength\" />\n@@ -1070,8 +1070,8 @@\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]BooleanFieldMapperTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]CompletionFieldTypeTests.java\" checks=\"LineLength\" 
/>\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]MultiFieldCopyToMapperTests.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]TokenCountFieldMapperTests.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]date[/\\\\]SimpleDateMappingTests.java\" checks=\"LineLength\" />\n+ <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]core[/\\\\]LegacyTokenCountFieldMapperTests.java\" checks=\"LineLength\" />\n+ <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]date[/\\\\]LegacyDateMappingTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]dynamictemplate[/\\\\]genericstore[/\\\\]GenericStoreDynamicTemplateTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]dynamictemplate[/\\\\]pathmatch[/\\\\]PathMatchDynamicTemplateTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]dynamictemplate[/\\\\]simple[/\\\\]SimpleDynamicTemplatesTests.java\" checks=\"LineLength\" />\n@@ -1087,12 +1087,12 @@\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]index[/\\\\]IndexTypeMapperTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]internal[/\\\\]FieldNamesFieldMapperTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]internal[/\\\\]TypeFieldMapperTests.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]ip[/\\\\]SimpleIpMappingTests.java\" checks=\"LineLength\" />\n+ <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]ip[/\\\\]LegacyIpMappingTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]merge[/\\\\]TestMergeMapperTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]multifield[/\\\\]MultiFieldTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]multifield[/\\\\]merge[/\\\\]JavaMultiFieldMergeTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]nested[/\\\\]NestedMappingTests.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]numeric[/\\\\]SimpleNumericTests.java\" checks=\"LineLength\" />\n+ <suppress 
files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]numeric[/\\\\]LegacyNumericTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]object[/\\\\]NullValueObjectMappingTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]object[/\\\\]SimpleObjectMappingTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]mapper[/\\\\]parent[/\\\\]ParentMappingTests.java\" checks=\"LineLength\" />", "filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml", "status": "modified" }, { "diff": "@@ -28,3 +28,6 @@ java.security.MessageDigest#clone() @ use org.elasticsearch.common.hash.MessageD\n \n @defaultMessage this should not have been added to lucene in the first place\n org.apache.lucene.index.IndexReader#getCombinedCoreAndDeletesKey()\n+\n+@defaultMessage Soon to be removed\n+org.apache.lucene.document.FieldType#numericType()", "filename": "buildSrc/src/main/resources/forbidden/es-all-signatures.txt", "status": "modified" }, { "diff": "@@ -41,6 +41,7 @@\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.core.DateFieldMapper;\n+import org.elasticsearch.index.mapper.core.LegacyDateFieldMapper;\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.query.support.QueryParsers;\n \n@@ -336,11 +337,12 @@ private Query getRangeQuerySingle(String field, String part1, String part2,\n \n try {\n Query rangeQuery;\n- if (currentFieldType instanceof DateFieldMapper.DateFieldType && settings.timeZone() != null) {\n- DateFieldMapper.DateFieldType dateFieldType =\n- (DateFieldMapper.DateFieldType) this.currentFieldType;\n- rangeQuery = dateFieldType.rangeQuery(part1, part2, startInclusive, endInclusive,\n- settings.timeZone(), null);\n+ if (currentFieldType instanceof LegacyDateFieldMapper.DateFieldType && settings.timeZone() != null) {\n+ LegacyDateFieldMapper.DateFieldType dateFieldType = (LegacyDateFieldMapper.DateFieldType) this.currentFieldType;\n+ rangeQuery = dateFieldType.rangeQuery(part1, part2, startInclusive, endInclusive, settings.timeZone(), null);\n+ } else if (currentFieldType instanceof DateFieldMapper.DateFieldType && settings.timeZone() != null) {\n+ DateFieldMapper.DateFieldType dateFieldType = (DateFieldMapper.DateFieldType) this.currentFieldType;\n+ rangeQuery = dateFieldType.rangeQuery(part1, part2, startInclusive, endInclusive, settings.timeZone(), null);\n } else {\n rangeQuery = currentFieldType.rangeQuery(part1, part2, startInclusive, endInclusive);\n }", "filename": "core/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java", "status": "modified" }, { "diff": "@@ -19,31 +19,35 @@\n \n package org.elasticsearch.action.fieldstats;\n \n+import org.apache.lucene.document.InetAddressPoint;\n import org.apache.lucene.util.BytesRef;\n+import org.apache.lucene.util.StringHelper;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n+import 
org.elasticsearch.common.network.NetworkAddress;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentBuilderString;\n-import org.elasticsearch.index.mapper.ip.IpFieldMapper;\n-import org.joda.time.DateTime;\n \n import java.io.IOException;\n+import java.net.InetAddress;\n+import java.net.UnknownHostException;\n \n-public abstract class FieldStats<T extends Comparable<T>> implements Streamable, ToXContent {\n+public abstract class FieldStats<T> implements Streamable, ToXContent {\n \n- private byte type;\n+ private final byte type;\n private long maxDoc;\n private long docCount;\n private long sumDocFreq;\n private long sumTotalTermFreq;\n protected T minValue;\n protected T maxValue;\n \n- protected FieldStats() {\n+ protected FieldStats(int type) {\n+ this.type = (byte) type;\n }\n \n protected FieldStats(int type, long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq) {\n@@ -148,17 +152,6 @@ public T getMaxValue() {\n */\n protected abstract T valueOf(String value, String optionalFormat);\n \n- /**\n- * @param value\n- * The value to be converted to a String\n- * @param optionalFormat\n- * A string describing how to print the specified value. Whether\n- * this parameter is supported depends on the implementation. If\n- * optionalFormat is specified and the implementation doesn't\n- * support it an {@link UnsupportedOperationException} is thrown\n- */\n- public abstract String stringValueOf(Object value, String optionalFormat);\n-\n /**\n * Merges the provided stats into this stats instance.\n */\n@@ -181,16 +174,18 @@ public void append(FieldStats stats) {\n }\n }\n \n+ protected abstract int compare(T a, T b);\n+\n /**\n * @return <code>true</code> if this instance matches with the provided index constraint, otherwise <code>false</code> is returned\n */\n public boolean match(IndexConstraint constraint) {\n int cmp;\n T value = valueOf(constraint.getValue(), constraint.getOptionalFormat());\n if (constraint.getProperty() == IndexConstraint.Property.MIN) {\n- cmp = minValue.compareTo(value);\n+ cmp = compare(minValue, value);\n } else if (constraint.getProperty() == IndexConstraint.Property.MAX) {\n- cmp = maxValue.compareTo(value);\n+ cmp = compare(maxValue, value);\n } else {\n throw new IllegalArgumentException(\"Unsupported property [\" + constraint.getProperty() + \"]\");\n }\n@@ -246,9 +241,25 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeLong(sumTotalTermFreq);\n }\n \n- public static class Long extends FieldStats<java.lang.Long> {\n+ private static abstract class ComparableFieldStats<T extends Comparable<? 
super T>> extends FieldStats<T> {\n+ protected ComparableFieldStats(int type) {\n+ super(type);\n+ }\n+\n+ protected ComparableFieldStats(int type, long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq) {\n+ super(type, maxDoc, docCount, sumDocFreq, sumTotalTermFreq);\n+ }\n+\n+ @Override\n+ protected int compare(T a, T b) {\n+ return a.compareTo(b);\n+ }\n+ }\n+\n+ public static class Long extends ComparableFieldStats<java.lang.Long> {\n \n public Long() {\n+ super(0);\n }\n \n public Long(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, long minValue, long maxValue) {\n@@ -287,18 +298,6 @@ protected java.lang.Long valueOf(String value, String optionalFormat) {\n return java.lang.Long.valueOf(value);\n }\n \n- @Override\n- public String stringValueOf(Object value, String optionalFormat) {\n- if (optionalFormat != null) {\n- throw new UnsupportedOperationException(\"custom format isn't supported\");\n- }\n- if (value instanceof Number) {\n- return java.lang.Long.toString(((Number) value).longValue());\n- } else {\n- throw new IllegalArgumentException(\"value must be a Long: \" + value);\n- }\n- }\n-\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n@@ -315,9 +314,10 @@ public void writeTo(StreamOutput out) throws IOException {\n \n }\n \n- public static final class Double extends FieldStats<java.lang.Double> {\n+ public static final class Double extends ComparableFieldStats<java.lang.Double> {\n \n public Double() {\n+ super(2);\n }\n \n public Double(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, double minValue, double maxValue) {\n@@ -352,18 +352,6 @@ protected java.lang.Double valueOf(String value, String optionalFormat) {\n return java.lang.Double.valueOf(value);\n }\n \n- @Override\n- public String stringValueOf(Object value, String optionalFormat) {\n- if (optionalFormat != null) {\n- throw new UnsupportedOperationException(\"custom format isn't supported\");\n- }\n- if (value instanceof Number) {\n- return java.lang.Double.toString(((Number) value).doubleValue());\n- } else {\n- throw new IllegalArgumentException(\"value must be a Double: \" + value);\n- }\n- }\n-\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n@@ -380,9 +368,10 @@ public void writeTo(StreamOutput out) throws IOException {\n \n }\n \n- public static final class Text extends FieldStats<BytesRef> {\n+ public static final class Text extends ComparableFieldStats<BytesRef> {\n \n public Text() {\n+ super(3);\n }\n \n public Text(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, BytesRef minValue, BytesRef maxValue) {\n@@ -421,18 +410,6 @@ protected BytesRef valueOf(String value, String optionalFormat) {\n return new BytesRef(value);\n }\n \n- @Override\n- public String stringValueOf(Object value, String optionalFormat) {\n- if (optionalFormat != null) {\n- throw new UnsupportedOperationException(\"custom format isn't supported\");\n- }\n- if (value instanceof BytesRef) {\n- return ((BytesRef) value).utf8ToString();\n- } else {\n- throw new IllegalArgumentException(\"value must be a BytesRef: \" + value);\n- }\n- }\n-\n @Override\n protected void toInnerXContent(XContentBuilder builder) throws IOException {\n builder.field(Fields.MIN_VALUE, getMinValueAsString());\n@@ -486,25 +463,6 @@ protected java.lang.Long valueOf(String value, String optionalFormat) {\n return dateFormatter.parser().parseMillis(value);\n }\n \n- @Override\n- public String 
stringValueOf(Object value, String optionalFormat) {\n- FormatDateTimeFormatter dateFormatter = this.dateFormatter;\n- if (optionalFormat != null) {\n- dateFormatter = Joda.forPattern(optionalFormat);\n- }\n- long millis;\n- if (value instanceof java.lang.Long) {\n- millis = ((java.lang.Long) value).longValue();\n- } else if (value instanceof DateTime) {\n- millis = ((DateTime) value).getMillis();\n- } else if (value instanceof BytesRef) {\n- millis = dateFormatter.parser().parseMillis(((BytesRef) value).utf8ToString());\n- } else {\n- throw new IllegalArgumentException(\"value must be either a DateTime or a long: \" + value);\n- }\n- return dateFormatter.printer().print(millis);\n- }\n-\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n@@ -519,25 +477,59 @@ public void writeTo(StreamOutput out) throws IOException {\n \n }\n \n- public static class Ip extends Long {\n+ public static class Ip extends FieldStats<InetAddress> {\n \n- public Ip(int maxDoc, int docCount, long sumDocFreq, long sumTotalTermFreq, long minValue, long maxValue) {\n- super(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, minValue, maxValue);\n- }\n+ private InetAddress minValue, maxValue;\n \n- protected Ip(int type, long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, long minValue, long maxValue) {\n- super(type, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, minValue, maxValue);\n+ public Ip(int maxDoc, int docCount, long sumDocFreq, long sumTotalTermFreq,\n+ InetAddress minValue, InetAddress maxValue) {\n+ super(4, maxDoc, docCount, sumDocFreq, sumTotalTermFreq);\n+ this.minValue = minValue;\n+ this.maxValue = maxValue;\n }\n \n public Ip() {\n+ super(4);\n+ }\n+\n+ @Override\n+ public String getMinValueAsString() {\n+ return NetworkAddress.format(minValue);\n+ }\n+\n+ @Override\n+ public String getMaxValueAsString() {\n+ return NetworkAddress.format(maxValue);\n }\n \n @Override\n- public String stringValueOf(Object value, String optionalFormat) {\n- if (value instanceof BytesRef) {\n- return super.stringValueOf(IpFieldMapper.ipToLong(((BytesRef) value).utf8ToString()), optionalFormat);\n+ protected InetAddress valueOf(String value, String optionalFormat) {\n+ try {\n+ return InetAddress.getByName(value);\n+ } catch (UnknownHostException e) {\n+ throw new RuntimeException(e);\n }\n- return super.stringValueOf(value, optionalFormat);\n+ }\n+\n+ @Override\n+ protected int compare(InetAddress a, InetAddress b) {\n+ byte[] ab = InetAddressPoint.encode(a);\n+ byte[] bb = InetAddressPoint.encode(b);\n+ return StringHelper.compare(ab.length, ab, 0, bb, 0);\n+ }\n+\n+ @Override\n+ public void readFrom(StreamInput in) throws IOException {\n+ super.readFrom(in);\n+ minValue = valueOf(in.readString(), null);\n+ maxValue = valueOf(in.readString(), null);\n+ }\n+\n+ @Override\n+ public void writeTo(StreamOutput out) throws IOException {\n+ super.writeTo(out);\n+ out.writeString(NetworkAddress.format(minValue));\n+ out.writeString(NetworkAddress.format(maxValue));\n }\n }\n \n@@ -557,10 +549,12 @@ public static FieldStats read(StreamInput in) throws IOException {\n case 3:\n stats = new Text();\n break;\n+ case 4:\n+ stats = new Ip();\n+ break;\n default:\n throw new IllegalArgumentException(\"Illegal type [\" + type + \"]\");\n }\n- stats.type = type;\n stats.readFrom(in);\n return stats;\n }", "filename": "core/src/main/java/org/elasticsearch/action/fieldstats/FieldStats.java", "status": "modified" }, { "diff": "@@ -0,0 +1,81 @@\n+/*\n+ * Licensed to Elasticsearch 
under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper;\n+\n+import org.apache.lucene.analysis.Analyzer;\n+import org.apache.lucene.analysis.TokenStream;\n+import org.apache.lucene.document.FieldType;\n+import org.apache.lucene.index.DocValuesType;\n+import org.apache.lucene.index.IndexableField;\n+import org.apache.lucene.index.IndexableFieldType;\n+\n+import java.io.Reader;\n+\n+// used for binary and geo fields\n+public abstract class CustomDocValuesField implements IndexableField {\n+\n+ public static final FieldType TYPE = new FieldType();\n+ static {\n+ TYPE.setDocValuesType(DocValuesType.BINARY);\n+ TYPE.freeze();\n+ }\n+\n+ private final String name;\n+\n+ public CustomDocValuesField(String name) {\n+ this.name = name;\n+ }\n+\n+ @Override\n+ public String name() {\n+ return name;\n+ }\n+\n+ @Override\n+ public IndexableFieldType fieldType() {\n+ return TYPE;\n+ }\n+\n+ @Override\n+ public float boost() {\n+ return 1f;\n+ }\n+\n+ @Override\n+ public String stringValue() {\n+ return null;\n+ }\n+\n+ @Override\n+ public Reader readerValue() {\n+ return null;\n+ }\n+\n+ @Override\n+ public Number numericValue() {\n+ return null;\n+ }\n+\n+ @Override\n+ public TokenStream tokenStream(Analyzer analyzer, TokenStream reuse) {\n+ return null;\n+ }\n+\n+}", "filename": "core/src/main/java/org/elasticsearch/index/mapper/CustomDocValuesField.java", "status": "added" }, { "diff": "@@ -29,6 +29,7 @@\n import org.apache.lucene.document.Field;\n import org.apache.lucene.index.IndexableField;\n import org.apache.lucene.util.CloseableThreadLocal;\n+import org.elasticsearch.Version;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.xcontent.XContentHelper;\n@@ -37,13 +38,14 @@\n import org.elasticsearch.index.mapper.core.BinaryFieldMapper;\n import org.elasticsearch.index.mapper.core.BooleanFieldMapper;\n import org.elasticsearch.index.mapper.core.DateFieldMapper;\n-import org.elasticsearch.index.mapper.core.DateFieldMapper.DateFieldType;\n-import org.elasticsearch.index.mapper.core.DoubleFieldMapper;\n-import org.elasticsearch.index.mapper.core.FloatFieldMapper;\n-import org.elasticsearch.index.mapper.core.IntegerFieldMapper;\n+import org.elasticsearch.index.mapper.core.LegacyDateFieldMapper;\n+import org.elasticsearch.index.mapper.core.LegacyDoubleFieldMapper;\n+import org.elasticsearch.index.mapper.core.LegacyFloatFieldMapper;\n+import org.elasticsearch.index.mapper.core.LegacyIntegerFieldMapper;\n import org.elasticsearch.index.mapper.core.KeywordFieldMapper;\n import org.elasticsearch.index.mapper.core.KeywordFieldMapper.KeywordFieldType;\n-import org.elasticsearch.index.mapper.core.LongFieldMapper;\n+import 
org.elasticsearch.index.mapper.core.LegacyLongFieldMapper;\n+import org.elasticsearch.index.mapper.core.NumberFieldMapper;\n import org.elasticsearch.index.mapper.core.StringFieldMapper;\n import org.elasticsearch.index.mapper.core.StringFieldMapper.StringFieldType;\n import org.elasticsearch.index.mapper.core.TextFieldMapper;\n@@ -622,44 +624,93 @@ private static Mapper.Builder<?,?> createBuilderFromFieldType(final ParseContext\n if (builder == null) {\n builder = new KeywordFieldMapper.Builder(currentFieldName);\n }\n- } else if (fieldType instanceof DateFieldType) {\n- builder = context.root().findTemplateBuilder(context, currentFieldName, \"date\");\n- if (builder == null) {\n- builder = new DateFieldMapper.Builder(currentFieldName);\n- }\n- } else if (fieldType.numericType() != null) {\n- switch (fieldType.numericType()) {\n- case LONG:\n+ } else {\n+ switch (fieldType.typeName()) {\n+ case \"date\":\n+ builder = context.root().findTemplateBuilder(context, currentFieldName, \"date\");\n+ if (builder == null) {\n+ builder = newDateBuilder(currentFieldName, null, Version.indexCreated(context.indexSettings()));\n+ }\n+ break;\n+ case \"long\":\n builder = context.root().findTemplateBuilder(context, currentFieldName, \"long\");\n if (builder == null) {\n- builder = new LongFieldMapper.Builder(currentFieldName);\n+ builder = newLongBuilder(currentFieldName, Version.indexCreated(context.indexSettings()));\n }\n break;\n- case DOUBLE:\n+ case \"double\":\n builder = context.root().findTemplateBuilder(context, currentFieldName, \"double\");\n if (builder == null) {\n- builder = new DoubleFieldMapper.Builder(currentFieldName);\n+ builder = newDoubleBuilder(currentFieldName, Version.indexCreated(context.indexSettings()));\n }\n break;\n- case INT:\n+ case \"integer\":\n builder = context.root().findTemplateBuilder(context, currentFieldName, \"integer\");\n if (builder == null) {\n- builder = new IntegerFieldMapper.Builder(currentFieldName);\n+ builder = newIntBuilder(currentFieldName, Version.indexCreated(context.indexSettings()));\n }\n break;\n- case FLOAT:\n+ case \"float\":\n builder = context.root().findTemplateBuilder(context, currentFieldName, \"float\");\n if (builder == null) {\n- builder = new FloatFieldMapper.Builder(currentFieldName);\n+ builder = newFloatBuilder(currentFieldName, Version.indexCreated(context.indexSettings()));\n }\n break;\n default:\n- throw new AssertionError(\"Unexpected numeric type \" + fieldType.numericType());\n+ break;\n }\n }\n return builder;\n }\n \n+ private static Mapper.Builder<?, ?> newLongBuilder(String name, Version indexCreated) {\n+ if (indexCreated.onOrAfter(Version.V_5_0_0)) {\n+ return new NumberFieldMapper.Builder(name, NumberFieldMapper.NumberType.LONG);\n+ } else {\n+ return new LegacyLongFieldMapper.Builder(name);\n+ }\n+ }\n+\n+ private static Mapper.Builder<?, ?> newIntBuilder(String name, Version indexCreated) {\n+ if (indexCreated.onOrAfter(Version.V_5_0_0)) {\n+ return new NumberFieldMapper.Builder(name, NumberFieldMapper.NumberType.INTEGER);\n+ } else {\n+ return new LegacyIntegerFieldMapper.Builder(name);\n+ }\n+ }\n+\n+ private static Mapper.Builder<?, ?> newDoubleBuilder(String name, Version indexCreated) {\n+ if (indexCreated.onOrAfter(Version.V_5_0_0)) {\n+ return new NumberFieldMapper.Builder(name, NumberFieldMapper.NumberType.DOUBLE);\n+ } else {\n+ return new LegacyDoubleFieldMapper.Builder(name);\n+ }\n+ }\n+\n+ private static Mapper.Builder<?, ?> newFloatBuilder(String name, Version indexCreated) {\n+ if 
(indexCreated.onOrAfter(Version.V_5_0_0)) {\n+ return new NumberFieldMapper.Builder(name, NumberFieldMapper.NumberType.FLOAT);\n+ } else {\n+ return new LegacyFloatFieldMapper.Builder(name);\n+ }\n+ }\n+\n+ private static Mapper.Builder<?, ?> newDateBuilder(String name, FormatDateTimeFormatter dateTimeFormatter, Version indexCreated) {\n+ if (indexCreated.onOrAfter(Version.V_5_0_0)) {\n+ DateFieldMapper.Builder builder = new DateFieldMapper.Builder(name);\n+ if (dateTimeFormatter != null) {\n+ builder.dateTimeFormatter(dateTimeFormatter);\n+ }\n+ return builder;\n+ } else {\n+ LegacyDateFieldMapper.Builder builder = new LegacyDateFieldMapper.Builder(name);\n+ if (dateTimeFormatter != null) {\n+ builder.dateTimeFormatter(dateTimeFormatter);\n+ }\n+ return builder;\n+ }\n+ }\n+\n private static Mapper.Builder<?,?> createBuilderFromDynamicValue(final ParseContext context, XContentParser.Token token, String currentFieldName) throws IOException {\n if (token == XContentParser.Token.VALUE_STRING) {\n // do a quick test to see if its fits a dynamic template, if so, use it.\n@@ -681,7 +732,7 @@ private static Mapper.Builder<?,?> createBuilderFromDynamicValue(final ParseCont\n dateTimeFormatter.parser().parseMillis(text);\n Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"date\");\n if (builder == null) {\n- builder = new DateFieldMapper.Builder(currentFieldName).dateTimeFormatter(dateTimeFormatter);\n+ builder = newDateBuilder(currentFieldName, dateTimeFormatter, Version.indexCreated(context.indexSettings()));\n }\n return builder;\n } catch (Exception e) {\n@@ -696,7 +747,7 @@ private static Mapper.Builder<?,?> createBuilderFromDynamicValue(final ParseCont\n Long.parseLong(text);\n Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"long\");\n if (builder == null) {\n- builder = new LongFieldMapper.Builder(currentFieldName);\n+ builder = newLongBuilder(currentFieldName, Version.indexCreated(context.indexSettings()));\n }\n return builder;\n } catch (NumberFormatException e) {\n@@ -706,7 +757,7 @@ private static Mapper.Builder<?,?> createBuilderFromDynamicValue(final ParseCont\n Double.parseDouble(text);\n Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"double\");\n if (builder == null) {\n- builder = new FloatFieldMapper.Builder(currentFieldName);\n+ builder = newFloatBuilder(currentFieldName, Version.indexCreated(context.indexSettings()));\n }\n return builder;\n } catch (NumberFormatException e) {\n@@ -724,7 +775,7 @@ private static Mapper.Builder<?,?> createBuilderFromDynamicValue(final ParseCont\n if (numberType == XContentParser.NumberType.INT || numberType == XContentParser.NumberType.LONG) {\n Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"long\");\n if (builder == null) {\n- builder = new LongFieldMapper.Builder(currentFieldName);\n+ builder = newLongBuilder(currentFieldName, Version.indexCreated(context.indexSettings()));\n }\n return builder;\n } else if (numberType == XContentParser.NumberType.FLOAT || numberType == XContentParser.NumberType.DOUBLE) {\n@@ -733,7 +784,7 @@ private static Mapper.Builder<?,?> createBuilderFromDynamicValue(final ParseCont\n // no templates are defined, we use float by default instead of double\n // since this is much more space-efficient and should be enough most of\n // the time\n- builder = new FloatFieldMapper.Builder(currentFieldName);\n+ builder = newFloatBuilder(currentFieldName, 
Version.indexCreated(context.indexSettings()));\n }\n return builder;\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java", "status": "modified" }, { "diff": "@@ -31,7 +31,6 @@\n import org.apache.lucene.search.MultiTermQuery;\n import org.apache.lucene.search.PrefixQuery;\n import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.RegexpQuery;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.search.TermRangeQuery;\n import org.apache.lucene.search.BoostQuery;\n@@ -358,15 +357,7 @@ public Query prefixQuery(String value, @Nullable MultiTermQuery.RewriteMethod me\n }\n \n public Query regexpQuery(String value, int flags, int maxDeterminizedStates, @Nullable MultiTermQuery.RewriteMethod method, @Nullable QueryShardContext context) {\n- if (numericType() != null) {\n- throw new QueryShardException(context, \"Cannot use regular expression to filter numeric field [\" + name + \"]\");\n- }\n-\n- RegexpQuery query = new RegexpQuery(new Term(name(), indexedValueForSearch(value)), flags, maxDeterminizedStates);\n- if (method != null) {\n- query.setRewriteMethod(method);\n- }\n- return query;\n+ throw new QueryShardException(context, \"Can only use regular expression on keyword and text fields - not on [\" + name + \"] which is of type [\" + typeName() + \"]\");\n }\n \n public Query nullValueQuery() {", "filename": "core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java", "status": "modified" }, { "diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.fielddata.IndexFieldData;\n import org.elasticsearch.index.fielddata.plain.BytesBinaryDVIndexFieldData;\n+import org.elasticsearch.index.mapper.CustomDocValuesField;\n import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.Mapper;\n@@ -178,7 +179,7 @@ protected String contentType() {\n return CONTENT_TYPE;\n }\n \n- public static class CustomBinaryDocValuesField extends NumberFieldMapper.CustomNumericDocValuesField {\n+ public static class CustomBinaryDocValuesField extends CustomDocValuesField {\n \n private final ObjectArrayList<byte[]> bytesList;\n ", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/BinaryFieldMapper.java", "status": "modified" }, { "diff": "@@ -20,38 +20,38 @@\n package org.elasticsearch.index.mapper.core;\n \n import org.apache.lucene.document.Field;\n+import org.apache.lucene.document.LongPoint;\n+import org.apache.lucene.document.SortedNumericDocValuesField;\n+import org.apache.lucene.document.StoredField;\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.IndexReader;\n-import org.apache.lucene.index.MultiFields;\n-import org.apache.lucene.index.Terms;\n-import org.apache.lucene.search.LegacyNumericRangeQuery;\n+import org.apache.lucene.index.PointValues;\n+import org.apache.lucene.search.BoostQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.util.BytesRef;\n-import org.apache.lucene.util.BytesRefBuilder;\n-import org.apache.lucene.util.LegacyNumericUtils;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.fieldstats.FieldStats;\n import org.elasticsearch.common.Explicit;\n import org.elasticsearch.common.Nullable;\n-import org.elasticsearch.common.Numbers;\n-import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.joda.DateMathParser;\n import 
org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n+import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.Fuzziness;\n import org.elasticsearch.common.util.LocaleUtils;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.index.analysis.NamedAnalyzer;\n import org.elasticsearch.index.fielddata.IndexFieldData;\n import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType;\n import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData;\n+import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext;\n-import org.elasticsearch.index.mapper.core.LongFieldMapper.CustomLongNumericField;\n+import org.elasticsearch.index.mapper.core.LegacyNumberFieldMapper.Defaults;\n+import org.elasticsearch.index.mapper.internal.AllFieldMapper;\n+import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.internal.SearchContext;\n import org.joda.time.DateTimeZone;\n@@ -63,37 +63,24 @@\n import java.util.Map;\n import java.util.Objects;\n import java.util.concurrent.Callable;\n-import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.index.mapper.core.TypeParsers.parseDateTimeFormatter;\n-import static org.elasticsearch.index.mapper.core.TypeParsers.parseNumberField;\n \n-public class DateFieldMapper extends NumberFieldMapper {\n+/** A {@link FieldMapper} for ip addresses. 
*/\n+public class DateFieldMapper extends FieldMapper implements AllFieldMapper.IncludeInAll {\n \n public static final String CONTENT_TYPE = \"date\";\n+ public static final FormatDateTimeFormatter DEFAULT_DATE_TIME_FORMATTER = Joda.forPattern(\n+ \"strict_date_optional_time||epoch_millis\", Locale.ROOT);\n \n- public static class Defaults extends NumberFieldMapper.Defaults {\n- public static final FormatDateTimeFormatter DATE_TIME_FORMATTER = Joda.forPattern(\"strict_date_optional_time||epoch_millis\", Locale.ROOT);\n- public static final TimeUnit TIME_UNIT = TimeUnit.MILLISECONDS;\n- public static final DateFieldType FIELD_TYPE = new DateFieldType();\n-\n- static {\n- FIELD_TYPE.freeze();\n- }\n-\n- public static final String NULL_VALUE = null;\n- }\n-\n- public static class Builder extends NumberFieldMapper.Builder<Builder, DateFieldMapper> {\n-\n- protected String nullValue = Defaults.NULL_VALUE;\n+ public static class Builder extends FieldMapper.Builder<Builder, DateFieldMapper> {\n \n+ private Boolean ignoreMalformed;\n private Locale locale;\n \n public Builder(String name) {\n- super(name, Defaults.FIELD_TYPE, Defaults.PRECISION_STEP_64_BIT);\n+ super(name, new DateFieldType(), new DateFieldType());\n builder = this;\n- // do *NOT* rely on the default locale\n locale = Locale.ROOT;\n }\n \n@@ -102,86 +89,89 @@ public DateFieldType fieldType() {\n return (DateFieldType)fieldType;\n }\n \n- public Builder timeUnit(TimeUnit timeUnit) {\n- fieldType().setTimeUnit(timeUnit);\n- return this;\n+ public Builder ignoreMalformed(boolean ignoreMalformed) {\n+ this.ignoreMalformed = ignoreMalformed;\n+ return builder;\n }\n \n- public Builder nullValue(String nullValue) {\n- this.nullValue = nullValue;\n- return this;\n+ protected Explicit<Boolean> ignoreMalformed(BuilderContext context) {\n+ if (ignoreMalformed != null) {\n+ return new Explicit<>(ignoreMalformed, true);\n+ }\n+ if (context.indexSettings() != null) {\n+ return new Explicit<>(IGNORE_MALFORMED_SETTING.get(context.indexSettings()), false);\n+ }\n+ return Defaults.IGNORE_MALFORMED;\n }\n \n public Builder dateTimeFormatter(FormatDateTimeFormatter dateTimeFormatter) {\n fieldType().setDateTimeFormatter(dateTimeFormatter);\n return this;\n }\n \n- @Override\n- public DateFieldMapper build(BuilderContext context) {\n- setupFieldType(context);\n- fieldType.setNullValue(nullValue);\n- DateFieldMapper fieldMapper = new DateFieldMapper(name, fieldType, defaultFieldType, ignoreMalformed(context),\n- coerce(context), context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo);\n- return (DateFieldMapper) fieldMapper.includeInAll(includeInAll);\n+ public void locale(Locale locale) {\n+ this.locale = locale;\n }\n \n @Override\n protected void setupFieldType(BuilderContext context) {\n+ super.setupFieldType(context);\n FormatDateTimeFormatter dateTimeFormatter = fieldType().dateTimeFormatter;\n if (!locale.equals(dateTimeFormatter.locale())) {\n- fieldType().setDateTimeFormatter(new FormatDateTimeFormatter(dateTimeFormatter.format(), dateTimeFormatter.parser(), dateTimeFormatter.printer(), locale));\n+ fieldType().setDateTimeFormatter( new FormatDateTimeFormatter(dateTimeFormatter.format(),\n+ dateTimeFormatter.parser(), dateTimeFormatter.printer(), locale));\n }\n- super.setupFieldType(context);\n- }\n-\n- public Builder locale(Locale locale) {\n- this.locale = locale;\n- return this;\n }\n \n @Override\n- protected int maxPrecisionStep() {\n- return 64;\n+ public DateFieldMapper build(BuilderContext context) {\n+ 
setupFieldType(context);\n+ DateFieldMapper fieldMapper = new DateFieldMapper(name, fieldType, defaultFieldType, ignoreMalformed(context),\n+ context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo);\n+ return (DateFieldMapper) fieldMapper.includeInAll(includeInAll);\n }\n }\n \n public static class TypeParser implements Mapper.TypeParser {\n+\n+ public TypeParser() {\n+ }\n+\n @Override\n- public Mapper.Builder<?, ?> parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n- DateFieldMapper.Builder builder = new DateFieldMapper.Builder(name);\n- parseNumberField(builder, name, node, parserContext);\n- boolean configuredFormat = false;\n+ public Mapper.Builder<?,?> parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n+ if (parserContext.indexVersionCreated().before(Version.V_5_0_0)) {\n+ return new LegacyDateFieldMapper.TypeParser().parse(name, node, parserContext);\n+ }\n+ Builder builder = new Builder(name);\n+ TypeParsers.parseField(builder, name, node, parserContext);\n for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {\n Map.Entry<String, Object> entry = iterator.next();\n- String propName = Strings.toUnderscoreCase(entry.getKey());\n+ String propName = entry.getKey();\n Object propNode = entry.getValue();\n if (propName.equals(\"null_value\")) {\n if (propNode == null) {\n throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n }\n- builder.nullValue(propNode.toString());\n+ builder.nullValue(InetAddresses.forString(propNode.toString()));\n iterator.remove();\n- } else if (propName.equals(\"format\")) {\n- builder.dateTimeFormatter(parseDateTimeFormatter(propNode));\n- configuredFormat = true;\n- iterator.remove();\n- } else if (propName.equals(\"numeric_resolution\")) {\n- builder.timeUnit(TimeUnit.valueOf(propNode.toString().toUpperCase(Locale.ROOT)));\n+ } else if (propName.equals(\"ignore_malformed\")) {\n+ builder.ignoreMalformed(TypeParsers.nodeBooleanValue(\"ignore_malformed\", propNode, parserContext));\n iterator.remove();\n } else if (propName.equals(\"locale\")) {\n builder.locale(LocaleUtils.parse(propNode.toString()));\n iterator.remove();\n+ } else if (propName.equals(\"format\")) {\n+ builder.dateTimeFormatter(parseDateTimeFormatter(propNode));\n+ iterator.remove();\n+ } else if (TypeParsers.parseMultiField(builder, name, parserContext, propName, propNode)) {\n+ iterator.remove();\n }\n }\n- if (!configuredFormat) {\n- builder.dateTimeFormatter(Defaults.DATE_TIME_FORMATTER);\n- }\n return builder;\n }\n }\n \n- public static class DateFieldType extends NumberFieldType {\n+ public static final class DateFieldType extends MappedFieldType {\n \n final class LateParsingQuery extends Query {\n \n@@ -192,7 +182,8 @@ final class LateParsingQuery extends Query {\n final DateTimeZone timeZone;\n final DateMathParser forcedDateParser;\n \n- public LateParsingQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, DateTimeZone timeZone, DateMathParser forcedDateParser) {\n+ public LateParsingQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper,\n+ DateTimeZone timeZone, DateMathParser forcedDateParser) {\n this.lowerTerm = lowerTerm;\n this.upperTerm = upperTerm;\n this.includeLower = includeLower;\n@@ -244,23 +235,24 @@ public String toString(String s) {\n }\n }\n \n- protected FormatDateTimeFormatter dateTimeFormatter = 
Defaults.DATE_TIME_FORMATTER;\n- protected TimeUnit timeUnit = Defaults.TIME_UNIT;\n- protected DateMathParser dateMathParser = new DateMathParser(dateTimeFormatter);\n+ protected FormatDateTimeFormatter dateTimeFormatter;\n+ protected DateMathParser dateMathParser;\n \n- public DateFieldType() {\n- super(LegacyNumericType.LONG);\n+ DateFieldType() {\n+ super();\n+ setTokenized(false);\n+ setHasDocValues(true);\n+ setOmitNorms(true);\n+ setDateTimeFormatter(DEFAULT_DATE_TIME_FORMATTER);\n }\n \n- protected DateFieldType(DateFieldType ref) {\n- super(ref);\n- this.dateTimeFormatter = ref.dateTimeFormatter;\n- this.timeUnit = ref.timeUnit;\n- this.dateMathParser = ref.dateMathParser;\n+ DateFieldType(DateFieldType other) {\n+ super(other);\n+ setDateTimeFormatter(other.dateTimeFormatter);\n }\n \n @Override\n- public DateFieldType clone() {\n+ public MappedFieldType clone() {\n return new DateFieldType(this);\n }\n \n@@ -269,13 +261,12 @@ public boolean equals(Object o) {\n if (!super.equals(o)) return false;\n DateFieldType that = (DateFieldType) o;\n return Objects.equals(dateTimeFormatter.format(), that.dateTimeFormatter.format()) &&\n- Objects.equals(dateTimeFormatter.locale(), that.dateTimeFormatter.locale()) &&\n- Objects.equals(timeUnit, that.timeUnit);\n+ Objects.equals(dateTimeFormatter.locale(), that.dateTimeFormatter.locale());\n }\n \n @Override\n public int hashCode() {\n- return Objects.hash(super.hashCode(), dateTimeFormatter.format(), timeUnit);\n+ return Objects.hash(super.hashCode(), dateTimeFormatter.format(), dateTimeFormatter.locale());\n }\n \n @Override\n@@ -289,13 +280,12 @@ public void checkCompatibility(MappedFieldType fieldType, List<String> conflicts\n if (strict) {\n DateFieldType other = (DateFieldType)fieldType;\n if (Objects.equals(dateTimeFormatter().format(), other.dateTimeFormatter().format()) == false) {\n- conflicts.add(\"mapper [\" + name() + \"] is used by multiple types. Set update_all_types to true to update [format] across all types.\");\n+ conflicts.add(\"mapper [\" + name()\n+ + \"] is used by multiple types. Set update_all_types to true to update [format] across all types.\");\n }\n if (Objects.equals(dateTimeFormatter().locale(), other.dateTimeFormatter().locale()) == false) {\n- conflicts.add(\"mapper [\" + name() + \"] is used by multiple types. Set update_all_types to true to update [locale] across all types.\");\n- }\n- if (Objects.equals(timeUnit(), other.timeUnit()) == false) {\n- conflicts.add(\"mapper [\" + name() + \"] is used by multiple types. Set update_all_types to true to update [numeric_resolution] across all types.\");\n+ conflicts.add(\"mapper [\" + name()\n+ + \"] is used by multiple types. 
Set update_all_types to true to update [locale] across all types.\");\n }\n }\n }\n@@ -310,94 +300,111 @@ public void setDateTimeFormatter(FormatDateTimeFormatter dateTimeFormatter) {\n this.dateMathParser = new DateMathParser(dateTimeFormatter);\n }\n \n- public TimeUnit timeUnit() {\n- return timeUnit;\n- }\n-\n- public void setTimeUnit(TimeUnit timeUnit) {\n- checkIfFrozen();\n- this.timeUnit = timeUnit;\n- this.dateMathParser = new DateMathParser(dateTimeFormatter);\n- }\n-\n protected DateMathParser dateMathParser() {\n return dateMathParser;\n }\n \n- private long parseValue(Object value) {\n- if (value instanceof Number) {\n- return ((Number) value).longValue();\n- }\n- if (value instanceof BytesRef) {\n- return dateTimeFormatter().parser().parseMillis(((BytesRef) value).utf8ToString());\n- }\n- return dateTimeFormatter().parser().parseMillis(value.toString());\n- }\n-\n- protected long parseStringValue(String value) {\n+ long parse(String value) {\n return dateTimeFormatter().parser().parseMillis(value);\n }\n \n @Override\n- public BytesRef indexedValueForSearch(Object value) {\n- BytesRefBuilder bytesRef = new BytesRefBuilder();\n- LegacyNumericUtils.longToPrefixCoded(parseValue(value), 0, bytesRef); // 0 because of exact match\n- return bytesRef.get();\n+ public Query termQuery(Object value, @Nullable QueryShardContext context) {\n+ Query query = innerRangeQuery(value, value, true, true, null, null);\n+ if (boost() != 1f) {\n+ query = new BoostQuery(query, boost());\n+ }\n+ return query;\n }\n \n @Override\n- public Object valueForSearch(Object value) {\n- Long val = (Long) value;\n- if (val == null) {\n- return null;\n+ public Query fuzzyQuery(Object value, Fuzziness fuzziness, int prefixLength, int maxExpansions, boolean transpositions) {\n+ long baseLo = parseToMilliseconds(value, false, null, dateMathParser);\n+ long baseHi = parseToMilliseconds(value, true, null, dateMathParser);\n+ long delta;\n+ try {\n+ delta = fuzziness.asTimeValue().millis();\n+ } catch (Exception e) {\n+ // not a time format\n+ delta = fuzziness.asLong();\n }\n- return dateTimeFormatter().printer().print(val);\n+ return LongPoint.newRangeQuery(name(), baseLo - delta, baseHi + delta);\n }\n \n @Override\n public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) {\n return rangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, null, null);\n }\n \n- @Override\n- public Query fuzzyQuery(Object value, Fuzziness fuzziness, int prefixLength, int maxExpansions, boolean transpositions) {\n- long iValue = parseValue(value);\n- long iSim;\n- try {\n- iSim = fuzziness.asTimeValue().millis();\n- } catch (Exception e) {\n- // not a time format\n- iSim = fuzziness.asLong();\n+ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper,\n+ @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser) {\n+ return new LateParsingQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser);\n+ }\n+\n+ Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper,\n+ @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser) {\n+ DateMathParser parser = forcedDateParser == null\n+ ? 
dateMathParser\n+ : forcedDateParser;\n+ long l, u;\n+ if (lowerTerm == null) {\n+ l = Long.MIN_VALUE;\n+ } else {\n+ l = parseToMilliseconds(lowerTerm, !includeLower, timeZone, parser);\n+ if (includeLower == false) {\n+ ++l;\n+ }\n }\n- return LegacyNumericRangeQuery.newLongRange(name(), numericPrecisionStep(),\n- iValue - iSim,\n- iValue + iSim,\n- true, true);\n+ if (upperTerm == null) {\n+ u = Long.MAX_VALUE;\n+ } else {\n+ u = parseToMilliseconds(upperTerm, includeUpper, timeZone, parser);\n+ if (includeUpper == false) {\n+ --u;\n+ }\n+ }\n+ return LongPoint.newRangeQuery(name(), l, u);\n }\n \n- @Override\n- public FieldStats stats(IndexReader reader) throws IOException {\n- int maxDoc = reader.maxDoc();\n- Terms terms = org.apache.lucene.index.MultiFields.getTerms(reader, name());\n- if (terms == null) {\n- return null;\n+ public long parseToMilliseconds(Object value, boolean roundUp,\n+ @Nullable DateTimeZone zone, @Nullable DateMathParser forcedDateParser) {\n+ DateMathParser dateParser = dateMathParser();\n+ if (forcedDateParser != null) {\n+ dateParser = forcedDateParser;\n+ }\n+\n+ String strValue;\n+ if (value instanceof BytesRef) {\n+ strValue = ((BytesRef) value).utf8ToString();\n+ } else {\n+ strValue = value.toString();\n }\n- long minValue = LegacyNumericUtils.getMinLong(terms);\n- long maxValue = LegacyNumericUtils.getMaxLong(terms);\n- return new FieldStats.Date(\n- maxDoc, terms.getDocCount(), terms.getSumDocFreq(), terms.getSumTotalTermFreq(), minValue, maxValue, dateTimeFormatter()\n- );\n+ return dateParser.parse(strValue, now(), roundUp, zone);\n }\n \n- public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser) {\n- return new LateParsingQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser);\n+ private static Callable<Long> now() {\n+ return () -> {\n+ final SearchContext context = SearchContext.current();\n+ return context != null\n+ ? context.nowInMillis()\n+ : System.currentTimeMillis();\n+ };\n }\n \n- private Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser) {\n- return LegacyNumericRangeQuery.newLongRange(name(), numericPrecisionStep(),\n- lowerTerm == null ? null : parseToMilliseconds(lowerTerm, !includeLower, timeZone, forcedDateParser == null ? dateMathParser : forcedDateParser),\n- upperTerm == null ? null : parseToMilliseconds(upperTerm, includeUpper, timeZone, forcedDateParser == null ? 
dateMathParser : forcedDateParser),\n- includeLower, includeUpper);\n+ @Override\n+ public FieldStats.Date stats(IndexReader reader) throws IOException {\n+ String field = name();\n+ long size = PointValues.size(reader, field);\n+ if (size == 0) {\n+ return null;\n+ }\n+ int docCount = PointValues.getDocCount(reader, field);\n+ byte[] min = PointValues.getMinPackedValue(reader, field);\n+ byte[] max = PointValues.getMaxPackedValue(reader, field);\n+ return new FieldStats.Date(reader.maxDoc(),docCount, -1L, size,\n+ LongPoint.decodeDimension(min, 0),\n+ LongPoint.decodeDimension(max, 0),\n+ dateTimeFormatter());\n }\n \n @Override\n@@ -409,14 +416,13 @@ public Relation isFieldWithinQuery(IndexReader reader,\n dateParser = this.dateMathParser;\n }\n \n- Terms terms = org.apache.lucene.index.MultiFields.getTerms(reader, name());\n- if (terms == null) {\n- // no terms, so nothing matches\n+ if (PointValues.size(reader, name()) == 0) {\n+ // no points, so nothing matches\n return Relation.DISJOINT;\n }\n \n- long minValue = LegacyNumericUtils.getMinLong(terms);\n- long maxValue = LegacyNumericUtils.getMaxLong(terms);\n+ long minValue = LongPoint.decodeDimension(PointValues.getMinPackedValue(reader, name()), 0);\n+ long maxValue = LongPoint.decodeDimension(PointValues.getMaxPackedValue(reader, name()), 0);\n \n long fromInclusive = Long.MIN_VALUE;\n if (from != null) {\n@@ -449,31 +455,21 @@ public Relation isFieldWithinQuery(IndexReader reader,\n }\n }\n \n- public long parseToMilliseconds(Object value, boolean inclusive, @Nullable DateTimeZone zone, @Nullable DateMathParser forcedDateParser) {\n- if (value instanceof Long) {\n- return ((Long) value).longValue();\n- }\n-\n- DateMathParser dateParser = dateMathParser();\n- if (forcedDateParser != null) {\n- dateParser = forcedDateParser;\n- }\n-\n- String strValue;\n- if (value instanceof BytesRef) {\n- strValue = ((BytesRef) value).utf8ToString();\n- } else {\n- strValue = value.toString();\n- }\n- return dateParser.parse(strValue, now(), inclusive, zone);\n- }\n-\n @Override\n public IndexFieldData.Builder fielddataBuilder() {\n failIfNoDocValues();\n return new DocValuesIndexFieldData.Builder().numericType(NumericType.LONG);\n }\n \n+ @Override\n+ public Object valueForSearch(Object value) {\n+ Long val = (Long) value;\n+ if (val == null) {\n+ return null;\n+ }\n+ return dateTimeFormatter().printer().print(val);\n+ }\n+\n @Override\n public DocValueFormat docValueFormat(@Nullable String format, DateTimeZone timeZone) {\n FormatDateTimeFormatter dateTimeFormatter = this.dateTimeFormatter;\n@@ -487,131 +483,147 @@ public DocValueFormat docValueFormat(@Nullable String format, DateTimeZone timeZ\n }\n }\n \n- protected DateFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, Explicit<Boolean> ignoreMalformed,Explicit<Boolean> coerce,\n- Settings indexSettings, MultiFields multiFields, CopyTo copyTo) {\n- super(simpleName, fieldType, defaultFieldType, ignoreMalformed, coerce, indexSettings, multiFields, copyTo);\n+ private Boolean includeInAll;\n+\n+ private Explicit<Boolean> ignoreMalformed;\n+\n+ private DateFieldMapper(\n+ String simpleName,\n+ MappedFieldType fieldType,\n+ MappedFieldType defaultFieldType,\n+ Explicit<Boolean> ignoreMalformed,\n+ Settings indexSettings,\n+ MultiFields multiFields,\n+ CopyTo copyTo) {\n+ super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo);\n+ this.ignoreMalformed = ignoreMalformed;\n }\n \n @Override\n public DateFieldType fieldType() {\n 
return (DateFieldType) super.fieldType();\n }\n \n- private static Callable<Long> now() {\n- return new Callable<Long>() {\n- @Override\n- public Long call() {\n- final SearchContext context = SearchContext.current();\n- return context != null\n- ? context.nowInMillis()\n- : System.currentTimeMillis();\n- }\n- };\n+ @Override\n+ protected String contentType() {\n+ return fieldType.typeName();\n }\n \n @Override\n- protected boolean customBoost() {\n- return true;\n+ protected DateFieldMapper clone() {\n+ return (DateFieldMapper) super.clone();\n }\n \n @Override\n- protected void innerParseCreateField(ParseContext context, List<Field> fields) throws IOException {\n- String dateAsString = null;\n- float boost = fieldType().boost();\n+ public Mapper includeInAll(Boolean includeInAll) {\n+ if (includeInAll != null) {\n+ DateFieldMapper clone = clone();\n+ clone.includeInAll = includeInAll;\n+ return clone;\n+ } else {\n+ return this;\n+ }\n+ }\n+\n+ @Override\n+ public Mapper includeInAllIfNotSet(Boolean includeInAll) {\n+ if (includeInAll != null && this.includeInAll == null) {\n+ DateFieldMapper clone = clone();\n+ clone.includeInAll = includeInAll;\n+ return clone;\n+ } else {\n+ return this;\n+ }\n+ }\n+\n+ @Override\n+ public Mapper unsetIncludeInAll() {\n+ if (includeInAll != null) {\n+ DateFieldMapper clone = clone();\n+ clone.includeInAll = null;\n+ return clone;\n+ } else {\n+ return this;\n+ }\n+ }\n+\n+ @Override\n+ protected void parseCreateField(ParseContext context, List<Field> fields) throws IOException {\n+ String dateAsString;\n if (context.externalValueSet()) {\n- Object externalValue = context.externalValue();\n- dateAsString = (String) externalValue;\n- if (dateAsString == null) {\n- dateAsString = fieldType().nullValueAsString();\n+ Object dateAsObject = context.externalValue();\n+ if (dateAsObject == null) {\n+ dateAsString = null;\n+ } else {\n+ dateAsString = dateAsObject.toString();\n }\n } else {\n- XContentParser parser = context.parser();\n- XContentParser.Token token = parser.currentToken();\n- if (token == XContentParser.Token.VALUE_NULL) {\n- dateAsString = fieldType().nullValueAsString();\n- } else if (token == XContentParser.Token.VALUE_NUMBER) {\n- dateAsString = parser.text();\n- } else if (token == XContentParser.Token.START_OBJECT\n- && Version.indexCreated(context.indexSettings()).before(Version.V_5_0_0_alpha1)) {\n- String currentFieldName = null;\n- while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n- if (token == XContentParser.Token.FIELD_NAME) {\n- currentFieldName = parser.currentName();\n- } else {\n- if (\"value\".equals(currentFieldName) || \"_value\".equals(currentFieldName)) {\n- if (token == XContentParser.Token.VALUE_NULL) {\n- dateAsString = fieldType().nullValueAsString();\n- } else {\n- dateAsString = parser.text();\n- }\n- } else if (\"boost\".equals(currentFieldName) || \"_boost\".equals(currentFieldName)) {\n- boost = parser.floatValue();\n- } else {\n- throw new IllegalArgumentException(\"unknown property [\" + currentFieldName + \"]\");\n- }\n- }\n- }\n+ dateAsString = context.parser().text();\n+ }\n+\n+ if (dateAsString == null) {\n+ dateAsString = fieldType().nullValueAsString();\n+ }\n+\n+ if (dateAsString == null) {\n+ return;\n+ }\n+\n+ long timestamp;\n+ try {\n+ timestamp = fieldType().parse(dateAsString);\n+ } catch (IllegalArgumentException e) {\n+ if (ignoreMalformed.value()) {\n+ return;\n } else {\n- dateAsString = parser.text();\n+ throw e;\n }\n }\n \n- Long value = null;\n- if (dateAsString != 
null) {\n- if (context.includeInAll(includeInAll, this)) {\n- context.allEntries().addText(fieldType().name(), dateAsString, boost);\n- }\n- value = fieldType().parseStringValue(dateAsString);\n+ if (context.includeInAll(includeInAll, this)) {\n+ context.allEntries().addText(fieldType().name(), dateAsString, fieldType().boost());\n }\n \n- if (value != null) {\n- if (fieldType().indexOptions() != IndexOptions.NONE || fieldType().stored()) {\n- CustomLongNumericField field = new CustomLongNumericField(value, fieldType());\n- if (boost != 1f && Version.indexCreated(context.indexSettings()).before(Version.V_5_0_0_alpha1)) {\n- field.setBoost(boost);\n- }\n- fields.add(field);\n- }\n- if (fieldType().hasDocValues()) {\n- addDocValue(context, fields, value);\n- }\n+ if (fieldType().indexOptions() != IndexOptions.NONE) {\n+ fields.add(new LongPoint(fieldType().name(), timestamp));\n+ }\n+ if (fieldType().hasDocValues()) {\n+ fields.add(new SortedNumericDocValuesField(fieldType().name(), timestamp));\n+ }\n+ if (fieldType().stored()) {\n+ fields.add(new StoredField(fieldType().name(), timestamp));\n }\n }\n \n @Override\n- protected String contentType() {\n- return CONTENT_TYPE;\n+ protected void doMerge(Mapper mergeWith, boolean updateAllTypes) {\n+ super.doMerge(mergeWith, updateAllTypes);\n+ DateFieldMapper other = (DateFieldMapper) mergeWith;\n+ this.includeInAll = other.includeInAll;\n+ if (other.ignoreMalformed.explicit()) {\n+ this.ignoreMalformed = other.ignoreMalformed;\n+ }\n }\n \n @Override\n protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, Params params) throws IOException {\n super.doXContentBody(builder, includeDefaults, params);\n \n- if (includeDefaults || fieldType().numericPrecisionStep() != Defaults.PRECISION_STEP_64_BIT) {\n- builder.field(\"precision_step\", fieldType().numericPrecisionStep());\n- }\n- builder.field(\"format\", fieldType().dateTimeFormatter().format());\n- if (includeDefaults || fieldType().nullValueAsString() != null) {\n- builder.field(\"null_value\", fieldType().nullValueAsString());\n+ if (includeDefaults || ignoreMalformed.explicit()) {\n+ builder.field(\"ignore_malformed\", ignoreMalformed.value());\n }\n if (includeInAll != null) {\n builder.field(\"include_in_all\", includeInAll);\n } else if (includeDefaults) {\n builder.field(\"include_in_all\", false);\n }\n-\n- if (includeDefaults || fieldType().timeUnit() != Defaults.TIME_UNIT) {\n- builder.field(\"numeric_resolution\", fieldType().timeUnit().name().toLowerCase(Locale.ROOT));\n+ if (includeDefaults\n+ || fieldType().dateTimeFormatter().format().equals(DEFAULT_DATE_TIME_FORMATTER.format()) == false) {\n+ builder.field(\"format\", fieldType().dateTimeFormatter().format());\n }\n- // only serialize locale if needed, ROOT is the default, so no need to serialize that case as well...\n- if (fieldType().dateTimeFormatter().locale() != null && fieldType().dateTimeFormatter().locale() != Locale.ROOT) {\n+ if (includeDefaults\n+ || fieldType().dateTimeFormatter().locale() != Locale.ROOT) {\n builder.field(\"locale\", fieldType().dateTimeFormatter().locale());\n- } else if (includeDefaults) {\n- if (fieldType().dateTimeFormatter().locale() == null) {\n- builder.field(\"locale\", Locale.ROOT);\n- } else {\n- builder.field(\"locale\", fieldType().dateTimeFormatter().locale());\n- }\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java", "status": "modified" }, { "diff": "@@ -22,30 +22,33 @@\n import 
org.apache.lucene.document.Field;\n import org.apache.lucene.document.SortedSetDocValuesField;\n import org.apache.lucene.index.IndexOptions;\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.MultiTermQuery;\n import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.RegexpQuery;\n import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.support.XContentMapValues;\n import org.elasticsearch.index.fielddata.IndexFieldData;\n-import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType;\n import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData;\n import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext;\n import org.elasticsearch.index.mapper.internal.AllFieldMapper;\n+import org.elasticsearch.index.query.QueryShardContext;\n \n import java.io.IOException;\n import java.util.Iterator;\n import java.util.List;\n import java.util.Map;\n \n import static org.elasticsearch.index.mapper.core.TypeParsers.parseField;\n-import static org.elasticsearch.index.mapper.core.TypeParsers.parseMultiField;\n \n /**\n * A field mapper for keywords. This mapper accepts strings and indexes them as-is.\n@@ -170,6 +173,16 @@ public IndexFieldData.Builder fielddataBuilder() {\n failIfNoDocValues();\n return new DocValuesIndexFieldData.Builder();\n }\n+\n+ @Override\n+ public Query regexpQuery(String value, int flags, int maxDeterminizedStates,\n+ @Nullable MultiTermQuery.RewriteMethod method, @Nullable QueryShardContext context) {\n+ RegexpQuery query = new RegexpQuery(new Term(name(), indexedValueForSearch(value)), flags, maxDeterminizedStates);\n+ if (method != null) {\n+ query.setRewriteMethod(method);\n+ }\n+ return query;\n+ }\n }\n \n private Boolean includeInAll;", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/KeywordFieldMapper.java", "status": "modified" }, { "diff": "@@ -0,0 +1,617 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper.core;\n+\n+import org.apache.lucene.document.Field;\n+import org.apache.lucene.index.IndexOptions;\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.Terms;\n+import org.apache.lucene.search.LegacyNumericRangeQuery;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.util.BytesRef;\n+import org.apache.lucene.util.BytesRefBuilder;\n+import org.apache.lucene.util.LegacyNumericUtils;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.fieldstats.FieldStats;\n+import org.elasticsearch.common.Explicit;\n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.joda.DateMathParser;\n+import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n+import org.elasticsearch.common.joda.Joda;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.Fuzziness;\n+import org.elasticsearch.common.util.LocaleUtils;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.index.fielddata.IndexFieldData;\n+import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType;\n+import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.Mapper;\n+import org.elasticsearch.index.mapper.MapperParsingException;\n+import org.elasticsearch.index.mapper.ParseContext;\n+import org.elasticsearch.index.mapper.core.LegacyLongFieldMapper.CustomLongNumericField;\n+import org.elasticsearch.search.DocValueFormat;\n+import org.elasticsearch.search.internal.SearchContext;\n+import org.joda.time.DateTimeZone;\n+\n+import java.io.IOException;\n+import java.util.Iterator;\n+import java.util.List;\n+import java.util.Locale;\n+import java.util.Map;\n+import java.util.Objects;\n+import java.util.concurrent.Callable;\n+import java.util.concurrent.TimeUnit;\n+\n+import static org.elasticsearch.index.mapper.core.TypeParsers.parseDateTimeFormatter;\n+import static org.elasticsearch.index.mapper.core.TypeParsers.parseNumberField;\n+\n+public class LegacyDateFieldMapper extends LegacyNumberFieldMapper {\n+\n+ public static final String CONTENT_TYPE = \"date\";\n+\n+ public static class Defaults extends LegacyNumberFieldMapper.Defaults {\n+ public static final FormatDateTimeFormatter DATE_TIME_FORMATTER = Joda.forPattern(\"strict_date_optional_time||epoch_millis\", Locale.ROOT);\n+ public static final TimeUnit TIME_UNIT = TimeUnit.MILLISECONDS;\n+ public static final DateFieldType FIELD_TYPE = new DateFieldType();\n+\n+ static {\n+ FIELD_TYPE.freeze();\n+ }\n+\n+ public static final String NULL_VALUE = null;\n+ }\n+\n+ public static class Builder extends LegacyNumberFieldMapper.Builder<Builder, LegacyDateFieldMapper> {\n+\n+ protected String nullValue = Defaults.NULL_VALUE;\n+\n+ private Locale locale;\n+\n+ public Builder(String name) {\n+ super(name, Defaults.FIELD_TYPE, Defaults.PRECISION_STEP_64_BIT);\n+ builder = this;\n+ // do *NOT* rely on the default locale\n+ locale = Locale.ROOT;\n+ }\n+\n+ @Override\n+ public DateFieldType fieldType() {\n+ return (DateFieldType)fieldType;\n+ }\n+\n+ public Builder timeUnit(TimeUnit timeUnit) {\n+ fieldType().setTimeUnit(timeUnit);\n+ return this;\n+ }\n+\n+ public Builder 
nullValue(String nullValue) {\n+ this.nullValue = nullValue;\n+ return this;\n+ }\n+\n+ public Builder dateTimeFormatter(FormatDateTimeFormatter dateTimeFormatter) {\n+ fieldType().setDateTimeFormatter(dateTimeFormatter);\n+ return this;\n+ }\n+\n+ @Override\n+ public LegacyDateFieldMapper build(BuilderContext context) {\n+ if (context.indexCreatedVersion().onOrAfter(Version.V_5_0_0)) {\n+ throw new IllegalStateException(\"Cannot use legacy numeric types after 5.0\");\n+ }\n+ setupFieldType(context);\n+ fieldType.setNullValue(nullValue);\n+ LegacyDateFieldMapper fieldMapper = new LegacyDateFieldMapper(name, fieldType, defaultFieldType, ignoreMalformed(context),\n+ coerce(context), context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo);\n+ return (LegacyDateFieldMapper) fieldMapper.includeInAll(includeInAll);\n+ }\n+\n+ @Override\n+ protected void setupFieldType(BuilderContext context) {\n+ FormatDateTimeFormatter dateTimeFormatter = fieldType().dateTimeFormatter;\n+ if (!locale.equals(dateTimeFormatter.locale())) {\n+ fieldType().setDateTimeFormatter(new FormatDateTimeFormatter(dateTimeFormatter.format(), dateTimeFormatter.parser(), dateTimeFormatter.printer(), locale));\n+ }\n+ super.setupFieldType(context);\n+ }\n+\n+ public Builder locale(Locale locale) {\n+ this.locale = locale;\n+ return this;\n+ }\n+\n+ @Override\n+ protected int maxPrecisionStep() {\n+ return 64;\n+ }\n+ }\n+\n+ public static class TypeParser implements Mapper.TypeParser {\n+ @Override\n+ public Mapper.Builder<?, ?> parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n+ LegacyDateFieldMapper.Builder builder = new LegacyDateFieldMapper.Builder(name);\n+ parseNumberField(builder, name, node, parserContext);\n+ boolean configuredFormat = false;\n+ for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {\n+ Map.Entry<String, Object> entry = iterator.next();\n+ String propName = Strings.toUnderscoreCase(entry.getKey());\n+ Object propNode = entry.getValue();\n+ if (propName.equals(\"null_value\")) {\n+ if (propNode == null) {\n+ throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n+ }\n+ builder.nullValue(propNode.toString());\n+ iterator.remove();\n+ } else if (propName.equals(\"format\")) {\n+ builder.dateTimeFormatter(parseDateTimeFormatter(propNode));\n+ configuredFormat = true;\n+ iterator.remove();\n+ } else if (propName.equals(\"numeric_resolution\")) {\n+ builder.timeUnit(TimeUnit.valueOf(propNode.toString().toUpperCase(Locale.ROOT)));\n+ iterator.remove();\n+ } else if (propName.equals(\"locale\")) {\n+ builder.locale(LocaleUtils.parse(propNode.toString()));\n+ iterator.remove();\n+ }\n+ }\n+ if (!configuredFormat) {\n+ builder.dateTimeFormatter(Defaults.DATE_TIME_FORMATTER);\n+ }\n+ return builder;\n+ }\n+ }\n+\n+ public static class DateFieldType extends NumberFieldType {\n+\n+ final class LateParsingQuery extends Query {\n+\n+ final Object lowerTerm;\n+ final Object upperTerm;\n+ final boolean includeLower;\n+ final boolean includeUpper;\n+ final DateTimeZone timeZone;\n+ final DateMathParser forcedDateParser;\n+\n+ public LateParsingQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, DateTimeZone timeZone, DateMathParser forcedDateParser) {\n+ this.lowerTerm = lowerTerm;\n+ this.upperTerm = upperTerm;\n+ this.includeLower = includeLower;\n+ this.includeUpper = includeUpper;\n+ this.timeZone = timeZone;\n+ this.forcedDateParser = 
forcedDateParser;\n+ }\n+\n+ @Override\n+ public Query rewrite(IndexReader reader) throws IOException {\n+ Query rewritten = super.rewrite(reader);\n+ if (rewritten != this) {\n+ return rewritten;\n+ }\n+ return innerRangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser);\n+ }\n+\n+ // Even though we only cache rewritten queries it is good to let all queries implement hashCode() and equals():\n+ @Override\n+ public boolean equals(Object o) {\n+ if (this == o) return true;\n+ if (!super.equals(o)) return false;\n+\n+ LateParsingQuery that = (LateParsingQuery) o;\n+ if (includeLower != that.includeLower) return false;\n+ if (includeUpper != that.includeUpper) return false;\n+ if (lowerTerm != null ? !lowerTerm.equals(that.lowerTerm) : that.lowerTerm != null) return false;\n+ if (upperTerm != null ? !upperTerm.equals(that.upperTerm) : that.upperTerm != null) return false;\n+ if (timeZone != null ? !timeZone.equals(that.timeZone) : that.timeZone != null) return false;\n+\n+ return true;\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return Objects.hash(super.hashCode(), lowerTerm, upperTerm, includeLower, includeUpper, timeZone);\n+ }\n+\n+ @Override\n+ public String toString(String s) {\n+ final StringBuilder sb = new StringBuilder();\n+ return sb.append(name()).append(':')\n+ .append(includeLower ? '[' : '{')\n+ .append((lowerTerm == null) ? \"*\" : lowerTerm.toString())\n+ .append(\" TO \")\n+ .append((upperTerm == null) ? \"*\" : upperTerm.toString())\n+ .append(includeUpper ? ']' : '}')\n+ .toString();\n+ }\n+ }\n+\n+ protected FormatDateTimeFormatter dateTimeFormatter = Defaults.DATE_TIME_FORMATTER;\n+ protected TimeUnit timeUnit = Defaults.TIME_UNIT;\n+ protected DateMathParser dateMathParser = new DateMathParser(dateTimeFormatter);\n+\n+ public DateFieldType() {\n+ super(LegacyNumericType.LONG);\n+ }\n+\n+ protected DateFieldType(DateFieldType ref) {\n+ super(ref);\n+ this.dateTimeFormatter = ref.dateTimeFormatter;\n+ this.timeUnit = ref.timeUnit;\n+ this.dateMathParser = ref.dateMathParser;\n+ }\n+\n+ @Override\n+ public DateFieldType clone() {\n+ return new DateFieldType(this);\n+ }\n+\n+ @Override\n+ public boolean equals(Object o) {\n+ if (!super.equals(o)) return false;\n+ DateFieldType that = (DateFieldType) o;\n+ return Objects.equals(dateTimeFormatter.format(), that.dateTimeFormatter.format()) &&\n+ Objects.equals(dateTimeFormatter.locale(), that.dateTimeFormatter.locale()) &&\n+ Objects.equals(timeUnit, that.timeUnit);\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return Objects.hash(super.hashCode(), dateTimeFormatter.format(), timeUnit);\n+ }\n+\n+ @Override\n+ public String typeName() {\n+ return CONTENT_TYPE;\n+ }\n+\n+ @Override\n+ public void checkCompatibility(MappedFieldType fieldType, List<String> conflicts, boolean strict) {\n+ super.checkCompatibility(fieldType, conflicts, strict);\n+ if (strict) {\n+ DateFieldType other = (DateFieldType)fieldType;\n+ if (Objects.equals(dateTimeFormatter().format(), other.dateTimeFormatter().format()) == false) {\n+ conflicts.add(\"mapper [\" + name() + \"] is used by multiple types. Set update_all_types to true to update [format] across all types.\");\n+ }\n+ if (Objects.equals(dateTimeFormatter().locale(), other.dateTimeFormatter().locale()) == false) {\n+ conflicts.add(\"mapper [\" + name() + \"] is used by multiple types. 
Set update_all_types to true to update [locale] across all types.\");\n+ }\n+ if (Objects.equals(timeUnit(), other.timeUnit()) == false) {\n+ conflicts.add(\"mapper [\" + name() + \"] is used by multiple types. Set update_all_types to true to update [numeric_resolution] across all types.\");\n+ }\n+ }\n+ }\n+\n+ public FormatDateTimeFormatter dateTimeFormatter() {\n+ return dateTimeFormatter;\n+ }\n+\n+ public void setDateTimeFormatter(FormatDateTimeFormatter dateTimeFormatter) {\n+ checkIfFrozen();\n+ this.dateTimeFormatter = dateTimeFormatter;\n+ this.dateMathParser = new DateMathParser(dateTimeFormatter);\n+ }\n+\n+ public TimeUnit timeUnit() {\n+ return timeUnit;\n+ }\n+\n+ public void setTimeUnit(TimeUnit timeUnit) {\n+ checkIfFrozen();\n+ this.timeUnit = timeUnit;\n+ this.dateMathParser = new DateMathParser(dateTimeFormatter);\n+ }\n+\n+ protected DateMathParser dateMathParser() {\n+ return dateMathParser;\n+ }\n+\n+ private long parseValue(Object value) {\n+ if (value instanceof Number) {\n+ return ((Number) value).longValue();\n+ }\n+ if (value instanceof BytesRef) {\n+ return dateTimeFormatter().parser().parseMillis(((BytesRef) value).utf8ToString());\n+ }\n+ return dateTimeFormatter().parser().parseMillis(value.toString());\n+ }\n+\n+ protected long parseStringValue(String value) {\n+ return dateTimeFormatter().parser().parseMillis(value);\n+ }\n+\n+ @Override\n+ public BytesRef indexedValueForSearch(Object value) {\n+ BytesRefBuilder bytesRef = new BytesRefBuilder();\n+ LegacyNumericUtils.longToPrefixCoded(parseValue(value), 0, bytesRef); // 0 because of exact match\n+ return bytesRef.get();\n+ }\n+\n+ @Override\n+ public Object valueForSearch(Object value) {\n+ Long val = (Long) value;\n+ if (val == null) {\n+ return null;\n+ }\n+ return dateTimeFormatter().printer().print(val);\n+ }\n+\n+ @Override\n+ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) {\n+ return rangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, null, null);\n+ }\n+\n+ @Override\n+ public Query fuzzyQuery(Object value, Fuzziness fuzziness, int prefixLength, int maxExpansions, boolean transpositions) {\n+ long iValue = parseValue(value);\n+ long iSim;\n+ try {\n+ iSim = fuzziness.asTimeValue().millis();\n+ } catch (Exception e) {\n+ // not a time format\n+ iSim = fuzziness.asLong();\n+ }\n+ return LegacyNumericRangeQuery.newLongRange(name(), numericPrecisionStep(),\n+ iValue - iSim,\n+ iValue + iSim,\n+ true, true);\n+ }\n+\n+ @Override\n+ public FieldStats stats(IndexReader reader) throws IOException {\n+ int maxDoc = reader.maxDoc();\n+ Terms terms = org.apache.lucene.index.MultiFields.getTerms(reader, name());\n+ if (terms == null) {\n+ return null;\n+ }\n+ long minValue = LegacyNumericUtils.getMinLong(terms);\n+ long maxValue = LegacyNumericUtils.getMaxLong(terms);\n+ return new FieldStats.Date(\n+ maxDoc, terms.getDocCount(), terms.getSumDocFreq(), terms.getSumTotalTermFreq(), minValue, maxValue, dateTimeFormatter()\n+ );\n+ }\n+\n+ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser) {\n+ return new LateParsingQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser);\n+ }\n+\n+ private Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser) {\n+ return 
LegacyNumericRangeQuery.newLongRange(name(), numericPrecisionStep(),\n+ lowerTerm == null ? null : parseToMilliseconds(lowerTerm, !includeLower, timeZone, forcedDateParser == null ? dateMathParser : forcedDateParser),\n+ upperTerm == null ? null : parseToMilliseconds(upperTerm, includeUpper, timeZone, forcedDateParser == null ? dateMathParser : forcedDateParser),\n+ includeLower, includeUpper);\n+ }\n+\n+ @Override\n+ public Relation isFieldWithinQuery(IndexReader reader,\n+ Object from, Object to,\n+ boolean includeLower, boolean includeUpper,\n+ DateTimeZone timeZone, DateMathParser dateParser) throws IOException {\n+ if (dateParser == null) {\n+ dateParser = this.dateMathParser;\n+ }\n+\n+ Terms terms = org.apache.lucene.index.MultiFields.getTerms(reader, name());\n+ if (terms == null) {\n+ // no terms, so nothing matches\n+ return Relation.DISJOINT;\n+ }\n+\n+ long minValue = LegacyNumericUtils.getMinLong(terms);\n+ long maxValue = LegacyNumericUtils.getMaxLong(terms);\n+\n+ long fromInclusive = Long.MIN_VALUE;\n+ if (from != null) {\n+ fromInclusive = parseToMilliseconds(from, !includeLower, timeZone, dateParser);\n+ if (includeLower == false) {\n+ if (fromInclusive == Long.MAX_VALUE) {\n+ return Relation.DISJOINT;\n+ }\n+ ++fromInclusive;\n+ }\n+ }\n+\n+ long toInclusive = Long.MAX_VALUE;\n+ if (to != null) {\n+ toInclusive = parseToMilliseconds(to, includeUpper, timeZone, dateParser);\n+ if (includeUpper == false) {\n+ if (toInclusive == Long.MIN_VALUE) {\n+ return Relation.DISJOINT;\n+ }\n+ --toInclusive;\n+ }\n+ }\n+\n+ if (minValue >= fromInclusive && maxValue <= toInclusive) {\n+ return Relation.WITHIN;\n+ } else if (maxValue < fromInclusive || minValue > toInclusive) {\n+ return Relation.DISJOINT;\n+ } else {\n+ return Relation.INTERSECTS;\n+ }\n+ }\n+\n+ public long parseToMilliseconds(Object value, boolean inclusive, @Nullable DateTimeZone zone, @Nullable DateMathParser forcedDateParser) {\n+ if (value instanceof Long) {\n+ return ((Long) value).longValue();\n+ }\n+\n+ DateMathParser dateParser = dateMathParser();\n+ if (forcedDateParser != null) {\n+ dateParser = forcedDateParser;\n+ }\n+\n+ String strValue;\n+ if (value instanceof BytesRef) {\n+ strValue = ((BytesRef) value).utf8ToString();\n+ } else {\n+ strValue = value.toString();\n+ }\n+ return dateParser.parse(strValue, now(), inclusive, zone);\n+ }\n+\n+ @Override\n+ public IndexFieldData.Builder fielddataBuilder() {\n+ failIfNoDocValues();\n+ return new DocValuesIndexFieldData.Builder().numericType(NumericType.LONG);\n+ }\n+\n+ @Override\n+ public DocValueFormat docValueFormat(@Nullable String format, DateTimeZone timeZone) {\n+ FormatDateTimeFormatter dateTimeFormatter = this.dateTimeFormatter;\n+ if (format != null) {\n+ dateTimeFormatter = Joda.forPattern(format);\n+ }\n+ if (timeZone == null) {\n+ timeZone = DateTimeZone.UTC;\n+ }\n+ return new DocValueFormat.DateTime(dateTimeFormatter, timeZone);\n+ }\n+ }\n+\n+ protected LegacyDateFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, Explicit<Boolean> ignoreMalformed,Explicit<Boolean> coerce,\n+ Settings indexSettings, MultiFields multiFields, CopyTo copyTo) {\n+ super(simpleName, fieldType, defaultFieldType, ignoreMalformed, coerce, indexSettings, multiFields, copyTo);\n+ }\n+\n+ @Override\n+ public DateFieldType fieldType() {\n+ return (DateFieldType) super.fieldType();\n+ }\n+\n+ private static Callable<Long> now() {\n+ return new Callable<Long>() {\n+ @Override\n+ public Long call() {\n+ final SearchContext context = 
SearchContext.current();\n+ return context != null\n+ ? context.nowInMillis()\n+ : System.currentTimeMillis();\n+ }\n+ };\n+ }\n+\n+ @Override\n+ protected boolean customBoost() {\n+ return true;\n+ }\n+\n+ @Override\n+ protected void innerParseCreateField(ParseContext context, List<Field> fields) throws IOException {\n+ String dateAsString = null;\n+ float boost = fieldType().boost();\n+ if (context.externalValueSet()) {\n+ Object externalValue = context.externalValue();\n+ dateAsString = (String) externalValue;\n+ if (dateAsString == null) {\n+ dateAsString = fieldType().nullValueAsString();\n+ }\n+ } else {\n+ XContentParser parser = context.parser();\n+ XContentParser.Token token = parser.currentToken();\n+ if (token == XContentParser.Token.VALUE_NULL) {\n+ dateAsString = fieldType().nullValueAsString();\n+ } else if (token == XContentParser.Token.VALUE_NUMBER) {\n+ dateAsString = parser.text();\n+ } else if (token == XContentParser.Token.START_OBJECT\n+ && Version.indexCreated(context.indexSettings()).before(Version.V_5_0_0_alpha1)) {\n+ String currentFieldName = null;\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n+ if (token == XContentParser.Token.FIELD_NAME) {\n+ currentFieldName = parser.currentName();\n+ } else {\n+ if (\"value\".equals(currentFieldName) || \"_value\".equals(currentFieldName)) {\n+ if (token == XContentParser.Token.VALUE_NULL) {\n+ dateAsString = fieldType().nullValueAsString();\n+ } else {\n+ dateAsString = parser.text();\n+ }\n+ } else if (\"boost\".equals(currentFieldName) || \"_boost\".equals(currentFieldName)) {\n+ boost = parser.floatValue();\n+ } else {\n+ throw new IllegalArgumentException(\"unknown property [\" + currentFieldName + \"]\");\n+ }\n+ }\n+ }\n+ } else {\n+ dateAsString = parser.text();\n+ }\n+ }\n+\n+ Long value = null;\n+ if (dateAsString != null) {\n+ if (context.includeInAll(includeInAll, this)) {\n+ context.allEntries().addText(fieldType().name(), dateAsString, boost);\n+ }\n+ value = fieldType().parseStringValue(dateAsString);\n+ }\n+\n+ if (value != null) {\n+ if (fieldType().indexOptions() != IndexOptions.NONE || fieldType().stored()) {\n+ CustomLongNumericField field = new CustomLongNumericField(value, fieldType());\n+ if (boost != 1f && Version.indexCreated(context.indexSettings()).before(Version.V_5_0_0_alpha1)) {\n+ field.setBoost(boost);\n+ }\n+ fields.add(field);\n+ }\n+ if (fieldType().hasDocValues()) {\n+ addDocValue(context, fields, value);\n+ }\n+ }\n+ }\n+\n+ @Override\n+ protected String contentType() {\n+ return CONTENT_TYPE;\n+ }\n+\n+ @Override\n+ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, Params params) throws IOException {\n+ super.doXContentBody(builder, includeDefaults, params);\n+\n+ if (includeDefaults || fieldType().numericPrecisionStep() != Defaults.PRECISION_STEP_64_BIT) {\n+ builder.field(\"precision_step\", fieldType().numericPrecisionStep());\n+ }\n+ builder.field(\"format\", fieldType().dateTimeFormatter().format());\n+ if (includeDefaults || fieldType().nullValueAsString() != null) {\n+ builder.field(\"null_value\", fieldType().nullValueAsString());\n+ }\n+ if (includeInAll != null) {\n+ builder.field(\"include_in_all\", includeInAll);\n+ } else if (includeDefaults) {\n+ builder.field(\"include_in_all\", false);\n+ }\n+\n+ if (includeDefaults || fieldType().timeUnit() != Defaults.TIME_UNIT) {\n+ builder.field(\"numeric_resolution\", fieldType().timeUnit().name().toLowerCase(Locale.ROOT));\n+ }\n+ // only serialize locale if needed, ROOT 
is the default, so no need to serialize that case as well...\n+ if (fieldType().dateTimeFormatter().locale() != null && fieldType().dateTimeFormatter().locale() != Locale.ROOT) {\n+ builder.field(\"locale\", fieldType().dateTimeFormatter().locale());\n+ } else if (includeDefaults) {\n+ if (fieldType().dateTimeFormatter().locale() == null) {\n+ builder.field(\"locale\", Locale.ROOT);\n+ } else {\n+ builder.field(\"locale\", fieldType().dateTimeFormatter().locale());\n+ }\n+ }\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/LegacyDateFieldMapper.java", "status": "added" }, { "diff": "@@ -0,0 +1,366 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper.core;\n+\n+import java.io.IOException;\n+import java.io.Reader;\n+import java.util.List;\n+\n+import org.apache.lucene.analysis.LegacyNumericTokenStream;\n+import org.apache.lucene.document.Field;\n+import org.apache.lucene.document.SortedNumericDocValuesField;\n+import org.apache.lucene.index.IndexOptions;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.common.Explicit;\n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.settings.Setting;\n+import org.elasticsearch.common.settings.Setting.Property;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.Fuzziness;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.index.mapper.FieldMapper;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.Mapper;\n+import org.elasticsearch.index.mapper.MapperParsingException;\n+import org.elasticsearch.index.mapper.ParseContext;\n+import org.elasticsearch.index.mapper.internal.AllFieldMapper;\n+import org.elasticsearch.search.DocValueFormat;\n+import org.joda.time.DateTimeZone;\n+\n+/**\n+ *\n+ */\n+public abstract class LegacyNumberFieldMapper extends FieldMapper implements AllFieldMapper.IncludeInAll {\n+ // this is private since it has a different default\n+ private static final Setting<Boolean> COERCE_SETTING =\n+ Setting.boolSetting(\"index.mapping.coerce\", true, Property.IndexScope);\n+\n+ public static class Defaults {\n+\n+ public static final int PRECISION_STEP_8_BIT = Integer.MAX_VALUE; // 1tpv: 256 terms at most, not useful\n+ public static final int PRECISION_STEP_16_BIT = 8; // 2tpv\n+ public static final int PRECISION_STEP_32_BIT = 8; // 4tpv\n+ public static final int PRECISION_STEP_64_BIT = 16; // 4tpv\n+\n+ public static final Explicit<Boolean> IGNORE_MALFORMED = new Explicit<>(false, false);\n+ public static final Explicit<Boolean> COERCE = new Explicit<>(true, false);\n+ }\n+\n+ public abstract static 
class Builder<T extends Builder, Y extends LegacyNumberFieldMapper> extends FieldMapper.Builder<T, Y> {\n+\n+ private Boolean ignoreMalformed;\n+\n+ private Boolean coerce;\n+\n+ public Builder(String name, MappedFieldType fieldType, int defaultPrecisionStep) {\n+ super(name, fieldType, fieldType);\n+ this.fieldType.setNumericPrecisionStep(defaultPrecisionStep);\n+ }\n+\n+ public T precisionStep(int precisionStep) {\n+ fieldType.setNumericPrecisionStep(precisionStep);\n+ return builder;\n+ }\n+\n+ public T ignoreMalformed(boolean ignoreMalformed) {\n+ this.ignoreMalformed = ignoreMalformed;\n+ return builder;\n+ }\n+\n+ protected Explicit<Boolean> ignoreMalformed(BuilderContext context) {\n+ if (ignoreMalformed != null) {\n+ return new Explicit<>(ignoreMalformed, true);\n+ }\n+ if (context.indexSettings() != null) {\n+ return new Explicit<>(IGNORE_MALFORMED_SETTING.get(context.indexSettings()), false);\n+ }\n+ return Defaults.IGNORE_MALFORMED;\n+ }\n+\n+ public T coerce(boolean coerce) {\n+ this.coerce = coerce;\n+ return builder;\n+ }\n+\n+ protected Explicit<Boolean> coerce(BuilderContext context) {\n+ if (coerce != null) {\n+ return new Explicit<>(coerce, true);\n+ }\n+ if (context.indexSettings() != null) {\n+ return new Explicit<>(COERCE_SETTING.get(context.indexSettings()), false);\n+ }\n+ return Defaults.COERCE;\n+ }\n+\n+ protected void setupFieldType(BuilderContext context) {\n+ super.setupFieldType(context);\n+ int precisionStep = fieldType.numericPrecisionStep();\n+ if (precisionStep <= 0 || precisionStep >= maxPrecisionStep()) {\n+ fieldType.setNumericPrecisionStep(Integer.MAX_VALUE);\n+ }\n+ }\n+\n+ protected abstract int maxPrecisionStep();\n+ }\n+\n+ public static abstract class NumberFieldType extends MappedFieldType {\n+\n+ public NumberFieldType(LegacyNumericType numericType) {\n+ setTokenized(false);\n+ setOmitNorms(true);\n+ setIndexOptions(IndexOptions.DOCS);\n+ setStoreTermVectors(false);\n+ setNumericType(numericType);\n+ }\n+\n+ protected NumberFieldType(NumberFieldType ref) {\n+ super(ref);\n+ }\n+\n+ @Override\n+ public void checkCompatibility(MappedFieldType other,\n+ List<String> conflicts, boolean strict) {\n+ super.checkCompatibility(other, conflicts, strict);\n+ if (numericPrecisionStep() != other.numericPrecisionStep()) {\n+ conflicts.add(\"mapper [\" + name() + \"] has different [precision_step] values\");\n+ }\n+ }\n+\n+ public abstract NumberFieldType clone();\n+\n+ @Override\n+ public abstract Query fuzzyQuery(Object value, Fuzziness fuzziness, int prefixLength, int maxExpansions, boolean transpositions);\n+\n+ @Override\n+ public DocValueFormat docValueFormat(@Nullable String format, DateTimeZone timeZone) {\n+ if (timeZone != null) {\n+ throw new IllegalArgumentException(\"Field [\" + name() + \"] of type [\" + typeName() + \"] does not support custom time zones\");\n+ }\n+ if (format == null) {\n+ return DocValueFormat.RAW;\n+ } else {\n+ return new DocValueFormat.Decimal(format);\n+ }\n+ }\n+ }\n+\n+ protected Boolean includeInAll;\n+\n+ protected Explicit<Boolean> ignoreMalformed;\n+\n+ protected Explicit<Boolean> coerce;\n+\n+ protected LegacyNumberFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType,\n+ Explicit<Boolean> ignoreMalformed, Explicit<Boolean> coerce, Settings indexSettings,\n+ MultiFields multiFields, CopyTo copyTo) {\n+ super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo);\n+ assert fieldType.tokenized() == false;\n+ this.ignoreMalformed = ignoreMalformed;\n+ 
this.coerce = coerce;\n+ }\n+\n+ @Override\n+ protected LegacyNumberFieldMapper clone() {\n+ return (LegacyNumberFieldMapper) super.clone();\n+ }\n+\n+ @Override\n+ public Mapper includeInAll(Boolean includeInAll) {\n+ if (includeInAll != null) {\n+ LegacyNumberFieldMapper clone = clone();\n+ clone.includeInAll = includeInAll;\n+ return clone;\n+ } else {\n+ return this;\n+ }\n+ }\n+\n+ @Override\n+ public Mapper includeInAllIfNotSet(Boolean includeInAll) {\n+ if (includeInAll != null && this.includeInAll == null) {\n+ LegacyNumberFieldMapper clone = clone();\n+ clone.includeInAll = includeInAll;\n+ return clone;\n+ } else {\n+ return this;\n+ }\n+ }\n+\n+ @Override\n+ public Mapper unsetIncludeInAll() {\n+ if (includeInAll != null) {\n+ LegacyNumberFieldMapper clone = clone();\n+ clone.includeInAll = null;\n+ return clone;\n+ } else {\n+ return this;\n+ }\n+ }\n+\n+ @Override\n+ protected void parseCreateField(ParseContext context, List<Field> fields) throws IOException {\n+ RuntimeException e = null;\n+ try {\n+ innerParseCreateField(context, fields);\n+ } catch (IllegalArgumentException e1) {\n+ e = e1;\n+ } catch (MapperParsingException e2) {\n+ e = e2;\n+ }\n+\n+ if (e != null && !ignoreMalformed.value()) {\n+ throw e;\n+ }\n+ }\n+\n+ protected abstract void innerParseCreateField(ParseContext context, List<Field> fields) throws IOException;\n+\n+ protected final void addDocValue(ParseContext context, List<Field> fields, long value) {\n+ fields.add(new SortedNumericDocValuesField(fieldType().name(), value));\n+ }\n+\n+ /**\n+ * Converts an object value into a double\n+ */\n+ public static double parseDoubleValue(Object value) {\n+ if (value instanceof Number) {\n+ return ((Number) value).doubleValue();\n+ }\n+\n+ if (value instanceof BytesRef) {\n+ return Double.parseDouble(((BytesRef) value).utf8ToString());\n+ }\n+\n+ return Double.parseDouble(value.toString());\n+ }\n+\n+ /**\n+ * Converts an object value into a long\n+ */\n+ public static long parseLongValue(Object value) {\n+ if (value instanceof Number) {\n+ return ((Number) value).longValue();\n+ }\n+\n+ if (value instanceof BytesRef) {\n+ return Long.parseLong(((BytesRef) value).utf8ToString());\n+ }\n+\n+ return Long.parseLong(value.toString());\n+ }\n+\n+ @Override\n+ protected void doMerge(Mapper mergeWith, boolean updateAllTypes) {\n+ super.doMerge(mergeWith, updateAllTypes);\n+ LegacyNumberFieldMapper nfmMergeWith = (LegacyNumberFieldMapper) mergeWith;\n+\n+ this.includeInAll = nfmMergeWith.includeInAll;\n+ if (nfmMergeWith.ignoreMalformed.explicit()) {\n+ this.ignoreMalformed = nfmMergeWith.ignoreMalformed;\n+ }\n+ if (nfmMergeWith.coerce.explicit()) {\n+ this.coerce = nfmMergeWith.coerce;\n+ }\n+ }\n+\n+ // used to we can use a numeric field in a document that is then parsed twice!\n+ public abstract static class CustomNumericField extends Field {\n+\n+ private ThreadLocal<LegacyNumericTokenStream> tokenStream = new ThreadLocal<LegacyNumericTokenStream>() {\n+ @Override\n+ protected LegacyNumericTokenStream initialValue() {\n+ return new LegacyNumericTokenStream(fieldType().numericPrecisionStep());\n+ }\n+ };\n+\n+ private static ThreadLocal<LegacyNumericTokenStream> tokenStream4 = new ThreadLocal<LegacyNumericTokenStream>() {\n+ @Override\n+ protected LegacyNumericTokenStream initialValue() {\n+ return new LegacyNumericTokenStream(4);\n+ }\n+ };\n+\n+ private static ThreadLocal<LegacyNumericTokenStream> tokenStream8 = new ThreadLocal<LegacyNumericTokenStream>() {\n+ @Override\n+ protected LegacyNumericTokenStream 
initialValue() {\n+ return new LegacyNumericTokenStream(8);\n+ }\n+ };\n+\n+ private static ThreadLocal<LegacyNumericTokenStream> tokenStream16 = new ThreadLocal<LegacyNumericTokenStream>() {\n+ @Override\n+ protected LegacyNumericTokenStream initialValue() {\n+ return new LegacyNumericTokenStream(16);\n+ }\n+ };\n+\n+ private static ThreadLocal<LegacyNumericTokenStream> tokenStreamMax = new ThreadLocal<LegacyNumericTokenStream>() {\n+ @Override\n+ protected LegacyNumericTokenStream initialValue() {\n+ return new LegacyNumericTokenStream(Integer.MAX_VALUE);\n+ }\n+ };\n+\n+ public CustomNumericField(Number value, MappedFieldType fieldType) {\n+ super(fieldType.name(), fieldType);\n+ if (value != null) {\n+ this.fieldsData = value;\n+ }\n+ }\n+\n+ protected LegacyNumericTokenStream getCachedStream() {\n+ if (fieldType().numericPrecisionStep() == 4) {\n+ return tokenStream4.get();\n+ } else if (fieldType().numericPrecisionStep() == 8) {\n+ return tokenStream8.get();\n+ } else if (fieldType().numericPrecisionStep() == 16) {\n+ return tokenStream16.get();\n+ } else if (fieldType().numericPrecisionStep() == Integer.MAX_VALUE) {\n+ return tokenStreamMax.get();\n+ }\n+ return tokenStream.get();\n+ }\n+\n+ @Override\n+ public String stringValue() {\n+ return null;\n+ }\n+\n+ @Override\n+ public Reader readerValue() {\n+ return null;\n+ }\n+\n+ public abstract String numericAsString();\n+ }\n+\n+ @Override\n+ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, Params params) throws IOException {\n+ super.doXContentBody(builder, includeDefaults, params);\n+\n+ if (includeDefaults || ignoreMalformed.explicit()) {\n+ builder.field(\"ignore_malformed\", ignoreMalformed.value());\n+ }\n+ if (includeDefaults || coerce.explicit()) {\n+ builder.field(\"coerce\", coerce.value());\n+ }\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/LegacyNumberFieldMapper.java", "status": "added" }, { "diff": "@@ -0,0 +1,202 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper.core;\n+\n+import org.apache.lucene.analysis.Analyzer;\n+import org.apache.lucene.analysis.TokenStream;\n+import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;\n+import org.apache.lucene.document.Field;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.common.Explicit;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.index.analysis.NamedAnalyzer;\n+import org.elasticsearch.index.mapper.FieldMapper;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.Mapper;\n+import org.elasticsearch.index.mapper.MapperParsingException;\n+import org.elasticsearch.index.mapper.ParseContext;\n+import org.elasticsearch.index.mapper.core.StringFieldMapper.ValueAndBoost;\n+\n+import java.io.IOException;\n+import java.util.Iterator;\n+import java.util.List;\n+import java.util.Map;\n+\n+import static org.apache.lucene.index.IndexOptions.NONE;\n+import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeIntegerValue;\n+import static org.elasticsearch.index.mapper.core.TypeParsers.parseNumberField;\n+\n+/**\n+ * A {@link FieldMapper} that takes a string and writes a count of the tokens in that string\n+ * to the index. In most ways the mapper acts just like an {@link LegacyIntegerFieldMapper}.\n+ */\n+public class LegacyTokenCountFieldMapper extends LegacyIntegerFieldMapper {\n+ public static final String CONTENT_TYPE = \"token_count\";\n+\n+ public static class Defaults extends LegacyIntegerFieldMapper.Defaults {\n+\n+ }\n+\n+ public static class Builder extends LegacyNumberFieldMapper.Builder<Builder, LegacyTokenCountFieldMapper> {\n+ private NamedAnalyzer analyzer;\n+\n+ public Builder(String name) {\n+ super(name, Defaults.FIELD_TYPE, Defaults.PRECISION_STEP_32_BIT);\n+ builder = this;\n+ }\n+\n+ public Builder analyzer(NamedAnalyzer analyzer) {\n+ this.analyzer = analyzer;\n+ return this;\n+ }\n+\n+ public NamedAnalyzer analyzer() {\n+ return analyzer;\n+ }\n+\n+ @Override\n+ public LegacyTokenCountFieldMapper build(BuilderContext context) {\n+ if (context.indexCreatedVersion().onOrAfter(Version.V_5_0_0)) {\n+ throw new IllegalStateException(\"Cannot use legacy numeric types after 5.0\");\n+ }\n+ setupFieldType(context);\n+ LegacyTokenCountFieldMapper fieldMapper = new LegacyTokenCountFieldMapper(name, fieldType, defaultFieldType,\n+ ignoreMalformed(context), coerce(context), context.indexSettings(),\n+ analyzer, multiFieldsBuilder.build(this, context), copyTo);\n+ return (LegacyTokenCountFieldMapper) fieldMapper.includeInAll(includeInAll);\n+ }\n+\n+ @Override\n+ protected int maxPrecisionStep() {\n+ return 32;\n+ }\n+ }\n+\n+ public static class TypeParser implements Mapper.TypeParser {\n+ @Override\n+ @SuppressWarnings(\"unchecked\")\n+ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n+ LegacyTokenCountFieldMapper.Builder builder = new LegacyTokenCountFieldMapper.Builder(name);\n+ for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {\n+ Map.Entry<String, Object> entry = iterator.next();\n+ String propName = Strings.toUnderscoreCase(entry.getKey());\n+ Object propNode = entry.getValue();\n+ if 
(propName.equals(\"null_value\")) {\n+ builder.nullValue(nodeIntegerValue(propNode));\n+ iterator.remove();\n+ } else if (propName.equals(\"analyzer\")) {\n+ NamedAnalyzer analyzer = parserContext.analysisService().analyzer(propNode.toString());\n+ if (analyzer == null) {\n+ throw new MapperParsingException(\"Analyzer [\" + propNode.toString() + \"] not found for field [\" + name + \"]\");\n+ }\n+ builder.analyzer(analyzer);\n+ iterator.remove();\n+ }\n+ }\n+ parseNumberField(builder, name, node, parserContext);\n+ if (builder.analyzer() == null) {\n+ throw new MapperParsingException(\"Analyzer must be set for field [\" + name + \"] but wasn't.\");\n+ }\n+ return builder;\n+ }\n+ }\n+\n+ private NamedAnalyzer analyzer;\n+\n+ protected LegacyTokenCountFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, Explicit<Boolean> ignoreMalformed,\n+ Explicit<Boolean> coerce, Settings indexSettings, NamedAnalyzer analyzer, MultiFields multiFields, CopyTo copyTo) {\n+ super(simpleName, fieldType, defaultFieldType, ignoreMalformed, coerce, indexSettings, multiFields, copyTo);\n+ this.analyzer = analyzer;\n+ }\n+\n+ @Override\n+ protected void parseCreateField(ParseContext context, List<Field> fields) throws IOException {\n+ ValueAndBoost valueAndBoost = StringFieldMapper.parseCreateFieldForString(context, null /* Out null value is an int so we convert*/, fieldType().boost());\n+ if (valueAndBoost.value() == null && fieldType().nullValue() == null) {\n+ return;\n+ }\n+\n+ if (fieldType().indexOptions() != NONE || fieldType().stored() || fieldType().hasDocValues()) {\n+ int count;\n+ if (valueAndBoost.value() == null) {\n+ count = fieldType().nullValue();\n+ } else {\n+ count = countPositions(analyzer, simpleName(), valueAndBoost.value());\n+ }\n+ addIntegerFields(context, fields, count, valueAndBoost.boost());\n+ }\n+ }\n+\n+ /**\n+ * Count position increments in a token stream. 
Package private for testing.\n+ * @param analyzer analyzer to create token stream\n+ * @param fieldName field name to pass to analyzer\n+ * @param fieldValue field value to pass to analyzer\n+ * @return number of position increments in a token stream\n+ * @throws IOException if tokenStream throws it\n+ */\n+ static int countPositions(Analyzer analyzer, String fieldName, String fieldValue) throws IOException {\n+ try (TokenStream tokenStream = analyzer.tokenStream(fieldName, fieldValue)) {\n+ int count = 0;\n+ PositionIncrementAttribute position = tokenStream.addAttribute(PositionIncrementAttribute.class);\n+ tokenStream.reset();\n+ while (tokenStream.incrementToken()) {\n+ count += position.getPositionIncrement();\n+ }\n+ tokenStream.end();\n+ count += position.getPositionIncrement();\n+ return count;\n+ }\n+ }\n+\n+ /**\n+ * Name of analyzer.\n+ * @return name of analyzer\n+ */\n+ public String analyzer() {\n+ return analyzer.name();\n+ }\n+\n+ @Override\n+ protected String contentType() {\n+ return CONTENT_TYPE;\n+ }\n+\n+ @Override\n+ protected void doMerge(Mapper mergeWith, boolean updateAllTypes) {\n+ super.doMerge(mergeWith, updateAllTypes);\n+ this.analyzer = ((LegacyTokenCountFieldMapper) mergeWith).analyzer;\n+ }\n+\n+ @Override\n+ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, Params params) throws IOException {\n+ super.doXContentBody(builder, includeDefaults, params);\n+\n+ builder.field(\"analyzer\", analyzer());\n+ }\n+\n+ @Override\n+ public boolean isGenerated() {\n+ return true;\n+ }\n+\n+}", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/LegacyTokenCountFieldMapper.java", "status": "added" }, { "diff": "@@ -19,79 +19,70 @@\n \n package org.elasticsearch.index.mapper.core;\n \n-import org.apache.lucene.analysis.Analyzer;\n-import org.apache.lucene.analysis.LegacyNumericTokenStream;\n-import org.apache.lucene.analysis.TokenStream;\n+import org.apache.lucene.document.DoublePoint;\n import org.apache.lucene.document.Field;\n-import org.apache.lucene.document.FieldType;\n+import org.apache.lucene.document.FloatPoint;\n+import org.apache.lucene.document.IntPoint;\n+import org.apache.lucene.document.LongPoint;\n import org.apache.lucene.document.SortedNumericDocValuesField;\n-import org.apache.lucene.index.DocValuesType;\n+import org.apache.lucene.document.StoredField;\n import org.apache.lucene.index.IndexOptions;\n-import org.apache.lucene.index.IndexableField;\n-import org.apache.lucene.index.IndexableFieldType;\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.PointValues;\n+import org.apache.lucene.search.BoostQuery;\n+import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.util.BytesRef;\n+import org.apache.lucene.util.NumericUtils;\n import org.elasticsearch.Version;\n+import org.elasticsearch.action.fieldstats.FieldStats;\n import org.elasticsearch.common.Explicit;\n-import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Setting.Property;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.Fuzziness;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.index.analysis.NamedAnalyzer;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.XContentParser.Token;\n import org.elasticsearch.index.fielddata.IndexFieldData;\n-import 
org.elasticsearch.index.fielddata.IndexNumericFieldData;\n+import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType;\n import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData;\n import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext;\n+import org.elasticsearch.index.mapper.core.LegacyNumberFieldMapper.Defaults;\n import org.elasticsearch.index.mapper.internal.AllFieldMapper;\n+import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.search.DocValueFormat;\n import org.joda.time.DateTimeZone;\n \n import java.io.IOException;\n-import java.io.Reader;\n+import java.util.ArrayList;\n+import java.util.Iterator;\n import java.util.List;\n+import java.util.Map;\n+import java.util.Objects;\n+\n+/** A {@link FieldMapper} for numeric types: byte, short, int, long, float and double. */\n+public class NumberFieldMapper extends FieldMapper implements AllFieldMapper.IncludeInAll {\n \n-/**\n- *\n- */\n-public abstract class NumberFieldMapper extends FieldMapper implements AllFieldMapper.IncludeInAll {\n // this is private since it has a different default\n private static final Setting<Boolean> COERCE_SETTING =\n- Setting.boolSetting(\"index.mapping.coerce\", true, Property.IndexScope);\n-\n- public static class Defaults {\n-\n- public static final int PRECISION_STEP_8_BIT = Integer.MAX_VALUE; // 1tpv: 256 terms at most, not useful\n- public static final int PRECISION_STEP_16_BIT = 8; // 2tpv\n- public static final int PRECISION_STEP_32_BIT = 8; // 4tpv\n- public static final int PRECISION_STEP_64_BIT = 16; // 4tpv\n+ Setting.boolSetting(\"index.mapping.coerce\", true, Property.IndexScope);\n \n- public static final Explicit<Boolean> IGNORE_MALFORMED = new Explicit<>(false, false);\n- public static final Explicit<Boolean> COERCE = new Explicit<>(true, false);\n- }\n-\n- public abstract static class Builder<T extends Builder, Y extends NumberFieldMapper> extends FieldMapper.Builder<T, Y> {\n+ public static class Builder extends FieldMapper.Builder<Builder, NumberFieldMapper> {\n \n private Boolean ignoreMalformed;\n-\n private Boolean coerce;\n \n- public Builder(String name, MappedFieldType fieldType, int defaultPrecisionStep) {\n- super(name, fieldType, fieldType);\n- this.fieldType.setNumericPrecisionStep(defaultPrecisionStep);\n+ public Builder(String name, NumberType type) {\n+ super(name, new NumberFieldType(type), new NumberFieldType(type));\n+ builder = this;\n }\n \n- public T precisionStep(int precisionStep) {\n- fieldType.setNumericPrecisionStep(precisionStep);\n- return builder;\n- }\n-\n- public T ignoreMalformed(boolean ignoreMalformed) {\n+ public Builder ignoreMalformed(boolean ignoreMalformed) {\n this.ignoreMalformed = ignoreMalformed;\n return builder;\n }\n@@ -106,7 +97,7 @@ protected Explicit<Boolean> ignoreMalformed(BuilderContext context) {\n return Defaults.IGNORE_MALFORMED;\n }\n \n- public T coerce(boolean coerce) {\n+ public Builder coerce(boolean coerce) {\n this.coerce = coerce;\n return builder;\n }\n@@ -121,49 +112,658 @@ protected Explicit<Boolean> coerce(BuilderContext context) {\n return Defaults.COERCE;\n }\n \n+ @Override\n protected void setupFieldType(BuilderContext context) {\n super.setupFieldType(context);\n- int precisionStep = fieldType.numericPrecisionStep();\n- if (precisionStep <= 0 || 
precisionStep >= maxPrecisionStep()) {\n- fieldType.setNumericPrecisionStep(Integer.MAX_VALUE);\n+ }\n+\n+ @Override\n+ public NumberFieldMapper build(BuilderContext context) {\n+ setupFieldType(context);\n+ NumberFieldMapper fieldMapper = new NumberFieldMapper(name, fieldType, defaultFieldType, ignoreMalformed(context),\n+ coerce(context), context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo);\n+ return (NumberFieldMapper) fieldMapper.includeInAll(includeInAll);\n+ }\n+ }\n+\n+ public static class TypeParser implements Mapper.TypeParser {\n+\n+ final NumberType type;\n+\n+ public TypeParser(NumberType type) {\n+ this.type = type;\n+ }\n+\n+ @Override\n+ public Mapper.Builder<?,?> parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n+ if (parserContext.indexVersionCreated().before(Version.V_5_0_0)) {\n+ switch (type) {\n+ case BYTE:\n+ return new LegacyByteFieldMapper.TypeParser().parse(name, node, parserContext);\n+ case SHORT:\n+ return new LegacyShortFieldMapper.TypeParser().parse(name, node, parserContext);\n+ case INTEGER:\n+ return new LegacyIntegerFieldMapper.TypeParser().parse(name, node, parserContext);\n+ case LONG:\n+ return new LegacyLongFieldMapper.TypeParser().parse(name, node, parserContext);\n+ case FLOAT:\n+ return new LegacyFloatFieldMapper.TypeParser().parse(name, node, parserContext);\n+ case DOUBLE:\n+ return new LegacyDoubleFieldMapper.TypeParser().parse(name, node, parserContext);\n+ default:\n+ throw new AssertionError();\n+ }\n+ }\n+ Builder builder = new Builder(name, type);\n+ TypeParsers.parseField(builder, name, node, parserContext);\n+ for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {\n+ Map.Entry<String, Object> entry = iterator.next();\n+ String propName = entry.getKey();\n+ Object propNode = entry.getValue();\n+ if (propName.equals(\"null_value\")) {\n+ if (propNode == null) {\n+ throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n+ }\n+ builder.nullValue(type.parse(propNode));\n+ iterator.remove();\n+ } else if (propName.equals(\"ignore_malformed\")) {\n+ builder.ignoreMalformed(TypeParsers.nodeBooleanValue(\"ignore_malformed\", propNode, parserContext));\n+ iterator.remove();\n+ } else if (propName.equals(\"coerce\")) {\n+ builder.coerce(TypeParsers.nodeBooleanValue(\"coerce\", propNode, parserContext));\n+ iterator.remove();\n+ }\n }\n+ return builder;\n }\n+ }\n+\n+ public enum NumberType {\n+ FLOAT(\"float\", NumericType.FLOAT) {\n+ @Override\n+ Float parse(Object value) {\n+ if (value instanceof Number) {\n+ return ((Number) value).floatValue();\n+ }\n+ if (value instanceof BytesRef) {\n+ value = ((BytesRef) value).utf8ToString();\n+ }\n+ return Float.parseFloat(value.toString());\n+ }\n+\n+ @Override\n+ Float parse(XContentParser parser, boolean coerce) throws IOException {\n+ return parser.floatValue(coerce);\n+ }\n+\n+ @Override\n+ Query termQuery(String field, Object value) {\n+ float v = parse(value);\n+ return FloatPoint.newExactQuery(field, v);\n+ }\n+\n+ @Override\n+ Query termsQuery(String field, List<Object> values) {\n+ float[] v = new float[values.size()];\n+ for (int i = 0; i < values.size(); ++i) {\n+ v[i] = parse(values.get(i));\n+ }\n+ return FloatPoint.newSetQuery(field, v);\n+ }\n+\n+ @Override\n+ Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) {\n+ float l = Float.NEGATIVE_INFINITY;\n+ float u = 
Float.POSITIVE_INFINITY;\n+ if (lowerTerm != null) {\n+ l = parse(lowerTerm);\n+ if (includeLower == false) {\n+ l = Math.nextUp(l);\n+ }\n+ }\n+ if (upperTerm != null) {\n+ u = parse(upperTerm);\n+ if (includeUpper == false) {\n+ u = Math.nextDown(u);\n+ }\n+ }\n+ return FloatPoint.newRangeQuery(field, l, u);\n+ }\n+\n+ @Override\n+ Query fuzzyQuery(String field, Object value, Fuzziness fuzziness) {\n+ float base = parse(value);\n+ float delta = fuzziness.asFloat();\n+ return rangeQuery(field, base - delta, base + delta, true, true);\n+ }\n+\n+ @Override\n+ public List<Field> createFields(String name, Number value, boolean indexed, boolean docValued, boolean stored) {\n+ List<Field> fields = new ArrayList<>();\n+ if (indexed) {\n+ fields.add(new FloatPoint(name, value.floatValue()));\n+ }\n+ if (docValued) {\n+ fields.add(new SortedNumericDocValuesField(name, NumericUtils.floatToSortableInt(value.floatValue())));\n+ }\n+ if (stored) {\n+ fields.add(new StoredField(name, value.floatValue()));\n+ }\n+ return fields;\n+ }\n \n- protected abstract int maxPrecisionStep();\n+ @Override\n+ FieldStats.Double stats(IndexReader reader, String field) throws IOException {\n+ long size = PointValues.size(reader, field);\n+ if (size == 0) {\n+ return null;\n+ }\n+ int docCount = PointValues.getDocCount(reader, field);\n+ byte[] min = PointValues.getMinPackedValue(reader, field);\n+ byte[] max = PointValues.getMaxPackedValue(reader, field);\n+ return new FieldStats.Double(reader.maxDoc(),docCount, -1L, size,\n+ FloatPoint.decodeDimension(min, 0),\n+ FloatPoint.decodeDimension(max, 0));\n+ }\n+ },\n+ DOUBLE(\"double\", NumericType.DOUBLE) {\n+ @Override\n+ Double parse(Object value) {\n+ if (value instanceof Number) {\n+ return ((Number) value).doubleValue();\n+ }\n+ if (value instanceof BytesRef) {\n+ value = ((BytesRef) value).utf8ToString();\n+ }\n+ return Double.parseDouble(value.toString());\n+ }\n+\n+ @Override\n+ Double parse(XContentParser parser, boolean coerce) throws IOException {\n+ return parser.doubleValue(coerce);\n+ }\n+\n+ @Override\n+ Query termQuery(String field, Object value) {\n+ double v = parse(value);\n+ return DoublePoint.newExactQuery(field, v);\n+ }\n+\n+ @Override\n+ Query termsQuery(String field, List<Object> values) {\n+ double[] v = new double[values.size()];\n+ for (int i = 0; i < values.size(); ++i) {\n+ v[i] = parse(values.get(i));\n+ }\n+ return DoublePoint.newSetQuery(field, v);\n+ }\n+\n+ @Override\n+ Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) {\n+ double l = Double.NEGATIVE_INFINITY;\n+ double u = Double.POSITIVE_INFINITY;\n+ if (lowerTerm != null) {\n+ l = parse(lowerTerm);\n+ if (includeLower == false) {\n+ l = Math.nextUp(l);\n+ }\n+ }\n+ if (upperTerm != null) {\n+ u = parse(upperTerm);\n+ if (includeUpper == false) {\n+ u = Math.nextDown(u);\n+ }\n+ }\n+ return DoublePoint.newRangeQuery(field, l, u);\n+ }\n+\n+ @Override\n+ Query fuzzyQuery(String field, Object value, Fuzziness fuzziness) {\n+ double base = parse(value);\n+ double delta = fuzziness.asFloat();\n+ return rangeQuery(field, base - delta, base + delta, true, true);\n+ }\n+\n+ @Override\n+ public List<Field> createFields(String name, Number value, boolean indexed, boolean docValued, boolean stored) {\n+ List<Field> fields = new ArrayList<>();\n+ if (indexed) {\n+ fields.add(new DoublePoint(name, value.doubleValue()));\n+ }\n+ if (docValued) {\n+ fields.add(new SortedNumericDocValuesField(name, 
NumericUtils.doubleToSortableLong(value.doubleValue())));\n+ }\n+ if (stored) {\n+ fields.add(new StoredField(name, value.doubleValue()));\n+ }\n+ return fields;\n+ }\n+\n+ @Override\n+ FieldStats.Double stats(IndexReader reader, String field) throws IOException {\n+ long size = PointValues.size(reader, field);\n+ if (size == 0) {\n+ return null;\n+ }\n+ int docCount = PointValues.getDocCount(reader, field);\n+ byte[] min = PointValues.getMinPackedValue(reader, field);\n+ byte[] max = PointValues.getMaxPackedValue(reader, field);\n+ return new FieldStats.Double(reader.maxDoc(),docCount, -1L, size,\n+ DoublePoint.decodeDimension(min, 0),\n+ DoublePoint.decodeDimension(max, 0));\n+ }\n+ },\n+ BYTE(\"byte\", NumericType.BYTE) {\n+ @Override\n+ Byte parse(Object value) {\n+ if (value instanceof Byte) {\n+ return (Byte) value;\n+ }\n+ if (value instanceof BytesRef) {\n+ value = ((BytesRef) value).utf8ToString();\n+ }\n+ return Byte.parseByte(value.toString());\n+ }\n+\n+ @Override\n+ Short parse(XContentParser parser, boolean coerce) throws IOException {\n+ int value = parser.intValue(coerce);\n+ if (value < Byte.MIN_VALUE || value > Byte.MAX_VALUE) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] is out of range for a byte\");\n+ }\n+ return (short) value;\n+ }\n+\n+ @Override\n+ Query termQuery(String field, Object value) {\n+ return INTEGER.termQuery(field, value);\n+ }\n+\n+ @Override\n+ Query termsQuery(String field, List<Object> values) {\n+ return INTEGER.termsQuery(field, values);\n+ }\n+\n+ @Override\n+ Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) {\n+ return INTEGER.rangeQuery(field, lowerTerm, upperTerm, includeLower, includeUpper);\n+ }\n+\n+ @Override\n+ Query fuzzyQuery(String field, Object value, Fuzziness fuzziness) {\n+ return INTEGER.fuzzyQuery(field, value, fuzziness);\n+ }\n+\n+ @Override\n+ public List<Field> createFields(String name, Number value, boolean indexed, boolean docValued, boolean stored) {\n+ return INTEGER.createFields(name, value, indexed, docValued, stored);\n+ }\n+\n+ @Override\n+ FieldStats.Long stats(IndexReader reader, String field) throws IOException {\n+ return (FieldStats.Long) INTEGER.stats(reader, field);\n+ }\n+\n+ @Override\n+ Number valueForSearch(Number value) {\n+ return value.byteValue();\n+ }\n+ },\n+ SHORT(\"short\", NumericType.SHORT) {\n+ @Override\n+ Short parse(Object value) {\n+ if (value instanceof Number) {\n+ return ((Number) value).shortValue();\n+ }\n+ if (value instanceof BytesRef) {\n+ value = ((BytesRef) value).utf8ToString();\n+ }\n+ return Short.parseShort(value.toString());\n+ }\n+\n+ @Override\n+ Short parse(XContentParser parser, boolean coerce) throws IOException {\n+ int value = parser.intValue(coerce);\n+ if (value < Short.MIN_VALUE || value > Short.MAX_VALUE) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] is out of range for a short\");\n+ }\n+ return (short) value;\n+ }\n+\n+ @Override\n+ Query termQuery(String field, Object value) {\n+ return INTEGER.termQuery(field, value);\n+ }\n+\n+ @Override\n+ Query termsQuery(String field, List<Object> values) {\n+ return INTEGER.termsQuery(field, values);\n+ }\n+\n+ @Override\n+ Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) {\n+ return INTEGER.rangeQuery(field, lowerTerm, upperTerm, includeLower, includeUpper);\n+ }\n+\n+ @Override\n+ Query fuzzyQuery(String field, Object value, Fuzziness fuzziness) {\n+ return 
INTEGER.fuzzyQuery(field, value, fuzziness);\n+ }\n+\n+ @Override\n+ public List<Field> createFields(String name, Number value, boolean indexed, boolean docValued, boolean stored) {\n+ return INTEGER.createFields(name, value, indexed, docValued, stored);\n+ }\n+\n+ @Override\n+ FieldStats.Long stats(IndexReader reader, String field) throws IOException {\n+ return (FieldStats.Long) INTEGER.stats(reader, field);\n+ }\n+\n+ @Override\n+ Number valueForSearch(Number value) {\n+ return value.shortValue();\n+ }\n+ },\n+ INTEGER(\"integer\", NumericType.INT) {\n+ @Override\n+ Integer parse(Object value) {\n+ if (value instanceof Number) {\n+ return ((Number) value).intValue();\n+ }\n+ if (value instanceof BytesRef) {\n+ value = ((BytesRef) value).utf8ToString();\n+ }\n+ return Integer.parseInt(value.toString());\n+ }\n+\n+ @Override\n+ Integer parse(XContentParser parser, boolean coerce) throws IOException {\n+ return parser.intValue(coerce);\n+ }\n+\n+ @Override\n+ Query termQuery(String field, Object value) {\n+ int v = parse(value);\n+ return IntPoint.newExactQuery(field, v);\n+ }\n+\n+ @Override\n+ Query termsQuery(String field, List<Object> values) {\n+ int[] v = new int[values.size()];\n+ for (int i = 0; i < values.size(); ++i) {\n+ v[i] = parse(values.get(i));\n+ }\n+ return IntPoint.newSetQuery(field, v);\n+ }\n+\n+ @Override\n+ Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) {\n+ int l = Integer.MIN_VALUE;\n+ int u = Integer.MAX_VALUE;\n+ if (lowerTerm != null) {\n+ l = parse(lowerTerm);\n+ if (includeLower == false) {\n+ if (l == Integer.MAX_VALUE) {\n+ return new MatchNoDocsQuery();\n+ }\n+ ++l;\n+ }\n+ }\n+ if (upperTerm != null) {\n+ u = parse(upperTerm);\n+ if (includeUpper == false) {\n+ if (u == Integer.MIN_VALUE) {\n+ return new MatchNoDocsQuery();\n+ }\n+ --u;\n+ }\n+ }\n+ return IntPoint.newRangeQuery(field, l, u);\n+ }\n+\n+ @Override\n+ Query fuzzyQuery(String field, Object value, Fuzziness fuzziness) {\n+ int base = parse(value);\n+ int delta = fuzziness.asInt();\n+ return rangeQuery(field, base - delta, base + delta, true, true);\n+ }\n+\n+ @Override\n+ public List<Field> createFields(String name, Number value, boolean indexed, boolean docValued, boolean stored) {\n+ List<Field> fields = new ArrayList<>();\n+ if (indexed) {\n+ fields.add(new IntPoint(name, value.intValue()));\n+ }\n+ if (docValued) {\n+ fields.add(new SortedNumericDocValuesField(name, value.intValue()));\n+ }\n+ if (stored) {\n+ fields.add(new StoredField(name, value.intValue()));\n+ }\n+ return fields;\n+ }\n+\n+ @Override\n+ FieldStats.Long stats(IndexReader reader, String field) throws IOException {\n+ long size = PointValues.size(reader, field);\n+ if (size == 0) {\n+ return null;\n+ }\n+ int docCount = PointValues.getDocCount(reader, field);\n+ byte[] min = PointValues.getMinPackedValue(reader, field);\n+ byte[] max = PointValues.getMaxPackedValue(reader, field);\n+ return new FieldStats.Long(reader.maxDoc(),docCount, -1L, size,\n+ IntPoint.decodeDimension(min, 0),\n+ IntPoint.decodeDimension(max, 0));\n+ }\n+ },\n+ LONG(\"long\", NumericType.LONG) {\n+ @Override\n+ Long parse(Object value) {\n+ if (value instanceof Number) {\n+ return ((Number) value).longValue();\n+ }\n+ if (value instanceof BytesRef) {\n+ value = ((BytesRef) value).utf8ToString();\n+ }\n+ return Long.parseLong(value.toString());\n+ }\n+\n+ @Override\n+ Long parse(XContentParser parser, boolean coerce) throws IOException {\n+ return parser.longValue(coerce);\n+ }\n+\n+ 
@Override\n+ Query termQuery(String field, Object value) {\n+ long v = parse(value);\n+ return LongPoint.newExactQuery(field, v);\n+ }\n+\n+ @Override\n+ Query termsQuery(String field, List<Object> values) {\n+ long[] v = new long[values.size()];\n+ for (int i = 0; i < values.size(); ++i) {\n+ v[i] = parse(values.get(i));\n+ }\n+ return LongPoint.newSetQuery(field, v);\n+ }\n+\n+ @Override\n+ Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) {\n+ long l = Long.MIN_VALUE;\n+ long u = Long.MAX_VALUE;\n+ if (lowerTerm != null) {\n+ l = parse(lowerTerm);\n+ if (includeLower == false) {\n+ if (l == Long.MAX_VALUE) {\n+ return new MatchNoDocsQuery();\n+ }\n+ ++l;\n+ }\n+ }\n+ if (upperTerm != null) {\n+ u = parse(upperTerm);\n+ if (includeUpper == false) {\n+ if (u == Long.MIN_VALUE) {\n+ return new MatchNoDocsQuery();\n+ }\n+ --u;\n+ }\n+ }\n+ return LongPoint.newRangeQuery(field, l, u);\n+ }\n+\n+ @Override\n+ Query fuzzyQuery(String field, Object value, Fuzziness fuzziness) {\n+ long base = parse(value);\n+ long delta = fuzziness.asLong();\n+ return rangeQuery(field, base - delta, base + delta, true, true);\n+ }\n+\n+ @Override\n+ public List<Field> createFields(String name, Number value, boolean indexed, boolean docValued, boolean stored) {\n+ List<Field> fields = new ArrayList<>();\n+ if (indexed) {\n+ fields.add(new LongPoint(name, value.longValue()));\n+ }\n+ if (docValued) {\n+ fields.add(new SortedNumericDocValuesField(name, value.longValue()));\n+ }\n+ if (stored) {\n+ fields.add(new StoredField(name, value.longValue()));\n+ }\n+ return fields;\n+ }\n+\n+ @Override\n+ FieldStats.Long stats(IndexReader reader, String field) throws IOException {\n+ long size = PointValues.size(reader, field);\n+ if (size == 0) {\n+ return null;\n+ }\n+ int docCount = PointValues.getDocCount(reader, field);\n+ byte[] min = PointValues.getMinPackedValue(reader, field);\n+ byte[] max = PointValues.getMaxPackedValue(reader, field);\n+ return new FieldStats.Long(reader.maxDoc(),docCount, -1L, size,\n+ LongPoint.decodeDimension(min, 0),\n+ LongPoint.decodeDimension(max, 0));\n+ }\n+ };\n+\n+ private final String name;\n+ private final NumericType numericType;\n+\n+ NumberType(String name, NumericType numericType) {\n+ this.name = name;\n+ this.numericType = numericType;\n+ }\n+\n+ /** Get the associated type name. */\n+ public final String typeName() {\n+ return name;\n+ }\n+ /** Get the associated numerit type */\n+ final NumericType numericType() {\n+ return numericType;\n+ }\n+ abstract Query termQuery(String field, Object value);\n+ abstract Query termsQuery(String field, List<Object> values);\n+ abstract Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper);\n+ abstract Query fuzzyQuery(String field, Object value, Fuzziness fuzziness);\n+ abstract Number parse(XContentParser parser, boolean coerce) throws IOException;\n+ abstract Number parse(Object value);\n+ public abstract List<Field> createFields(String name, Number value, boolean indexed, boolean docValued, boolean stored);\n+ abstract FieldStats<? 
extends Number> stats(IndexReader reader, String field) throws IOException;\n+ Number valueForSearch(Number value) {\n+ return value;\n+ }\n }\n \n- public static abstract class NumberFieldType extends MappedFieldType {\n+ public static final class NumberFieldType extends MappedFieldType {\n \n- public NumberFieldType(LegacyNumericType numericType) {\n+ NumberType type;\n+\n+ public NumberFieldType(NumberType type) {\n+ super();\n+ this.type = Objects.requireNonNull(type);\n setTokenized(false);\n+ setHasDocValues(true);\n setOmitNorms(true);\n- setIndexOptions(IndexOptions.DOCS);\n- setStoreTermVectors(false);\n- setNumericType(numericType);\n }\n \n- protected NumberFieldType(NumberFieldType ref) {\n- super(ref);\n+ NumberFieldType(NumberFieldType other) {\n+ super(other);\n+ this.type = other.type;\n+ }\n+\n+ @Override\n+ public MappedFieldType clone() {\n+ return new NumberFieldType(this);\n }\n \n @Override\n- public void checkCompatibility(MappedFieldType other,\n- List<String> conflicts, boolean strict) {\n- super.checkCompatibility(other, conflicts, strict);\n- if (numericPrecisionStep() != other.numericPrecisionStep()) {\n- conflicts.add(\"mapper [\" + name() + \"] has different [precision_step] values\");\n+ public String typeName() {\n+ return type.name;\n+ }\n+\n+ @Override\n+ public Query termQuery(Object value, QueryShardContext context) {\n+ Query query = type.termQuery(name(), value);\n+ if (boost() != 1f) {\n+ query = new BoostQuery(query, boost());\n+ }\n+ return query;\n+ }\n+\n+ @Override\n+ public Query termsQuery(List values, QueryShardContext context) {\n+ Query query = type.termsQuery(name(), values);\n+ if (boost() != 1f) {\n+ query = new BoostQuery(query, boost());\n+ }\n+ return query;\n+ }\n+\n+ @Override\n+ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) {\n+ Query query = type.rangeQuery(name(), lowerTerm, upperTerm, includeLower, includeUpper);\n+ if (boost() != 1f) {\n+ query = new BoostQuery(query, boost());\n }\n+ return query;\n+ }\n+\n+ @Override\n+ public Query fuzzyQuery(Object value, Fuzziness fuzziness, int prefixLength, int maxExpansions, boolean transpositions) {\n+ return type.fuzzyQuery(name(), value, fuzziness);\n }\n \n- public abstract NumberFieldType clone();\n+ @Override\n+ public FieldStats stats(IndexReader reader) throws IOException {\n+ return type.stats(reader, name());\n+ }\n+\n+ @Override\n+ public IndexFieldData.Builder fielddataBuilder() {\n+ failIfNoDocValues();\n+ return new DocValuesIndexFieldData.Builder().numericType(type.numericType());\n+ }\n \n @Override\n- public abstract Query fuzzyQuery(Object value, Fuzziness fuzziness, int prefixLength, int maxExpansions, boolean transpositions);\n+ public Object valueForSearch(Object value) {\n+ if (value == null) {\n+ return null;\n+ }\n+ return type.valueForSearch((Number) value);\n+ }\n \n @Override\n- public DocValueFormat docValueFormat(@Nullable String format, DateTimeZone timeZone) {\n+ public DocValueFormat docValueFormat(String format, DateTimeZone timeZone) {\n if (timeZone != null) {\n- throw new IllegalArgumentException(\"Field [\" + name() + \"] of type [\" + typeName() + \"] does not support custom time zones\");\n+ throw new IllegalArgumentException(\"Field [\" + name() + \"] of type [\" + typeName()\n+ + \"] does not support custom time zones\");\n }\n if (format == null) {\n return DocValueFormat.RAW;\n@@ -173,21 +773,36 @@ public DocValueFormat docValueFormat(@Nullable String format, DateTimeZone timeZ\n }\n }\n 
\n- protected Boolean includeInAll;\n+ private Boolean includeInAll;\n \n- protected Explicit<Boolean> ignoreMalformed;\n+ private Explicit<Boolean> ignoreMalformed;\n \n- protected Explicit<Boolean> coerce;\n+ private Explicit<Boolean> coerce;\n \n- protected NumberFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType,\n- Explicit<Boolean> ignoreMalformed, Explicit<Boolean> coerce, Settings indexSettings,\n- MultiFields multiFields, CopyTo copyTo) {\n+ private NumberFieldMapper(\n+ String simpleName,\n+ MappedFieldType fieldType,\n+ MappedFieldType defaultFieldType,\n+ Explicit<Boolean> ignoreMalformed,\n+ Explicit<Boolean> coerce,\n+ Settings indexSettings,\n+ MultiFields multiFields,\n+ CopyTo copyTo) {\n super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo);\n- assert fieldType.tokenized() == false;\n this.ignoreMalformed = ignoreMalformed;\n this.coerce = coerce;\n }\n \n+ @Override\n+ public NumberFieldType fieldType() {\n+ return (NumberFieldType) super.fieldType();\n+ }\n+\n+ @Override\n+ protected String contentType() {\n+ return fieldType.typeName();\n+ }\n+\n @Override\n protected NumberFieldMapper clone() {\n return (NumberFieldMapper) super.clone();\n@@ -228,192 +843,67 @@ public Mapper unsetIncludeInAll() {\n \n @Override\n protected void parseCreateField(ParseContext context, List<Field> fields) throws IOException {\n- RuntimeException e = null;\n- try {\n- innerParseCreateField(context, fields);\n- } catch (IllegalArgumentException e1) {\n- e = e1;\n- } catch (MapperParsingException e2) {\n- e = e2;\n- }\n-\n- if (e != null && !ignoreMalformed.value()) {\n- throw e;\n+ XContentParser parser = context.parser();\n+ Object value;\n+ Number numericValue = null;\n+ if (context.externalValueSet()) {\n+ value = context.externalValue();\n+ } else if (parser.currentToken() == Token.VALUE_NULL) {\n+ value = null;\n+ } else if (coerce.value()\n+ && parser.currentToken() == Token.VALUE_STRING\n+ && parser.textLength() == 0) {\n+ value = null;\n+ } else {\n+ value = parser.textOrNull();\n+ if (value != null) {\n+ try {\n+ numericValue = fieldType().type.parse(parser, coerce.value());\n+ } catch (IllegalArgumentException e) {\n+ if (ignoreMalformed.value()) {\n+ return;\n+ } else {\n+ throw e;\n+ }\n+ }\n+ }\n }\n- }\n-\n- protected abstract void innerParseCreateField(ParseContext context, List<Field> fields) throws IOException;\n-\n- protected final void addDocValue(ParseContext context, List<Field> fields, long value) {\n- fields.add(new SortedNumericDocValuesField(fieldType().name(), value));\n- }\n \n- /**\n- * Converts an object value into a double\n- */\n- public static double parseDoubleValue(Object value) {\n- if (value instanceof Number) {\n- return ((Number) value).doubleValue();\n+ if (value == null) {\n+ value = fieldType().nullValue();\n }\n \n- if (value instanceof BytesRef) {\n- return Double.parseDouble(((BytesRef) value).utf8ToString());\n+ if (value == null) {\n+ return;\n }\n \n- return Double.parseDouble(value.toString());\n- }\n-\n- /**\n- * Converts an object value into a long\n- */\n- public static long parseLongValue(Object value) {\n- if (value instanceof Number) {\n- return ((Number) value).longValue();\n+ if (numericValue == null) {\n+ numericValue = fieldType().type.parse(value);\n }\n \n- if (value instanceof BytesRef) {\n- return Long.parseLong(((BytesRef) value).utf8ToString());\n+ if (context.includeInAll(includeInAll, this)) {\n+ context.allEntries().addText(fieldType().name(), 
value.toString(), fieldType().boost());\n }\n \n- return Long.parseLong(value.toString());\n+ boolean indexed = fieldType().indexOptions() != IndexOptions.NONE;\n+ boolean docValued = fieldType().hasDocValues();\n+ boolean stored = fieldType().stored();\n+ fields.addAll(fieldType().type.createFields(fieldType().name(), numericValue, indexed, docValued, stored));\n }\n \n @Override\n protected void doMerge(Mapper mergeWith, boolean updateAllTypes) {\n super.doMerge(mergeWith, updateAllTypes);\n- NumberFieldMapper nfmMergeWith = (NumberFieldMapper) mergeWith;\n-\n- this.includeInAll = nfmMergeWith.includeInAll;\n- if (nfmMergeWith.ignoreMalformed.explicit()) {\n- this.ignoreMalformed = nfmMergeWith.ignoreMalformed;\n+ NumberFieldMapper other = (NumberFieldMapper) mergeWith;\n+ this.includeInAll = other.includeInAll;\n+ if (other.ignoreMalformed.explicit()) {\n+ this.ignoreMalformed = other.ignoreMalformed;\n }\n- if (nfmMergeWith.coerce.explicit()) {\n- this.coerce = nfmMergeWith.coerce;\n+ if (other.coerce.explicit()) {\n+ this.coerce = other.coerce;\n }\n }\n \n- // used to we can use a numeric field in a document that is then parsed twice!\n- public abstract static class CustomNumericField extends Field {\n-\n- private ThreadLocal<LegacyNumericTokenStream> tokenStream = new ThreadLocal<LegacyNumericTokenStream>() {\n- @Override\n- protected LegacyNumericTokenStream initialValue() {\n- return new LegacyNumericTokenStream(fieldType().numericPrecisionStep());\n- }\n- };\n-\n- private static ThreadLocal<LegacyNumericTokenStream> tokenStream4 = new ThreadLocal<LegacyNumericTokenStream>() {\n- @Override\n- protected LegacyNumericTokenStream initialValue() {\n- return new LegacyNumericTokenStream(4);\n- }\n- };\n-\n- private static ThreadLocal<LegacyNumericTokenStream> tokenStream8 = new ThreadLocal<LegacyNumericTokenStream>() {\n- @Override\n- protected LegacyNumericTokenStream initialValue() {\n- return new LegacyNumericTokenStream(8);\n- }\n- };\n-\n- private static ThreadLocal<LegacyNumericTokenStream> tokenStream16 = new ThreadLocal<LegacyNumericTokenStream>() {\n- @Override\n- protected LegacyNumericTokenStream initialValue() {\n- return new LegacyNumericTokenStream(16);\n- }\n- };\n-\n- private static ThreadLocal<LegacyNumericTokenStream> tokenStreamMax = new ThreadLocal<LegacyNumericTokenStream>() {\n- @Override\n- protected LegacyNumericTokenStream initialValue() {\n- return new LegacyNumericTokenStream(Integer.MAX_VALUE);\n- }\n- };\n-\n- public CustomNumericField(Number value, MappedFieldType fieldType) {\n- super(fieldType.name(), fieldType);\n- if (value != null) {\n- this.fieldsData = value;\n- }\n- }\n-\n- protected LegacyNumericTokenStream getCachedStream() {\n- if (fieldType().numericPrecisionStep() == 4) {\n- return tokenStream4.get();\n- } else if (fieldType().numericPrecisionStep() == 8) {\n- return tokenStream8.get();\n- } else if (fieldType().numericPrecisionStep() == 16) {\n- return tokenStream16.get();\n- } else if (fieldType().numericPrecisionStep() == Integer.MAX_VALUE) {\n- return tokenStreamMax.get();\n- }\n- return tokenStream.get();\n- }\n-\n- @Override\n- public String stringValue() {\n- return null;\n- }\n-\n- @Override\n- public Reader readerValue() {\n- return null;\n- }\n-\n- public abstract String numericAsString();\n- }\n-\n- public static abstract class CustomNumericDocValuesField implements IndexableField {\n-\n- public static final FieldType TYPE = new FieldType();\n- static {\n- TYPE.setDocValuesType(DocValuesType.BINARY);\n- TYPE.freeze();\n- }\n-\n- 
private final String name;\n-\n- public CustomNumericDocValuesField(String name) {\n- this.name = name;\n- }\n-\n- @Override\n- public String name() {\n- return name;\n- }\n-\n- @Override\n- public IndexableFieldType fieldType() {\n- return TYPE;\n- }\n-\n- @Override\n- public float boost() {\n- return 1f;\n- }\n-\n- @Override\n- public String stringValue() {\n- return null;\n- }\n-\n- @Override\n- public Reader readerValue() {\n- return null;\n- }\n-\n- @Override\n- public Number numericValue() {\n- return null;\n- }\n-\n- @Override\n- public TokenStream tokenStream(Analyzer analyzer, TokenStream reuse) {\n- return null;\n- }\n-\n- }\n-\n @Override\n protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, Params params) throws IOException {\n super.doXContentBody(builder, includeDefaults, params);\n@@ -424,5 +914,10 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults,\n if (includeDefaults || coerce.explicit()) {\n builder.field(\"coerce\", coerce.value());\n }\n+ if (includeInAll != null) {\n+ builder.field(\"include_in_all\", includeInAll);\n+ } else if (includeDefaults) {\n+ builder.field(\"include_in_all\", false);\n+ }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/NumberFieldMapper.java", "status": "modified" }, { "diff": "@@ -22,9 +22,13 @@\n import org.apache.lucene.document.Field;\n import org.apache.lucene.document.SortedSetDocValuesField;\n import org.apache.lucene.index.IndexOptions;\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.MultiTermQuery;\n import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.RegexpQuery;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.Version;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.logging.DeprecationLogger;\n import org.elasticsearch.common.logging.ESLogger;\n@@ -44,6 +48,7 @@\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext;\n import org.elasticsearch.index.mapper.internal.AllFieldMapper;\n+import org.elasticsearch.index.query.QueryShardContext;\n \n import java.io.IOException;\n import java.util.Arrays;\n@@ -55,7 +60,6 @@\n import java.util.Set;\n \n import static org.apache.lucene.index.IndexOptions.NONE;\n-import static org.elasticsearch.index.mapper.core.TypeParsers.parseMultiField;\n import static org.elasticsearch.index.mapper.core.TypeParsers.parseTextField;\n \n public class StringFieldMapper extends FieldMapper implements AllFieldMapper.IncludeInAll {\n@@ -470,6 +474,15 @@ public IndexFieldData.Builder fielddataBuilder() {\n + \"use significant memory.\");\n }\n }\n+\n+ @Override\n+ public Query regexpQuery(String value, int flags, int maxDeterminizedStates, @Nullable MultiTermQuery.RewriteMethod method, @Nullable QueryShardContext context) {\n+ RegexpQuery query = new RegexpQuery(new Term(name(), indexedValueForSearch(value)), flags, maxDeterminizedStates);\n+ if (method != null) {\n+ query.setRewriteMethod(method);\n+ }\n+ return query;\n+ }\n }\n \n private Boolean includeInAll;", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/StringFieldMapper.java", "status": "modified" }, { "diff": "@@ -21,7 +21,11 @@\n \n import org.apache.lucene.document.Field;\n import org.apache.lucene.index.IndexOptions;\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.MultiTermQuery;\n import 
org.apache.lucene.search.Query;\n+import org.apache.lucene.search.RegexpQuery;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -36,6 +40,7 @@\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext;\n import org.elasticsearch.index.mapper.internal.AllFieldMapper;\n+import org.elasticsearch.index.query.QueryShardContext;\n \n import java.io.IOException;\n import java.util.Iterator;\n@@ -295,6 +300,16 @@ public IndexFieldData.Builder fielddataBuilder() {\n }\n return new PagedBytesIndexFieldData.Builder(fielddataMinFrequency, fielddataMaxFrequency, fielddataMinSegmentSize);\n }\n+\n+ @Override\n+ public Query regexpQuery(String value, int flags, int maxDeterminizedStates,\n+ @Nullable MultiTermQuery.RewriteMethod method, @Nullable QueryShardContext context) {\n+ RegexpQuery query = new RegexpQuery(new Term(name(), indexedValueForSearch(value)), flags, maxDeterminizedStates);\n+ if (method != null) {\n+ query.setRewriteMethod(method);\n+ }\n+ return query;\n+ }\n }\n \n private Boolean includeInAll;", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/TextFieldMapper.java", "status": "modified" }, { "diff": "@@ -23,7 +23,8 @@\n import org.apache.lucene.analysis.TokenStream;\n import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;\n import org.apache.lucene.document.Field;\n-import org.elasticsearch.common.Explicit;\n+import org.apache.lucene.index.IndexOptions;\n+import org.elasticsearch.Version;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -33,33 +34,31 @@\n import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext;\n-import org.elasticsearch.index.mapper.core.StringFieldMapper.ValueAndBoost;\n \n import java.io.IOException;\n import java.util.Iterator;\n import java.util.List;\n import java.util.Map;\n \n-import static org.apache.lucene.index.IndexOptions.NONE;\n import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeIntegerValue;\n-import static org.elasticsearch.index.mapper.core.TypeParsers.parseNumberField;\n+import static org.elasticsearch.index.mapper.core.TypeParsers.parseField;\n \n /**\n * A {@link FieldMapper} that takes a string and writes a count of the tokens in that string\n- * to the index. In most ways the mapper acts just like an {@link IntegerFieldMapper}.\n+ * to the index. 
In most ways the mapper acts just like an {@link LegacyIntegerFieldMapper}.\n */\n-public class TokenCountFieldMapper extends IntegerFieldMapper {\n+public class TokenCountFieldMapper extends FieldMapper {\n public static final String CONTENT_TYPE = \"token_count\";\n \n- public static class Defaults extends IntegerFieldMapper.Defaults {\n-\n+ public static class Defaults {\n+ public static final MappedFieldType FIELD_TYPE = new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.INTEGER);\n }\n \n- public static class Builder extends NumberFieldMapper.Builder<Builder, TokenCountFieldMapper> {\n+ public static class Builder extends FieldMapper.Builder<Builder, TokenCountFieldMapper> {\n private NamedAnalyzer analyzer;\n \n public Builder(String name) {\n- super(name, Defaults.FIELD_TYPE, Defaults.PRECISION_STEP_32_BIT);\n+ super(name, Defaults.FIELD_TYPE, Defaults.FIELD_TYPE);\n builder = this;\n }\n \n@@ -75,22 +74,18 @@ public NamedAnalyzer analyzer() {\n @Override\n public TokenCountFieldMapper build(BuilderContext context) {\n setupFieldType(context);\n- TokenCountFieldMapper fieldMapper = new TokenCountFieldMapper(name, fieldType, defaultFieldType,\n- ignoreMalformed(context), coerce(context), context.indexSettings(),\n- analyzer, multiFieldsBuilder.build(this, context), copyTo);\n- return (TokenCountFieldMapper) fieldMapper.includeInAll(includeInAll);\n- }\n-\n- @Override\n- protected int maxPrecisionStep() {\n- return 32;\n+ return new TokenCountFieldMapper(name, fieldType, defaultFieldType,\n+ context.indexSettings(), analyzer, multiFieldsBuilder.build(this, context), copyTo);\n }\n }\n \n public static class TypeParser implements Mapper.TypeParser {\n @Override\n @SuppressWarnings(\"unchecked\")\n public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n+ if (parserContext.indexVersionCreated().before(Version.V_5_0_0)) {\n+ return new LegacyTokenCountFieldMapper.TypeParser().parse(name, node, parserContext);\n+ }\n TokenCountFieldMapper.Builder builder = new TokenCountFieldMapper.Builder(name);\n for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {\n Map.Entry<String, Object> entry = iterator.next();\n@@ -108,7 +103,7 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n iterator.remove();\n }\n }\n- parseNumberField(builder, name, node, parserContext);\n+ parseField(builder, name, node, parserContext);\n if (builder.analyzer() == null) {\n throw new MapperParsingException(\"Analyzer must be set for field [\" + name + \"] but wasn't.\");\n }\n@@ -118,28 +113,32 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n \n private NamedAnalyzer analyzer;\n \n- protected TokenCountFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, Explicit<Boolean> ignoreMalformed,\n- Explicit<Boolean> coerce, Settings indexSettings, NamedAnalyzer analyzer, MultiFields multiFields, CopyTo copyTo) {\n- super(simpleName, fieldType, defaultFieldType, ignoreMalformed, coerce, indexSettings, multiFields, copyTo);\n+ protected TokenCountFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, \n+ Settings indexSettings, NamedAnalyzer analyzer, MultiFields multiFields, CopyTo copyTo) {\n+ super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo);\n this.analyzer = analyzer;\n }\n \n @Override\n protected void 
parseCreateField(ParseContext context, List<Field> fields) throws IOException {\n- ValueAndBoost valueAndBoost = StringFieldMapper.parseCreateFieldForString(context, null /* Out null value is an int so we convert*/, fieldType().boost());\n- if (valueAndBoost.value() == null && fieldType().nullValue() == null) {\n- return;\n+ final String value;\n+ if (context.externalValueSet()) {\n+ value = context.externalValue().toString();\n+ } else {\n+ value = context.parser().textOrNull();\n }\n \n- if (fieldType().indexOptions() != NONE || fieldType().stored() || fieldType().hasDocValues()) {\n- int count;\n- if (valueAndBoost.value() == null) {\n- count = fieldType().nullValue();\n- } else {\n- count = countPositions(analyzer, simpleName(), valueAndBoost.value());\n- }\n- addIntegerFields(context, fields, count, valueAndBoost.boost());\n+ final int tokenCount;\n+ if (value == null) {\n+ tokenCount = (Integer) fieldType().nullValue();\n+ } else {\n+ tokenCount = countPositions(analyzer, name(), value);\n }\n+\n+ boolean indexed = fieldType().indexOptions() != IndexOptions.NONE;\n+ boolean docValued = fieldType().hasDocValues();\n+ boolean stored = fieldType().stored();\n+ fields.addAll(NumberFieldMapper.NumberType.INTEGER.createFields(fieldType().name(), tokenCount, indexed, docValued, stored));\n }\n \n /**\n@@ -186,7 +185,6 @@ protected void doMerge(Mapper mergeWith, boolean updateAllTypes) {\n @Override\n protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, Params params) throws IOException {\n super.doXContentBody(builder, includeDefaults, params);\n-\n builder.field(\"analyzer\", analyzer());\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/TokenCountFieldMapper.java", "status": "modified" }, { "diff": "@@ -81,7 +81,8 @@ public static boolean nodeBooleanValue(String name, Object node, Mapper.TypePars\n }\n }\n \n- public static void parseNumberField(NumberFieldMapper.Builder builder, String name, Map<String, Object> numberNode, Mapper.TypeParser.ParserContext parserContext) {\n+ @Deprecated // for legacy ints only\n+ public static void parseNumberField(LegacyNumberFieldMapper.Builder builder, String name, Map<String, Object> numberNode, Mapper.TypeParser.ParserContext parserContext) {\n parseField(builder, name, numberNode, parserContext);\n for (Iterator<Map.Entry<String, Object>> iterator = numberNode.entrySet().iterator(); iterator.hasNext();) {\n Map.Entry<String, Object> entry = iterator.next();", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/TypeParsers.java", "status": "modified" }, { "diff": "@@ -43,8 +43,9 @@\n import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext;\n-import org.elasticsearch.index.mapper.core.DoubleFieldMapper;\n+import org.elasticsearch.index.mapper.core.LegacyDoubleFieldMapper;\n import org.elasticsearch.index.mapper.core.KeywordFieldMapper;\n+import org.elasticsearch.index.mapper.core.LegacyNumberFieldMapper;\n import org.elasticsearch.index.mapper.core.NumberFieldMapper;\n import org.elasticsearch.index.mapper.object.ArrayValueMapperParser;\n import org.elasticsearch.search.DocValueFormat;\n@@ -145,25 +146,32 @@ protected Explicit<Boolean> ignoreMalformed(BuilderContext context) {\n }\n \n public abstract Y build(BuilderContext context, String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType,\n- Settings indexSettings, DoubleFieldMapper latMapper, 
DoubleFieldMapper lonMapper,\n+ Settings indexSettings, FieldMapper latMapper, FieldMapper lonMapper,\n KeywordFieldMapper geoHashMapper, MultiFields multiFields, Explicit<Boolean> ignoreMalformed, CopyTo copyTo);\n \n public Y build(Mapper.BuilderContext context) {\n GeoPointFieldType geoPointFieldType = (GeoPointFieldType)fieldType;\n \n- DoubleFieldMapper latMapper = null;\n- DoubleFieldMapper lonMapper = null;\n+ FieldMapper latMapper = null;\n+ FieldMapper lonMapper = null;\n \n context.path().add(name);\n if (enableLatLon) {\n- NumberFieldMapper.Builder<?, ?> latMapperBuilder = new DoubleFieldMapper.Builder(Names.LAT).includeInAll(false);\n- NumberFieldMapper.Builder<?, ?> lonMapperBuilder = new DoubleFieldMapper.Builder(Names.LON).includeInAll(false);\n- if (precisionStep != null) {\n- latMapperBuilder.precisionStep(precisionStep);\n- lonMapperBuilder.precisionStep(precisionStep);\n+ if (context.indexCreatedVersion().before(Version.V_5_0_0)) {\n+ LegacyNumberFieldMapper.Builder<?, ?> latMapperBuilder = new LegacyDoubleFieldMapper.Builder(Names.LAT).includeInAll(false);\n+ LegacyNumberFieldMapper.Builder<?, ?> lonMapperBuilder = new LegacyDoubleFieldMapper.Builder(Names.LON).includeInAll(false);\n+ if (precisionStep != null) {\n+ latMapperBuilder.precisionStep(precisionStep);\n+ lonMapperBuilder.precisionStep(precisionStep);\n+ }\n+ latMapper = (LegacyDoubleFieldMapper) latMapperBuilder.includeInAll(false).store(fieldType.stored()).docValues(false).build(context);\n+ lonMapper = (LegacyDoubleFieldMapper) lonMapperBuilder.includeInAll(false).store(fieldType.stored()).docValues(false).build(context);\n+ } else {\n+ latMapper = new NumberFieldMapper.Builder(Names.LAT, NumberFieldMapper.NumberType.DOUBLE)\n+ .includeInAll(false).store(fieldType.stored()).docValues(false).build(context);\n+ lonMapper = new NumberFieldMapper.Builder(Names.LON, NumberFieldMapper.NumberType.DOUBLE)\n+ .includeInAll(false).store(fieldType.stored()).docValues(false).build(context);\n }\n- latMapper = (DoubleFieldMapper) latMapperBuilder.includeInAll(false).store(fieldType.stored()).docValues(false).build(context);\n- lonMapper = (DoubleFieldMapper) lonMapperBuilder.includeInAll(false).store(fieldType.stored()).docValues(false).build(context);\n geoPointFieldType.setLatLonEnabled(latMapper.fieldType(), lonMapper.fieldType());\n }\n KeywordFieldMapper geoHashMapper = null;\n@@ -361,16 +369,16 @@ public DocValueFormat docValueFormat(@Nullable String format, DateTimeZone timeZ\n }\n }\n \n- protected DoubleFieldMapper latMapper;\n+ protected FieldMapper latMapper;\n \n- protected DoubleFieldMapper lonMapper;\n+ protected FieldMapper lonMapper;\n \n protected KeywordFieldMapper geoHashMapper;\n \n protected Explicit<Boolean> ignoreMalformed;\n \n protected BaseGeoPointFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, Settings indexSettings,\n- DoubleFieldMapper latMapper, DoubleFieldMapper lonMapper, KeywordFieldMapper geoHashMapper,\n+ FieldMapper latMapper, FieldMapper lonMapper, KeywordFieldMapper geoHashMapper,\n MultiFields multiFields, Explicit<Boolean> ignoreMalformed, CopyTo copyTo) {\n super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo);\n this.latMapper = latMapper;\n@@ -542,8 +550,8 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults,\n public FieldMapper updateFieldType(Map<String, MappedFieldType> fullNameToFieldType) {\n BaseGeoPointFieldMapper updated = (BaseGeoPointFieldMapper) 
super.updateFieldType(fullNameToFieldType);\n KeywordFieldMapper geoUpdated = geoHashMapper == null ? null : (KeywordFieldMapper) geoHashMapper.updateFieldType(fullNameToFieldType);\n- DoubleFieldMapper latUpdated = latMapper == null ? null : (DoubleFieldMapper) latMapper.updateFieldType(fullNameToFieldType);\n- DoubleFieldMapper lonUpdated = lonMapper == null ? null : (DoubleFieldMapper) lonMapper.updateFieldType(fullNameToFieldType);\n+ FieldMapper latUpdated = latMapper == null ? null : latMapper.updateFieldType(fullNameToFieldType);\n+ FieldMapper lonUpdated = lonMapper == null ? null : lonMapper.updateFieldType(fullNameToFieldType);\n if (updated == this\n && geoUpdated == geoHashMapper\n && latUpdated == latMapper", "filename": "core/src/main/java/org/elasticsearch/index/mapper/geo/BaseGeoPointFieldMapper.java", "status": "modified" }, { "diff": "@@ -28,11 +28,11 @@\n import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext;\n-import org.elasticsearch.index.mapper.core.DoubleFieldMapper;\n import org.elasticsearch.index.mapper.core.KeywordFieldMapper;\n \n import java.io.IOException;\n@@ -78,8 +78,8 @@ public Builder(String name) {\n \n @Override\n public GeoPointFieldMapper build(BuilderContext context, String simpleName, MappedFieldType fieldType,\n- MappedFieldType defaultFieldType, Settings indexSettings, DoubleFieldMapper latMapper,\n- DoubleFieldMapper lonMapper, KeywordFieldMapper geoHashMapper, MultiFields multiFields, Explicit<Boolean> ignoreMalformed,\n+ MappedFieldType defaultFieldType, Settings indexSettings, FieldMapper latMapper,\n+ FieldMapper lonMapper, KeywordFieldMapper geoHashMapper, MultiFields multiFields, Explicit<Boolean> ignoreMalformed,\n CopyTo copyTo) {\n fieldType.setTokenized(false);\n if (context.indexCreatedVersion().before(Version.V_2_3_0)) {\n@@ -109,7 +109,7 @@ public static class TypeParser extends BaseGeoPointFieldMapper.TypeParser {\n }\n \n public GeoPointFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, Settings indexSettings,\n- DoubleFieldMapper latMapper, DoubleFieldMapper lonMapper,\n+ FieldMapper latMapper, FieldMapper lonMapper,\n KeywordFieldMapper geoHashMapper, MultiFields multiFields, Explicit<Boolean> ignoreMalformed, CopyTo copyTo) {\n super(simpleName, fieldType, defaultFieldType, indexSettings, latMapper, lonMapper, geoHashMapper, multiFields,\n ignoreMalformed, copyTo);", "filename": "core/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java", "status": "modified" }, { "diff": "@@ -38,9 +38,9 @@\n import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext;\n-import org.elasticsearch.index.mapper.core.DoubleFieldMapper;\n+import org.elasticsearch.index.mapper.CustomDocValuesField;\n+import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.core.KeywordFieldMapper;\n-import org.elasticsearch.index.mapper.core.NumberFieldMapper.CustomNumericDocValuesField;\n import org.elasticsearch.index.mapper.object.ArrayValueMapperParser;\n \n import java.io.IOException;\n@@ -108,8 +108,8 @@ 
protected Explicit<Boolean> coerce(BuilderContext context) {\n \n @Override\n public GeoPointFieldMapperLegacy build(BuilderContext context, String simpleName, MappedFieldType fieldType,\n- MappedFieldType defaultFieldType, Settings indexSettings, DoubleFieldMapper latMapper,\n- DoubleFieldMapper lonMapper, KeywordFieldMapper geoHashMapper, MultiFields multiFields, Explicit<Boolean> ignoreMalformed,\n+ MappedFieldType defaultFieldType, Settings indexSettings, FieldMapper latMapper,\n+ FieldMapper lonMapper, KeywordFieldMapper geoHashMapper, MultiFields multiFields, Explicit<Boolean> ignoreMalformed,\n CopyTo copyTo) {\n fieldType.setTokenized(false);\n setupFieldType(context);\n@@ -266,7 +266,7 @@ public GeoPoint decode(long latBits, long lonBits, GeoPoint out) {\n protected Explicit<Boolean> coerce;\n \n public GeoPointFieldMapperLegacy(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, Settings indexSettings,\n- DoubleFieldMapper latMapper, DoubleFieldMapper lonMapper,\n+ FieldMapper latMapper, FieldMapper lonMapper,\n KeywordFieldMapper geoHashMapper, MultiFields multiFields, Explicit<Boolean> ignoreMalformed,\n Explicit<Boolean> coerce, CopyTo copyTo) {\n super(simpleName, fieldType, defaultFieldType, indexSettings, latMapper, lonMapper, geoHashMapper, multiFields,\n@@ -335,7 +335,7 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults,\n }\n }\n \n- public static class CustomGeoPointDocValuesField extends CustomNumericDocValuesField {\n+ public static class CustomGeoPointDocValuesField extends CustomDocValuesField {\n \n private final ObjectHashSet<GeoPoint> points;\n ", "filename": "core/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapperLegacy.java", "status": "modified" }, { "diff": "@@ -34,7 +34,7 @@\n import org.elasticsearch.index.mapper.MetadataFieldMapper;\n import org.elasticsearch.index.mapper.ParseContext;\n import org.elasticsearch.index.mapper.SourceToParse;\n-import org.elasticsearch.index.mapper.core.LongFieldMapper;\n+import org.elasticsearch.index.mapper.core.LegacyLongFieldMapper;\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n@@ -51,7 +51,7 @@ public class TTLFieldMapper extends MetadataFieldMapper {\n public static final String NAME = \"_ttl\";\n public static final String CONTENT_TYPE = \"_ttl\";\n \n- public static class Defaults extends LongFieldMapper.Defaults {\n+ public static class Defaults extends LegacyLongFieldMapper.Defaults {\n public static final String NAME = TTLFieldMapper.CONTENT_TYPE;\n \n public static final TTLFieldType TTL_FIELD_TYPE = new TTLFieldType();\n@@ -127,7 +127,7 @@ public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fi\n }\n }\n \n- public static final class TTLFieldType extends LongFieldMapper.LongFieldType {\n+ public static final class TTLFieldType extends LegacyLongFieldMapper.LongFieldType {\n \n public TTLFieldType() {\n }\n@@ -226,7 +226,7 @@ protected void parseCreateField(ParseContext context, List<Field> fields) throws\n throw new AlreadyExpiredException(context.index(), context.type(), context.id(), timestamp, ttl, now);\n }\n // the expiration timestamp (timestamp + ttl) is set as field\n- fields.add(new LongFieldMapper.CustomLongNumericField(expire, fieldType()));\n+ fields.add(new LegacyLongFieldMapper.CustomLongNumericField(expire, fieldType()));\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/internal/TTLFieldMapper.java", "status": "modified" }, { 
"diff": "@@ -34,8 +34,8 @@\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.MetadataFieldMapper;\n import org.elasticsearch.index.mapper.ParseContext;\n-import org.elasticsearch.index.mapper.core.DateFieldMapper;\n-import org.elasticsearch.index.mapper.core.LongFieldMapper;\n+import org.elasticsearch.index.mapper.core.LegacyDateFieldMapper;\n+import org.elasticsearch.index.mapper.core.LegacyLongFieldMapper;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -52,7 +52,7 @@ public class TimestampFieldMapper extends MetadataFieldMapper {\n public static final String CONTENT_TYPE = \"_timestamp\";\n public static final String DEFAULT_DATE_TIME_FORMAT = \"epoch_millis||strictDateOptionalTime\";\n \n- public static class Defaults extends DateFieldMapper.Defaults {\n+ public static class Defaults extends LegacyDateFieldMapper.Defaults {\n public static final String NAME = \"_timestamp\";\n \n // TODO: this should be removed\n@@ -86,8 +86,8 @@ public Builder(MappedFieldType existing, Settings settings) {\n }\n \n @Override\n- public DateFieldMapper.DateFieldType fieldType() {\n- return (DateFieldMapper.DateFieldType)fieldType;\n+ public LegacyDateFieldMapper.DateFieldType fieldType() {\n+ return (LegacyDateFieldMapper.DateFieldType)fieldType;\n }\n \n public Builder enabled(EnabledAttributeMapper enabledState) {\n@@ -169,7 +169,7 @@ public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fi\n }\n }\n \n- public static final class TimestampFieldType extends DateFieldMapper.DateFieldType {\n+ public static final class TimestampFieldType extends LegacyDateFieldMapper.DateFieldType {\n \n public TimestampFieldType() {}\n \n@@ -242,7 +242,7 @@ protected void parseCreateField(ParseContext context, List<Field> fields) throws\n if (enabledState.enabled) {\n long timestamp = context.sourceToParse().timestamp();\n if (fieldType().indexOptions() != IndexOptions.NONE || fieldType().stored()) {\n- fields.add(new LongFieldMapper.CustomLongNumericField(timestamp, fieldType()));\n+ fields.add(new LegacyLongFieldMapper.CustomLongNumericField(timestamp, fieldType()));\n }\n if (fieldType().hasDocValues()) {\n fields.add(new NumericDocValuesField(fieldType().name(), timestamp));", "filename": "core/src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java", "status": "modified" }, { "diff": "@@ -19,154 +19,130 @@\n \n package org.elasticsearch.index.mapper.ip;\n \n-import org.apache.lucene.analysis.LegacyNumericTokenStream;\n import org.apache.lucene.document.Field;\n+import org.apache.lucene.document.InetAddressPoint;\n+import org.apache.lucene.document.SortedSetDocValuesField;\n+import org.apache.lucene.document.StoredField;\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.IndexReader;\n-import org.apache.lucene.index.Terms;\n-import org.apache.lucene.search.LegacyNumericRangeQuery;\n+import org.apache.lucene.index.PointValues;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.util.BytesRef;\n-import org.apache.lucene.util.BytesRefBuilder;\n-import org.apache.lucene.util.LegacyNumericUtils;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.fieldstats.FieldStats;\n import org.elasticsearch.common.Explicit;\n import org.elasticsearch.common.Nullable;\n-import org.elasticsearch.common.Numbers;\n-import org.elasticsearch.common.Strings;\n-import org.elasticsearch.common.network.Cidrs;\n import 
org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.Fuzziness;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.index.analysis.NamedAnalyzer;\n import org.elasticsearch.index.fielddata.IndexFieldData;\n-import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType;\n import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData;\n+import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext;\n-import org.elasticsearch.index.mapper.core.LongFieldMapper;\n-import org.elasticsearch.index.mapper.core.LongFieldMapper.CustomLongNumericField;\n-import org.elasticsearch.index.mapper.core.NumberFieldMapper;\n+import org.elasticsearch.index.mapper.core.LegacyNumberFieldMapper.Defaults;\n+import org.elasticsearch.index.mapper.core.TypeParsers;\n+import org.elasticsearch.index.mapper.internal.AllFieldMapper;\n+import org.elasticsearch.index.mapper.ip.LegacyIpFieldMapper;\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.search.DocValueFormat;\n-import org.elasticsearch.search.aggregations.bucket.range.ipv4.InternalIPv4Range;\n import org.joda.time.DateTimeZone;\n \n import java.io.IOException;\n+import java.net.InetAddress;\n+import java.util.Arrays;\n import java.util.Iterator;\n import java.util.List;\n import java.util.Map;\n-import java.util.regex.Pattern;\n \n-import static org.elasticsearch.index.mapper.core.TypeParsers.parseNumberField;\n-\n-/**\n- *\n- */\n-public class IpFieldMapper extends NumberFieldMapper {\n+/** A {@link FieldMapper} for ip addresses. 
*/\n+public class IpFieldMapper extends FieldMapper implements AllFieldMapper.IncludeInAll {\n \n public static final String CONTENT_TYPE = \"ip\";\n- public static final long MAX_IP = 4294967296L;\n-\n- public static String longToIp(long longIp) {\n- int octet3 = (int) ((longIp >> 24) % 256);\n- int octet2 = (int) ((longIp >> 16) % 256);\n- int octet1 = (int) ((longIp >> 8) % 256);\n- int octet0 = (int) ((longIp) % 256);\n- return octet3 + \".\" + octet2 + \".\" + octet1 + \".\" + octet0;\n- }\n-\n- private static final Pattern pattern = Pattern.compile(\"\\\\.\");\n-\n- public static long ipToLong(String ip) {\n- try {\n- if (!InetAddresses.isInetAddress(ip)) {\n- throw new IllegalArgumentException(\"failed to parse ip [\" + ip + \"], not a valid ip address\");\n- }\n- String[] octets = pattern.split(ip);\n- if (octets.length != 4) {\n- throw new IllegalArgumentException(\"failed to parse ip [\" + ip + \"], not a valid ipv4 address (4 dots)\");\n- }\n- return (Long.parseLong(octets[0]) << 24) + (Integer.parseInt(octets[1]) << 16) +\n- (Integer.parseInt(octets[2]) << 8) + Integer.parseInt(octets[3]);\n- } catch (Exception e) {\n- if (e instanceof IllegalArgumentException) {\n- throw (IllegalArgumentException) e;\n- }\n- throw new IllegalArgumentException(\"failed to parse ip [\" + ip + \"]\", e);\n- }\n- }\n \n- public static class Defaults extends NumberFieldMapper.Defaults {\n- public static final String NULL_VALUE = null;\n+ public static class Builder extends FieldMapper.Builder<Builder, IpFieldMapper> {\n \n- public static final MappedFieldType FIELD_TYPE = new IpFieldType();\n+ private Boolean ignoreMalformed;\n \n- static {\n- FIELD_TYPE.freeze();\n+ public Builder(String name) {\n+ super(name, new IpFieldType(), new IpFieldType());\n+ builder = this;\n }\n- }\n \n- public static class Builder extends NumberFieldMapper.Builder<Builder, IpFieldMapper> {\n-\n- protected String nullValue = Defaults.NULL_VALUE;\n+ public Builder ignoreMalformed(boolean ignoreMalformed) {\n+ this.ignoreMalformed = ignoreMalformed;\n+ return builder;\n+ }\n \n- public Builder(String name) {\n- super(name, Defaults.FIELD_TYPE, Defaults.PRECISION_STEP_64_BIT);\n- builder = this;\n+ protected Explicit<Boolean> ignoreMalformed(BuilderContext context) {\n+ if (ignoreMalformed != null) {\n+ return new Explicit<>(ignoreMalformed, true);\n+ }\n+ if (context.indexSettings() != null) {\n+ return new Explicit<>(IGNORE_MALFORMED_SETTING.get(context.indexSettings()), false);\n+ }\n+ return Defaults.IGNORE_MALFORMED;\n }\n \n @Override\n public IpFieldMapper build(BuilderContext context) {\n setupFieldType(context);\n- IpFieldMapper fieldMapper = new IpFieldMapper(name, fieldType, defaultFieldType, ignoreMalformed(context), coerce(context),\n+ IpFieldMapper fieldMapper = new IpFieldMapper(name, fieldType, defaultFieldType, ignoreMalformed(context),\n context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo);\n return (IpFieldMapper) fieldMapper.includeInAll(includeInAll);\n }\n-\n- @Override\n- protected int maxPrecisionStep() {\n- return 64;\n- }\n }\n \n public static class TypeParser implements Mapper.TypeParser {\n+\n+ public TypeParser() {\n+ }\n+\n @Override\n- public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n- IpFieldMapper.Builder builder = new Builder(name);\n- parseNumberField(builder, name, node, parserContext);\n+ public Mapper.Builder<?,?> parse(String name, Map<String, Object> node, ParserContext 
parserContext) throws MapperParsingException {\n+ if (parserContext.indexVersionCreated().before(Version.V_5_0_0)) {\n+ return new LegacyIpFieldMapper.TypeParser().parse(name, node, parserContext);\n+ }\n+ Builder builder = new Builder(name);\n+ TypeParsers.parseField(builder, name, node, parserContext);\n for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {\n Map.Entry<String, Object> entry = iterator.next();\n- String propName = Strings.toUnderscoreCase(entry.getKey());\n+ String propName = entry.getKey();\n Object propNode = entry.getValue();\n if (propName.equals(\"null_value\")) {\n if (propNode == null) {\n throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n }\n- builder.nullValue(propNode.toString());\n+ builder.nullValue(InetAddresses.forString(propNode.toString()));\n+ iterator.remove();\n+ } else if (propName.equals(\"ignore_malformed\")) {\n+ builder.ignoreMalformed(TypeParsers.nodeBooleanValue(\"ignore_malformed\", propNode, parserContext));\n+ iterator.remove();\n+ } else if (TypeParsers.parseMultiField(builder, name, parserContext, propName, propNode)) {\n iterator.remove();\n }\n }\n return builder;\n }\n }\n \n- public static final class IpFieldType extends LongFieldMapper.LongFieldType {\n+ public static final class IpFieldType extends MappedFieldType {\n \n- public IpFieldType() {\n+ IpFieldType() {\n+ super();\n+ setTokenized(false);\n+ setHasDocValues(true);\n }\n \n- protected IpFieldType(IpFieldType ref) {\n- super(ref);\n+ IpFieldType(IpFieldType other) {\n+ super(other);\n }\n \n @Override\n- public NumberFieldType clone() {\n+ public MappedFieldType clone() {\n return new IpFieldType(this);\n }\n \n@@ -175,95 +151,100 @@ public String typeName() {\n return CONTENT_TYPE;\n }\n \n- /**\n- * IPs should return as a string.\n- */\n- @Override\n- public Object valueForSearch(Object value) {\n- Long val = (Long) value;\n- if (val == null) {\n- return null;\n+ private InetAddress parse(Object value) {\n+ if (value instanceof InetAddress) {\n+ return (InetAddress) value;\n+ } else {\n+ if (value instanceof BytesRef) {\n+ value = ((BytesRef) value).utf8ToString();\n+ }\n+ return InetAddresses.forString(value.toString());\n }\n- return longToIp(val);\n- }\n-\n- @Override\n- public BytesRef indexedValueForSearch(Object value) {\n- BytesRefBuilder bytesRef = new BytesRefBuilder();\n- LegacyNumericUtils.longToPrefixCoded(parseValue(value), 0, bytesRef); // 0 because of exact match\n- return bytesRef.get();\n }\n \n @Override\n public Query termQuery(Object value, @Nullable QueryShardContext context) {\n- if (value != null) {\n- String term;\n+ if (value instanceof InetAddress) {\n+ return InetAddressPoint.newExactQuery(name(), (InetAddress) value);\n+ } else {\n if (value instanceof BytesRef) {\n- term = ((BytesRef) value).utf8ToString();\n- } else {\n- term = value.toString();\n+ value = ((BytesRef) value).utf8ToString();\n }\n- long[] fromTo;\n- // assume that the term is either a CIDR range or the\n- // term is a single IPv4 address; if either of these\n- // assumptions is wrong, the CIDR parsing will fail\n- // anyway, and that is okay\n+ String term = value.toString();\n if (term.contains(\"/\")) {\n- // treat the term as if it is in CIDR notation\n- fromTo = Cidrs.cidrMaskToMinMax(term);\n- } else {\n- // treat the term as if it is a single IPv4, and\n- // apply a CIDR mask equivalent to the host route\n- fromTo = Cidrs.cidrMaskToMinMax(term + \"/32\");\n- }\n- if (fromTo != null) {\n- return 
rangeQuery(fromTo[0] == 0 ? null : fromTo[0],\n- fromTo[1] == InternalIPv4Range.MAX_IP ? null : fromTo[1], true, false);\n+ String[] fields = term.split(\"/\");\n+ if (fields.length == 2) {\n+ InetAddress address = InetAddresses.forString(fields[0]);\n+ int prefixLength = Integer.parseInt(fields[1]);\n+ return InetAddressPoint.newPrefixQuery(name(), address, prefixLength);\n+ } else {\n+ throw new IllegalArgumentException(\"Expected [ip/prefix] but was [\" + term + \"]\");\n+ }\n }\n+ InetAddress address = InetAddresses.forString(term);\n+ return InetAddressPoint.newExactQuery(name(), address);\n }\n- return super.termQuery(value, context);\n }\n \n @Override\n public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper) {\n- return LegacyNumericRangeQuery.newLongRange(name(), numericPrecisionStep(),\n- lowerTerm == null ? null : parseValue(lowerTerm),\n- upperTerm == null ? null : parseValue(upperTerm),\n- includeLower, includeUpper);\n+ if (includeLower == false || includeUpper == false) {\n+ // TODO: should we drop range support entirely\n+ throw new IllegalArgumentException(\"range on ip addresses only supports inclusive bounds\");\n+ }\n+ InetAddress lower;\n+ if (lowerTerm == null) {\n+ lower = InetAddressPoint.decode(new byte[16]);\n+ } else {\n+ lower = parse(lowerTerm);\n+ }\n+\n+ InetAddress upper;\n+ if (upperTerm == null) {\n+ byte[] bytes = new byte[16];\n+ Arrays.fill(bytes, (byte) 255); \n+ upper = InetAddressPoint.decode(bytes);\n+ } else {\n+ upper = parse(upperTerm);\n+ }\n+\n+ return InetAddressPoint.newRangeQuery(name(), lower, upper);\n }\n \n @Override\n public Query fuzzyQuery(Object value, Fuzziness fuzziness, int prefixLength, int maxExpansions, boolean transpositions) {\n- long iValue = parseValue(value);\n- long iSim;\n- try {\n- iSim = ipToLong(fuzziness.asString());\n- } catch (IllegalArgumentException e) {\n- iSim = fuzziness.asLong();\n- }\n- return LegacyNumericRangeQuery.newLongRange(name(), numericPrecisionStep(),\n- iValue - iSim,\n- iValue + iSim,\n- true, true);\n+ InetAddress base = parse(value);\n+ int mask = fuzziness.asInt();\n+ return InetAddressPoint.newPrefixQuery(name(), base, mask);\n }\n \n @Override\n- public FieldStats stats(IndexReader reader) throws IOException {\n- int maxDoc = reader.maxDoc();\n- Terms terms = org.apache.lucene.index.MultiFields.getTerms(reader, name());\n- if (terms == null) {\n+ public FieldStats.Ip stats(IndexReader reader) throws IOException {\n+ String field = name();\n+ long size = PointValues.size(reader, field);\n+ if (size == 0) {\n return null;\n }\n- long minValue = LegacyNumericUtils.getMinLong(terms);\n- long maxValue = LegacyNumericUtils.getMaxLong(terms);\n- return new FieldStats.Ip(maxDoc, terms.getDocCount(), terms.getSumDocFreq(),\n- terms.getSumTotalTermFreq(), minValue, maxValue);\n+ int docCount = PointValues.getDocCount(reader, field);\n+ byte[] min = PointValues.getMinPackedValue(reader, field);\n+ byte[] max = PointValues.getMaxPackedValue(reader, field);\n+ return new FieldStats.Ip(reader.maxDoc(),docCount, -1L, size,\n+ InetAddressPoint.decode(min),\n+ InetAddressPoint.decode(max));\n }\n \n @Override\n public IndexFieldData.Builder fielddataBuilder() {\n failIfNoDocValues();\n- return new DocValuesIndexFieldData.Builder().numericType(NumericType.LONG);\n+ return new DocValuesIndexFieldData.Builder();\n+ }\n+\n+ @Override\n+ public Object valueForSearch(Object value) {\n+ if (value == null) {\n+ return null;\n+ }\n+ return 
DocValueFormat.IP.format((BytesRef) value);\n }\n \n @Override\n@@ -279,79 +260,139 @@ public DocValueFormat docValueFormat(@Nullable String format, DateTimeZone timeZ\n }\n }\n \n- protected IpFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType,\n- Explicit<Boolean> ignoreMalformed, Explicit<Boolean> coerce,\n- Settings indexSettings, MultiFields multiFields, CopyTo copyTo) {\n- super(simpleName, fieldType, defaultFieldType, ignoreMalformed, coerce, indexSettings, multiFields, copyTo);\n+ private Boolean includeInAll;\n+\n+ private Explicit<Boolean> ignoreMalformed;\n+\n+ private IpFieldMapper(\n+ String simpleName,\n+ MappedFieldType fieldType,\n+ MappedFieldType defaultFieldType,\n+ Explicit<Boolean> ignoreMalformed,\n+ Settings indexSettings,\n+ MultiFields multiFields,\n+ CopyTo copyTo) {\n+ super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo);\n+ this.ignoreMalformed = ignoreMalformed;\n }\n \n- private static long parseValue(Object value) {\n- if (value instanceof Number) {\n- return ((Number) value).longValue();\n+ @Override\n+ public IpFieldType fieldType() {\n+ return (IpFieldType) super.fieldType();\n+ }\n+\n+ @Override\n+ protected String contentType() {\n+ return fieldType.typeName();\n+ }\n+\n+ @Override\n+ protected IpFieldMapper clone() {\n+ return (IpFieldMapper) super.clone();\n+ }\n+\n+ @Override\n+ public Mapper includeInAll(Boolean includeInAll) {\n+ if (includeInAll != null) {\n+ IpFieldMapper clone = clone();\n+ clone.includeInAll = includeInAll;\n+ return clone;\n+ } else {\n+ return this;\n }\n- if (value instanceof BytesRef) {\n- return ipToLong(((BytesRef) value).utf8ToString());\n+ }\n+\n+ @Override\n+ public Mapper includeInAllIfNotSet(Boolean includeInAll) {\n+ if (includeInAll != null && this.includeInAll == null) {\n+ IpFieldMapper clone = clone();\n+ clone.includeInAll = includeInAll;\n+ return clone;\n+ } else {\n+ return this;\n }\n- return ipToLong(value.toString());\n }\n \n @Override\n- protected void innerParseCreateField(ParseContext context, List<Field> fields) throws IOException {\n- String ipAsString;\n+ public Mapper unsetIncludeInAll() {\n+ if (includeInAll != null) {\n+ IpFieldMapper clone = clone();\n+ clone.includeInAll = null;\n+ return clone;\n+ } else {\n+ return this;\n+ }\n+ }\n+\n+ @Override\n+ protected void parseCreateField(ParseContext context, List<Field> fields) throws IOException {\n+ Object addressAsObject;\n if (context.externalValueSet()) {\n- ipAsString = (String) context.externalValue();\n- if (ipAsString == null) {\n- ipAsString = fieldType().nullValueAsString();\n- }\n+ addressAsObject = context.externalValue();\n } else {\n- if (context.parser().currentToken() == XContentParser.Token.VALUE_NULL) {\n- ipAsString = fieldType().nullValueAsString();\n- } else {\n- ipAsString = context.parser().text();\n- }\n+ addressAsObject = context.parser().text();\n+ }\n+\n+ if (addressAsObject == null) {\n+ addressAsObject = fieldType().nullValue();\n }\n \n- if (ipAsString == null) {\n+ if (addressAsObject == null) {\n return;\n }\n+\n+ String addressAsString = addressAsObject.toString();\n+ InetAddress address;\n+ if (addressAsObject instanceof InetAddress) {\n+ address = (InetAddress) addressAsObject;\n+ } else {\n+ try {\n+ address = InetAddresses.forString(addressAsString);\n+ } catch (IllegalArgumentException e) {\n+ if (ignoreMalformed.value()) {\n+ return;\n+ } else {\n+ throw e;\n+ }\n+ }\n+ }\n+\n if (context.includeInAll(includeInAll, this)) {\n- 
context.allEntries().addText(fieldType().name(), ipAsString, fieldType().boost());\n+ context.allEntries().addText(fieldType().name(), addressAsString, fieldType().boost());\n }\n \n- final long value = ipToLong(ipAsString);\n- if (fieldType().indexOptions() != IndexOptions.NONE || fieldType().stored()) {\n- CustomLongNumericField field = new CustomLongNumericField(value, fieldType());\n- if (fieldType.boost() != 1f && Version.indexCreated(context.indexSettings()).before(Version.V_5_0_0_alpha1)) {\n- field.setBoost(fieldType().boost());\n- }\n- fields.add(field);\n+ if (fieldType().indexOptions() != IndexOptions.NONE) {\n+ fields.add(new InetAddressPoint(fieldType().name(), address));\n }\n if (fieldType().hasDocValues()) {\n- addDocValue(context, fields, value);\n+ fields.add(new SortedSetDocValuesField(fieldType().name(), new BytesRef(InetAddressPoint.encode(address))));\n+ }\n+ if (fieldType().stored()) {\n+ fields.add(new StoredField(fieldType().name(), new BytesRef(InetAddressPoint.encode(address))));\n }\n }\n \n @Override\n- protected String contentType() {\n- return CONTENT_TYPE;\n+ protected void doMerge(Mapper mergeWith, boolean updateAllTypes) {\n+ super.doMerge(mergeWith, updateAllTypes);\n+ IpFieldMapper other = (IpFieldMapper) mergeWith;\n+ this.includeInAll = other.includeInAll;\n+ if (other.ignoreMalformed.explicit()) {\n+ this.ignoreMalformed = other.ignoreMalformed;\n+ }\n }\n \n @Override\n protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, Params params) throws IOException {\n super.doXContentBody(builder, includeDefaults, params);\n \n- if (includeDefaults || fieldType().numericPrecisionStep() != Defaults.PRECISION_STEP_64_BIT) {\n- builder.field(\"precision_step\", fieldType().numericPrecisionStep());\n- }\n- if (includeDefaults || fieldType().nullValueAsString() != null) {\n- builder.field(\"null_value\", fieldType().nullValueAsString());\n+ if (includeDefaults || ignoreMalformed.explicit()) {\n+ builder.field(\"ignore_malformed\", ignoreMalformed.value());\n }\n if (includeInAll != null) {\n builder.field(\"include_in_all\", includeInAll);\n } else if (includeDefaults) {\n builder.field(\"include_in_all\", false);\n }\n-\n }\n-\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/ip/IpFieldMapper.java", "status": "modified" } ] }
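The pr_details record above rewrites `IpFieldMapper` from the legacy long-encoded numeric fields onto Lucene's point-based IP support (`InetAddressPoint`). As a hedged illustration only, not code from the PR, the sketch below mirrors the new `termQuery` branch shown in the diff, mapping an `ip` or `ip/prefix` term onto an exact or CIDR-prefix query; the class name and `main` harness are hypothetical, and `InetAddress.getByName` stands in for Elasticsearch's `InetAddresses.forString` helper used in the diff.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

import org.apache.lucene.document.InetAddressPoint;
import org.apache.lucene.search.Query;

// Hypothetical sketch mirroring the termQuery(...) branch in the diff above.
public class IpTermQuerySketch {

    /** "a.b.c.d" becomes an exact query; "a.b.c.d/n" becomes a CIDR prefix query. */
    static Query ipTermQuery(String field, String term) throws UnknownHostException {
        if (term.contains("/")) {
            String[] parts = term.split("/");
            if (parts.length != 2) {
                throw new IllegalArgumentException("Expected [ip/prefix] but was [" + term + "]");
            }
            InetAddress address = InetAddress.getByName(parts[0]); // literal IP, no DNS lookup
            int prefixLength = Integer.parseInt(parts[1]);
            return InetAddressPoint.newPrefixQuery(field, address, prefixLength);
        }
        return InetAddressPoint.newExactQuery(field, InetAddress.getByName(term));
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(ipTermQuery("ip", "192.168.1.7"));   // exact match
        System.out.println(ipTermQuery("ip", "2001:db8::/32")); // prefix (CIDR) match
    }
}
```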
{ "body": "I'm trying to get `inner_hits` to work on an a two level deep `has_child` query, but the grandchild hits just seem to return an empty array. If I specify the inner hits at the top level I get the results I want, but its rather too slow. Using inner hits on the has child query seems much faster, but doesn't return the grandchild hits.\n\nBelow is an example query:\n\n``` json\n{\n \"from\" : 0,\n \"size\" : 25,\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"filtered\" : {\n \"query\" : {\n \"multi_match\" : {\n \"query\" : \"asia\",\n \"fields\" : [ \"_search\" ],\n \"operator\" : \"AND\",\n \"analyzer\" : \"library_synonyms\",\n \"fuzziness\" : \"1\"\n }\n },\n \"filter\" : {\n \"and\" : {\n \"filters\" : [ {\n \"terms\" : {\n \"range\" : [ \"Global\" ]\n }\n } ]\n }\n }\n }\n },\n \"child_type\" : \"document-ref\",\n \"inner_hits\" : {\n \"name\" : \"document-ref\"\n }\n }\n },\n \"child_type\" : \"class\",\n \"inner_hits\" : {\n \"size\" : 1000,\n \"_source\" : false,\n \"fielddata_fields\" : [ \"class\" ],\n \"name\" : \"class\"\n }\n }\n },\n \"fielddata_fields\" : [ \"name\" ]\n}\n```\n\nThe `document-ref` inner hits just always returns an empty array. Should this work (and, if so, any ideas why it isn't?), or is it beyond the means of what inner hits can currently do?\n", "comments": [ { "body": "I've created a simpler test case for this, and it seems a little clearer that this doesn't currently work.\n\n**Add mappings:**\n\n```\ncurl -XPOST 'http://localhost:9200/grandchildren' -d '{\n \"mappings\" : {\n \"parent\" : {\n \"properties\" : {\n \"parent-name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\"\n }\n }\n },\n \"child\" : {\n \"_parent\" : {\n \"type\" : \"parent\"\n },\n \"_routing\" : {\n \"required\" : true\n },\n \"properties\" : {\n \"parent-name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\"\n }\n }\n },\n \"grandchild\" : {\n \"_parent\" : {\n \"type\" : \"child\"\n },\n \"_routing\" : {\n \"required\" : true\n },\n \"properties\" : {\n \"grandchild-name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\"\n }\n }\n }\n }\n}'\n```\n\n**Populate:**\n\n```\ncurl -XPOST 'http://localhost:9200/grandchildren/parent/parent' -d '{ \"parent-name\" : \"Parent\" }'\ncurl -XPOST 'http://localhost:9200/grandchildren/child/child?parent=parent&routing=parent' -d '{ \"child-name\" : \"Child\" }'\ncurl -XPOST 'http://localhost:9200/grandchildren/grandchild/grandchild?parent=child&routing=parent' -d '{ \"grandchild-name\" : \"Grandchild\" }'\n```\n\n**Query:**\n\n```\ncurl -XGET 'http://localhost:9200/grandchildren/_search?pretty' -d '{\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"match_all\" : {}\n },\n \"child_type\" : \"grandchild\",\n \"inner_hits\" : {\n \"name\" : \"grandchild\"\n }\n }\n },\n \"child_type\" : \"child\",\n \"inner_hits\" : {\n \"name\" : \"child\"\n }\n }\n }\n}'\n```\n\n**Result:**\n\n```\n{\n \"took\" : 2,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"parent\",\n \"_id\" : \"parent\",\n \"_score\" : 1.0,\n \"_source\":{ \"parent-name\" : \"Parent\" },\n \"inner_hits\" : {\n \"grandchild\" : {\n \"hits\" : {\n \"total\" : 0,\n \"max_score\" : null,\n \"hits\" : [ ]\n }\n },\n \"child\" : {\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n 
\"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"child\",\n \"_id\" : \"child\",\n \"_score\" : 1.0,\n \"_source\":{ \"child-name\" : \"Child\" }\n } ]\n }\n }\n }\n } ]\n }\n}\n```\n\nNot only is the grandchild hits empty, but they're also not nested within the child hits, so aren't going to give me what I want anyway. I'm not sure what the intended/expected behaviour would be here, but guess I need to try something else for now.\n", "created_at": "2015-05-12T13:52:39Z" }, { "body": "Above with inner hits at top-level:\n\n**Query:**\n\n```\ncurl -XGET 'http://localhost:9200/grandchildren/_search?pretty' -d '{\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"match_all\" : {}\n },\n \"child_type\" : \"grandchild\"\n }\n },\n \"child_type\" : \"child\"\n }\n },\n \"inner_hits\" : {\n \"child\" : {\n \"type\" : {\n \"child\" : {\n \"inner_hits\" : {\n \"child\" : {\n \"type\" : {\n \"grandchild\" : {}\n }\n }\n }\n }\n }\n }\n }\n}'\n```\n\n**Result:**\n\n```\n{\n \"took\" : 3,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"parent\",\n \"_id\" : \"parent\",\n \"_score\" : 1.0,\n \"_source\":{ \"parent-name\" : \"Parent\" },\n \"inner_hits\" : {\n \"child\" : {\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"child\",\n \"_id\" : \"child\",\n \"_score\" : 1.0,\n \"_source\":{ \"child-name\" : \"Child\" },\n \"inner_hits\" : {\n \"child\" : {\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"grandchild\",\n \"_id\" : \"grandchild\",\n \"_score\" : 1.0,\n \"_source\":{ \"grandchild-name\" : \"Grandchild\" }\n } ]\n }\n }\n }\n } ]\n }\n }\n }\n } ]\n }\n}\n```\n\nHere the grandchild inner hit is correctly found, and nested\n", "created_at": "2015-05-12T14:03:05Z" }, { "body": "[I'm using 1.5.2]\n", "created_at": "2015-05-12T14:12:57Z" }, { "body": "I managed to track down why this wasn't working. When parsing 'has child' queries, inner hits were always being added to the `parseContext`, and never as child inner hits to the parent inner hits.\n\nI've put together a very quick and dirty fix for this (https://github.com/lukens/elasticsearch/commit/fb22d622e7b24074b5f4fe3e26cffd4c38cff75b) which has got it working for my case, but I don't think is suitable for inclusion in a release.\n\nIssues I see with my with my fix:\n1. It's just messy, it doesn't really fix the issue, but just cleans up after it. Children still add their inner hits to the `parseContext`, the parent just removes them again afterwards, and adds them as children to its own inner hits.\n2. The parent removes them again afterwards by mutating a Map it gets from the current `SearchContext`'s `InnerHitsContext`. This is obviously bad and messy, and would be broken if `InnerHitsContext` was changed to return a copy of the map, rather than the map itself. Nasty dependencies between classes.\n3. I think a child can specify inner hits even if a parent doesn't. These would currently get lost. I'm not sure what the behaviour should be here, it should probably be considered an invalid query.\n4. If this was done properly, descendants at different levels could add inner hits with the same name, whereas currently this could cause issues. 
Maybe you shouldn't be able to have the same name at different levels, but if implemented correctly, there should be no need to enforce this. \n\nI considered submitting as a pull request, but felt it was far too rough and ready.\n", "created_at": "2015-05-13T12:44:35Z" }, { "body": "@lukens If you want nested inner hits then you need to use the top level inner hits, the inner_hits on a query doesn't support nesting. The fact that your grandparents inner hits is empty is clearly a bug, thanks for bringing this up!\n", "created_at": "2015-05-13T16:41:55Z" }, { "body": "Hi, it's grandchild, rather than grandparent, that isn't working. Though you also seem to be suggesting it shouldn't be. Either way, the change I committed shows that it can work, my change just isn't a very nice way to make it work.\n", "created_at": "2015-05-14T09:30:04Z" }, { "body": "Ah, or are you saying it should work, but just shouldn't be nested? I'm not really sure what the point of it would be if it wasn't nested, though that may just be because it doesn't fit my use case, and I can't think of a use case where it would be useful.\n\nI think it would be good if nesting did work, as that would still allow either use case, really.\n\nThe problem with top level inner_hits is that I have to apply the query once in the has_child query, and then again in the inner_hits query, which makes everything slower than it would otherwise need to be.\n", "created_at": "2015-05-14T09:36:15Z" }, { "body": "@lukens yes, I meant grandchild. The reason it is a bug is, because the inner_hits in your response shouldn't be empty.\n\nThe top level inner hits and inner hits defined on a query internally to ES is the same thing and either way of defining inner hits will yield the same performance in terms of query time. The nested inner hits support in the query dsl was left out to reduce complexity and most of the times there is just a single level relationship. Obviously that means for your use case that you need to use top level inner hits. \n\nMaybe the inner hits support in the query dsl should support multi level relationships too, but I think the parsing logic shouldn't be get super complex because this. I need to think more about this. Like you said if it the grandchild isn't nested its hits in the response, then it isn't very helpful.\n\nThe only overhead of top level inner hits is that queries are defined twice, so the request body gets larger. If you're concerned with that, you can consider using search templates, so that you don't you reduce the amount of data send to your cluster.\n", "created_at": "2015-05-14T10:20:30Z" }, { "body": "Isn't there also an overhead in running the query twice with the top level inner_hits, or does Elasticsearch do something clever so that it only gets run once? (and, if so, does it need to be specified again in the inner hits?)\n\nI've not yet come across search templates, are these compatible with highly dynamic searches?\n\nI'd like try and spend a bit more time on getting nesting working in a less hacky manner, but am under enormous pressure just to get a project completed at the moment. For the case when grandchildren aren't nested in the query, I guess the simple solution is to also not nest them in the response. 
I don't think this would overcomplicate the parsing too much.\n\nI have a fairly nice way to handle all this in my mind, just not the time to implement it at the moment.\n", "created_at": "2015-05-14T10:46:55Z" }, { "body": "OK, switching to top level hits doesn't seem to have affected performance, so I can work with that for now. It had seemed much slower before, but once I'd actually got inner_hits on the query working, that ended up just as slow, until I tweaked some other things.\n\nThe \"fix\" I've committed above is probably only really of any use as a reference if you want to go with nesting, as it doesn't solve the problem when not nested. I expect there's something in the inner workings of inner_hits that currently relies on them being nested.\n", "created_at": "2015-05-14T11:12:21Z" }, { "body": "> Isn't there also an overhead in running the query twice with the top level inner_hits, or does Elasticsearch do something clever so that it only gets run once? (and, if so, does it need to be specified again in the inner hits?)\n\ninner_hits runs as part of the fetch phase and always executes an additional search to fetch the inner hits. The search being executed is cheap. It only runs an a single shard and just runs a query that fetches top child docs that matches with `_parent:[parent_id]` (all docs associated with parent `parent_id`) and the inner query defined in the `has_child` query. This is a query that ES (actually Lucene) can execute relatively quickly. This mini search is executed for each hit being returned.\n\n> I've not yet come across search templates, are these compatible with highly dynamic searches?\n\nYes, the dynamic part of the search request can be templated.\n\n> The \"fix\" I've committed above is probably only really of any use as a reference if you want to go with nesting, as it doesn't solve the problem when not nested. I expect there's something in the inner workings of inner_hits that currently relies on them being nested.\n\nYes, the inner hits features relies on the fact that grandchild and child are nested. When using the top-level inner hits notation this works out, but when using inner_hits as part of the query dsl this doesn't work out, because grandchild inner hit definition isn't nested under the child inner hit definition.\n\nIn order to fix this properly the query dsl parsing logic should just support nested inner hits. I think the format doesn't need to change in order to support this. Just because the fact the two `has_child` queries are nested should be enough for automatically nest the two inner hit definitions.\n", "created_at": "2015-05-15T09:55:48Z" }, { "body": "Allso seeing this issue, in my case multi-level nested documents (as in https://github.com/elastic/elasticsearch/issues/13064). Would be great to get a solution to this.\n", "created_at": "2015-09-01T23:28:25Z" }, { "body": "Will this issue be fixed at 2.x? Really looking forward to see the nested inner hits in query dsl.\n", "created_at": "2015-11-02T11:08:31Z" }, { "body": "When will this be fixed, its a blocker for us !! 
\n", "created_at": "2015-11-26T13:49:40Z" }, { "body": "+1\n", "created_at": "2015-12-20T13:30:05Z" }, { "body": "+1\n", "created_at": "2016-01-09T11:29:34Z" }, { "body": "@martijnvg just pinging you about this as a reminder\n", "created_at": "2016-01-18T19:16:53Z" }, { "body": "+1\n", "created_at": "2016-02-01T13:21:45Z" }, { "body": "+1\n", "created_at": "2016-03-01T09:54:42Z" }, { "body": "Hi Everyone,\r\nwhere is document data ", "created_at": "2017-04-27T05:37:57Z" }, { "body": "It looks like this issue is still unsolved, at least in elasticsearch 7.1", "created_at": "2020-02-28T03:06:16Z" } ], "number": 11118, "title": "has_child and inner_hits for grandchild hit doesn't work" }
{ "body": "- Inner hits are now only provided and prepared in the constructors of the `nested`, `has_child` and `has_parent` queries. This will make fixing #11118 easier and then allow us to drop the top level inner hit syntax.\n- Also made `score_mode` a required constructor parameter. (`fromXContent(...)` method maintain their defaults)\n- Moved has_child's min_child/max_children validation from `doToQuery(...)` to a setter. (so we can fail sooner on the coordinating node)\n", "number": 17719, "review_comments": [ { "body": "QueryBuilders should always set the defaults since they can be constructed without ever passing through the parser (Java API)\n", "created_at": "2016-04-14T09:17:22Z" }, { "body": "QueryBuilders should always set the defaults since they can be constructed without ever passing through the parser (Java API)\n", "created_at": "2016-04-14T09:19:15Z" }, { "body": "QueryBuilders should always set the defaults since they can be constructed without ever passing through the parser (Java API)\n", "created_at": "2016-04-14T09:19:55Z" }, { "body": "- I think score_mode should be a constructor parameter, as it is important to always think about that.\n- I'll move inner hits to a setter.\n- I'll bring back the defaults for min and max kids.\n", "created_at": "2016-04-14T09:32:59Z" } ], "title": "Cleanup nested, has_child & has_parent query builders for inner hits construction" }
{ "commits": [ { "message": "Cleanup query builder for inner hits construction.\n\n* Inner hits can now only be provided and prepared via setter in the nested, has_child and has_parent query.\n* Also made `score_mode` a required constructor parameter.\n* Moved has_child's min_child/max_children validation from doToQuery(...) to a setter." } ], "files": [ { "diff": "@@ -272,4 +272,12 @@ public final QueryBuilder<?> rewrite(QueryRewriteContext queryShardContext) thro\n protected QueryBuilder<?> doRewrite(QueryRewriteContext queryShardContext) throws IOException {\n return this;\n }\n+\n+ // Like Objects.requireNotNull(...) but instead throws a IllegalArgumentException\n+ protected static <T> T requireValue(T value, String message) {\n+ if (value == null) {\n+ throw new IllegalArgumentException(message);\n+ }\n+ return value;\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/AbstractQueryBuilder.java", "status": "modified" }, { "diff": "@@ -45,7 +45,7 @@\n import java.util.Objects;\n \n /**\n- * A query builder for <tt>has_child</tt> queries.\n+ * A query builder for <tt>has_child</tt> query.\n */\n public class HasChildQueryBuilder extends AbstractQueryBuilder<HasChildQueryBuilder> {\n \n@@ -68,10 +68,6 @@ public class HasChildQueryBuilder extends AbstractQueryBuilder<HasChildQueryBuil\n * The default value for ignore_unmapped.\n */\n public static final boolean DEFAULT_IGNORE_UNMAPPED = false;\n- /*\n- * The default score mode that is used to combine score coming from multiple parent documents.\n- */\n- public static final ScoreMode DEFAULT_SCORE_MODE = ScoreMode.None;\n \n private static final ParseField QUERY_FIELD = new ParseField(\"query\", \"filter\");\n private static final ParseField TYPE_FIELD = new ParseField(\"type\", \"child_type\");\n@@ -82,42 +78,25 @@ public class HasChildQueryBuilder extends AbstractQueryBuilder<HasChildQueryBuil\n private static final ParseField IGNORE_UNMAPPED_FIELD = new ParseField(\"ignore_unmapped\");\n \n private final QueryBuilder<?> query;\n-\n private final String type;\n-\n- private ScoreMode scoreMode = DEFAULT_SCORE_MODE;\n-\n+ private final ScoreMode scoreMode;\n+ private InnerHitBuilder innerHitBuilder;\n private int minChildren = DEFAULT_MIN_CHILDREN;\n-\n private int maxChildren = DEFAULT_MAX_CHILDREN;\n-\n- private InnerHitBuilder innerHitBuilder;\n-\n private boolean ignoreUnmapped = false;\n \n+ public HasChildQueryBuilder(String type, QueryBuilder<?> query, ScoreMode scoreMode) {\n+ this(type, query, DEFAULT_MIN_CHILDREN, DEFAULT_MAX_CHILDREN, scoreMode, null);\n+ }\n \n- public HasChildQueryBuilder(String type, QueryBuilder<?> query, int maxChildren, int minChildren, ScoreMode scoreMode,\n+ private HasChildQueryBuilder(String type, QueryBuilder<?> query, int minChildren, int maxChildren, ScoreMode scoreMode,\n InnerHitBuilder innerHitBuilder) {\n- this(type, query);\n- scoreMode(scoreMode);\n- this.maxChildren = maxChildren;\n- this.minChildren = minChildren;\n+ this.type = requireValue(type, \"[\" + NAME + \"] requires 'type' field\");\n+ this.query = requireValue(query, \"[\" + NAME + \"] requires 'query' field\");\n+ this.scoreMode = requireValue(scoreMode, \"[\" + NAME + \"] requires 'score_mode' field\");\n this.innerHitBuilder = innerHitBuilder;\n- if (this.innerHitBuilder != null) {\n- this.innerHitBuilder.setParentChildType(type);\n- this.innerHitBuilder.setQuery(query);\n- }\n- }\n-\n- public HasChildQueryBuilder(String type, QueryBuilder<?> query) {\n- if (type == null) {\n- throw new 
IllegalArgumentException(\"[\" + NAME + \"] requires 'type' field\");\n- }\n- if (query == null) {\n- throw new IllegalArgumentException(\"[\" + NAME + \"] requires 'query' field\");\n- }\n- this.type = type;\n- this.query = query;\n+ this.minChildren = minChildren;\n+ this.maxChildren = maxChildren;\n }\n \n /**\n@@ -137,64 +116,47 @@ public HasChildQueryBuilder(StreamInput in) throws IOException {\n @Override\n protected void doWriteTo(StreamOutput out) throws IOException {\n out.writeString(type);\n- out.writeInt(minChildren());\n- out.writeInt(maxChildren());\n+ out.writeInt(minChildren);\n+ out.writeInt(maxChildren);\n out.writeVInt(scoreMode.ordinal());\n out.writeQuery(query);\n out.writeOptionalWriteable(innerHitBuilder);\n out.writeBoolean(ignoreUnmapped);\n }\n \n /**\n- * Defines how the scores from the matching child documents are mapped into the parent document.\n+ * Defines the minimum number of children that are required to match for the parent to be considered a match and\n+ * the maximum number of children that are required to match for the parent to be considered a match.\n */\n- public HasChildQueryBuilder scoreMode(ScoreMode scoreMode) {\n- if (scoreMode == null) {\n- throw new IllegalArgumentException(\"[\" + NAME + \"] requires 'score_mode' field\");\n- }\n- this.scoreMode = scoreMode;\n- return this;\n- }\n-\n- /**\n- * Defines the minimum number of children that are required to match for the parent to be considered a match.\n- */\n- public HasChildQueryBuilder minChildren(int minChildren) {\n+ public HasChildQueryBuilder minMaxChildren(int minChildren, int maxChildren) {\n if (minChildren < 0) {\n- throw new IllegalArgumentException(\"[\" + NAME + \"] requires non-negative 'min_children' field\");\n+ throw new IllegalArgumentException(\"[\" + NAME + \"] requires non-negative 'min_children' field\");\n }\n- this.minChildren = minChildren;\n- return this;\n- }\n-\n- /**\n- * Defines the maximum number of children that are required to match for the parent to be considered a match.\n- */\n- public HasChildQueryBuilder maxChildren(int maxChildren) {\n if (maxChildren < 0) {\n- throw new IllegalArgumentException(\"[\" + NAME + \"] requires non-negative 'max_children' field\");\n+ throw new IllegalArgumentException(\"[\" + NAME + \"] requires non-negative 'max_children' field\");\n }\n+ if (maxChildren < minChildren) {\n+ throw new IllegalArgumentException(\"[\" + NAME + \"] 'max_children' is less than 'min_children'\");\n+ }\n+ this.minChildren = minChildren;\n this.maxChildren = maxChildren;\n return this;\n }\n \n- /**\n- * Sets the query name for the filter that can be used when searching for matched_filters per hit.\n- */\n- public HasChildQueryBuilder innerHit(InnerHitBuilder innerHitBuilder) {\n- this.innerHitBuilder = Objects.requireNonNull(innerHitBuilder);\n- this.innerHitBuilder.setParentChildType(type);\n- this.innerHitBuilder.setQuery(query);\n- return this;\n- }\n-\n /**\n * Returns inner hit definition in the scope of this query and reusing the defined type and query.\n */\n public InnerHitBuilder innerHit() {\n return innerHitBuilder;\n }\n \n+ public HasChildQueryBuilder innerHit(InnerHitBuilder innerHit) {\n+ innerHit.setParentChildType(type);\n+ innerHit.setQuery(query);\n+ this.innerHitBuilder = innerHit;\n+ return this;\n+ }\n+\n /**\n * Returns the children query to execute.\n */\n@@ -270,7 +232,7 @@ public static HasChildQueryBuilder fromXContent(QueryParseContext parseContext)\n XContentParser parser = parseContext.parser();\n float boost = 
AbstractQueryBuilder.DEFAULT_BOOST;\n String childType = null;\n- ScoreMode scoreMode = HasChildQueryBuilder.DEFAULT_SCORE_MODE;\n+ ScoreMode scoreMode = ScoreMode.None;\n int minChildren = HasChildQueryBuilder.DEFAULT_MIN_CHILDREN;\n int maxChildren = HasChildQueryBuilder.DEFAULT_MAX_CHILDREN;\n boolean ignoreUnmapped = DEFAULT_IGNORE_UNMAPPED;\n@@ -312,7 +274,7 @@ public static HasChildQueryBuilder fromXContent(QueryParseContext parseContext)\n }\n }\n }\n- HasChildQueryBuilder hasChildQueryBuilder = new HasChildQueryBuilder(childType, iqb, maxChildren, minChildren,\n+ HasChildQueryBuilder hasChildQueryBuilder = new HasChildQueryBuilder(childType, iqb, minChildren, maxChildren,\n scoreMode, innerHitBuilder);\n hasChildQueryBuilder.queryName(queryName);\n hasChildQueryBuilder.boost(boost);\n@@ -386,10 +348,6 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n \"[\" + NAME + \"] Type [\" + type + \"] points to a non existent parent type [\" + parentType + \"]\");\n }\n \n- if (maxChildren > 0 && maxChildren < minChildren) {\n- throw new QueryShardException(context, \"[\" + NAME + \"] 'max_children' is less than 'min_children'\");\n- }\n-\n // wrap the query with type query\n innerQuery = Queries.filtered(innerQuery, childDocMapper.typeFilter());\n \n@@ -502,7 +460,7 @@ protected boolean doEquals(HasChildQueryBuilder that) {\n && Objects.equals(scoreMode, that.scoreMode)\n && Objects.equals(minChildren, that.minChildren)\n && Objects.equals(maxChildren, that.maxChildren)\n- && Objects.equals(innerHitBuilder, that.innerHitBuilder) \n+ && Objects.equals(innerHitBuilder, that.innerHitBuilder)\n && Objects.equals(ignoreUnmapped, that.ignoreUnmapped);\n }\n \n@@ -515,12 +473,7 @@ protected int doHashCode() {\n protected QueryBuilder<?> doRewrite(QueryRewriteContext queryRewriteContext) throws IOException {\n QueryBuilder<?> rewrite = query.rewrite(queryRewriteContext);\n if (rewrite != query) {\n- HasChildQueryBuilder hasChildQueryBuilder = new HasChildQueryBuilder(type, rewrite);\n- hasChildQueryBuilder.minChildren(minChildren);\n- hasChildQueryBuilder.maxChildren(maxChildren);\n- hasChildQueryBuilder.scoreMode(scoreMode);\n- hasChildQueryBuilder.innerHit(innerHitBuilder);\n- return hasChildQueryBuilder;\n+ return new HasChildQueryBuilder(type, rewrite, minChildren, minChildren, scoreMode, innerHitBuilder);\n }\n return this;\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/HasChildQueryBuilder.java", "status": "modified" }, { "diff": "@@ -48,8 +48,6 @@ public class HasParentQueryBuilder extends AbstractQueryBuilder<HasParentQueryBu\n public static final String NAME = \"has_parent\";\n public static final ParseField QUERY_NAME_FIELD = new ParseField(NAME);\n \n- public static final boolean DEFAULT_SCORE = false;\n-\n /**\n * The default value for ignore_unmapped.\n */\n@@ -64,33 +62,19 @@ public class HasParentQueryBuilder extends AbstractQueryBuilder<HasParentQueryBu\n \n private final QueryBuilder<?> query;\n private final String type;\n- private boolean score = DEFAULT_SCORE;\n+ private final boolean score;\n private InnerHitBuilder innerHit;\n private boolean ignoreUnmapped = false;\n \n- /**\n- * @param type The parent type\n- * @param query The query that will be matched with parent documents\n- */\n- public HasParentQueryBuilder(String type, QueryBuilder<?> query) {\n- if (type == null) {\n- throw new IllegalArgumentException(\"[\" + NAME + \"] requires 'parent_type' field\");\n- }\n- if (query == null) {\n- throw new 
IllegalArgumentException(\"[\" + NAME + \"] requires 'query' field\");\n- }\n- this.type = type;\n- this.query = query;\n+ public HasParentQueryBuilder(String type, QueryBuilder<?> query, boolean score) {\n+ this(type, query, score, null);\n }\n \n- public HasParentQueryBuilder(String type, QueryBuilder<?> query, boolean score, InnerHitBuilder innerHit) {\n- this(type, query);\n+ private HasParentQueryBuilder(String type, QueryBuilder<?> query, boolean score, InnerHitBuilder innerHit) {\n+ this.type = requireValue(type, \"[\" + NAME + \"] requires 'type' field\");\n+ this.query = requireValue(query, \"[\" + NAME + \"] requires 'query' field\");\n this.score = score;\n this.innerHit = innerHit;\n- if (this.innerHit != null) {\n- this.innerHit.setParentChildType(type);\n- this.innerHit.setQuery(query);\n- }\n }\n \n /**\n@@ -114,24 +98,6 @@ protected void doWriteTo(StreamOutput out) throws IOException {\n out.writeBoolean(ignoreUnmapped);\n }\n \n- /**\n- * Defines if the parent score is mapped into the child documents.\n- */\n- public HasParentQueryBuilder score(boolean score) {\n- this.score = score;\n- return this;\n- }\n-\n- /**\n- * Sets inner hit definition in the scope of this query and reusing the defined type and query.\n- */\n- public HasParentQueryBuilder innerHit(InnerHitBuilder innerHit) {\n- this.innerHit = Objects.requireNonNull(innerHit);\n- this.innerHit.setParentChildType(type);\n- this.innerHit.setQuery(query);\n- return this;\n- }\n-\n /**\n * Returns the query to execute.\n */\n@@ -160,6 +126,13 @@ public InnerHitBuilder innerHit() {\n return innerHit;\n }\n \n+ public HasParentQueryBuilder innerHit(InnerHitBuilder innerHit) {\n+ innerHit.setParentChildType(type);\n+ innerHit.setQuery(query);\n+ this.innerHit = innerHit;\n+ return this;\n+ }\n+\n /**\n * Sets whether the query builder should ignore unmapped types (and run a\n * {@link MatchNoDocsQuery} in place of this query) or throw an exception if\n@@ -264,7 +237,7 @@ public static HasParentQueryBuilder fromXContent(QueryParseContext parseContext)\n XContentParser parser = parseContext.parser();\n float boost = AbstractQueryBuilder.DEFAULT_BOOST;\n String parentType = null;\n- boolean score = HasParentQueryBuilder.DEFAULT_SCORE;\n+ boolean score = false;\n String queryName = null;\n InnerHitBuilder innerHits = null;\n boolean ignoreUnmapped = DEFAULT_IGNORE_UNMAPPED;\n@@ -323,7 +296,7 @@ protected boolean doEquals(HasParentQueryBuilder that) {\n return Objects.equals(query, that.query)\n && Objects.equals(type, that.type)\n && Objects.equals(score, that.score)\n- && Objects.equals(innerHit, that.innerHit) \n+ && Objects.equals(innerHit, that.innerHit)\n && Objects.equals(ignoreUnmapped, that.ignoreUnmapped);\n }\n \n@@ -336,10 +309,7 @@ protected int doHashCode() {\n protected QueryBuilder<?> doRewrite(QueryRewriteContext queryShardContext) throws IOException {\n QueryBuilder<?> rewrite = query.rewrite(queryShardContext);\n if (rewrite != query) {\n- HasParentQueryBuilder hasParentQueryBuilder = new HasParentQueryBuilder(type, rewrite);\n- hasParentQueryBuilder.score(score);\n- hasParentQueryBuilder.innerHit(innerHit);\n- return hasParentQueryBuilder;\n+ return new HasParentQueryBuilder(type, rewrite, score, innerHit);\n }\n return this;\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/HasParentQueryBuilder.java", "status": "modified" }, { "diff": "@@ -44,12 +44,6 @@ public class NestedQueryBuilder extends AbstractQueryBuilder<NestedQueryBuilder>\n */\n public static final String NAME = 
\"nested\";\n public static final ParseField QUERY_NAME_FIELD = new ParseField(NAME);\n-\n- /**\n- * The default score mode for nested queries.\n- */\n- public static final ScoreMode DEFAULT_SCORE_MODE = ScoreMode.Avg;\n-\n /**\n * The default value for ignore_unmapped.\n */\n@@ -61,35 +55,21 @@ public class NestedQueryBuilder extends AbstractQueryBuilder<NestedQueryBuilder>\n private static final ParseField INNER_HITS_FIELD = new ParseField(\"inner_hits\");\n private static final ParseField IGNORE_UNMAPPED_FIELD = new ParseField(\"ignore_unmapped\");\n \n- private final QueryBuilder<?> query;\n-\n private final String path;\n-\n- private ScoreMode scoreMode = DEFAULT_SCORE_MODE;\n-\n+ private final ScoreMode scoreMode;\n+ private final QueryBuilder<?> query;\n private InnerHitBuilder innerHitBuilder;\n-\n private boolean ignoreUnmapped = DEFAULT_IGNORE_UNMAPPED;\n \n- public NestedQueryBuilder(String path, QueryBuilder<?> query) {\n- if (path == null) {\n- throw new IllegalArgumentException(\"[\" + NAME + \"] requires 'path' field\");\n- }\n- if (query == null) {\n- throw new IllegalArgumentException(\"[\" + NAME + \"] requires 'query' field\");\n- }\n- this.path = path;\n- this.query = query;\n+ public NestedQueryBuilder(String path, QueryBuilder query, ScoreMode scoreMode) {\n+ this(path, query, scoreMode, null);\n }\n \n- public NestedQueryBuilder(String path, QueryBuilder query, ScoreMode scoreMode, InnerHitBuilder innerHitBuilder) {\n- this(path, query);\n- scoreMode(scoreMode);\n+ private NestedQueryBuilder(String path, QueryBuilder query, ScoreMode scoreMode, InnerHitBuilder innerHitBuilder) {\n+ this.path = requireValue(path, \"[\" + NAME + \"] requires 'path' field\");\n+ this.query = requireValue(query, \"[\" + NAME + \"] requires 'query' field\");\n+ this.scoreMode = requireValue(scoreMode, \"[\" + NAME + \"] requires 'score_mode' field\");\n this.innerHitBuilder = innerHitBuilder;\n- if (this.innerHitBuilder != null) {\n- this.innerHitBuilder.setNestedPath(path);\n- this.innerHitBuilder.setQuery(query);\n- }\n }\n \n /**\n@@ -114,26 +94,34 @@ protected void doWriteTo(StreamOutput out) throws IOException {\n }\n \n /**\n- * The score mode how the scores from the matching child documents are mapped into the nested parent document.\n+ * Returns the nested query to execute.\n */\n- public NestedQueryBuilder scoreMode(ScoreMode scoreMode) {\n- if (scoreMode == null) {\n- throw new IllegalArgumentException(\"[\" + NAME + \"] requires 'score_mode' field\");\n- }\n- this.scoreMode = scoreMode;\n- return this;\n+ public QueryBuilder query() {\n+ return query;\n }\n \n /**\n- * Sets inner hit definition in the scope of this nested query and reusing the defined path and query.\n+ * Returns inner hit definition in the scope of this query and reusing the defined type and query.\n */\n+\n+ public InnerHitBuilder innerHit() {\n+ return innerHitBuilder;\n+ }\n+\n public NestedQueryBuilder innerHit(InnerHitBuilder innerHit) {\n- this.innerHitBuilder = Objects.requireNonNull(innerHit);\n- this.innerHitBuilder.setNestedPath(path);\n- this.innerHitBuilder.setQuery(query);\n+ innerHit.setNestedPath(path);\n+ innerHit.setQuery(query);\n+ this.innerHitBuilder = innerHit;\n return this;\n }\n \n+ /**\n+ * Returns how the scores from the matching child documents are mapped into the nested parent document.\n+ */\n+ public ScoreMode scoreMode() {\n+ return scoreMode;\n+ }\n+\n /**\n * Sets whether the query builder should ignore unmapped paths (and run a\n * {@link MatchNoDocsQuery} in place of this 
query) or throw an exception if\n@@ -153,27 +141,6 @@ public boolean ignoreUnmapped() {\n return ignoreUnmapped;\n }\n \n- /**\n- * Returns the nested query to execute.\n- */\n- public QueryBuilder query() {\n- return query;\n- }\n-\n- /**\n- * Returns inner hit definition in the scope of this query and reusing the defined type and query.\n- */\n- public InnerHitBuilder innerHit() {\n- return innerHitBuilder;\n- }\n-\n- /**\n- * Returns how the scores from the matching child documents are mapped into the nested parent document.\n- */\n- public ScoreMode scoreMode() {\n- return scoreMode;\n- }\n-\n @Override\n protected void doXContent(XContentBuilder builder, Params params) throws IOException {\n builder.startObject(NAME);\n@@ -194,7 +161,7 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep\n public static NestedQueryBuilder fromXContent(QueryParseContext parseContext) throws IOException {\n XContentParser parser = parseContext.parser();\n float boost = AbstractQueryBuilder.DEFAULT_BOOST;\n- ScoreMode scoreMode = NestedQueryBuilder.DEFAULT_SCORE_MODE;\n+ ScoreMode scoreMode = ScoreMode.Avg;\n String queryName = null;\n QueryBuilder query = null;\n String path = null;\n@@ -243,7 +210,7 @@ protected boolean doEquals(NestedQueryBuilder that) {\n return Objects.equals(query, that.query)\n && Objects.equals(path, that.path)\n && Objects.equals(scoreMode, that.scoreMode)\n- && Objects.equals(innerHitBuilder, that.innerHitBuilder) \n+ && Objects.equals(innerHitBuilder, that.innerHitBuilder)\n && Objects.equals(ignoreUnmapped, that.ignoreUnmapped);\n }\n \n@@ -294,7 +261,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n protected QueryBuilder<?> doRewrite(QueryRewriteContext queryRewriteContext) throws IOException {\n QueryBuilder rewrite = query.rewrite(queryRewriteContext);\n if (rewrite != query) {\n- return new NestedQueryBuilder(path, rewrite).scoreMode(scoreMode).innerHit(innerHitBuilder);\n+ return new NestedQueryBuilder(path, rewrite, scoreMode, innerHitBuilder);\n }\n return this;\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/NestedQueryBuilder.java", "status": "modified" }, { "diff": "@@ -19,13 +19,15 @@\n \n package org.elasticsearch.index.query;\n \n+import org.apache.lucene.search.join.ScoreMode;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.geo.ShapeRelation;\n import org.elasticsearch.common.geo.builders.ShapeBuilder;\n import org.elasticsearch.index.query.MoreLikeThisQueryBuilder.Item;\n import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder;\n import org.elasticsearch.index.query.functionscore.ScoreFunctionBuilder;\n+import org.elasticsearch.index.query.support.InnerHitBuilder;\n import org.elasticsearch.index.search.MatchQuery;\n import org.elasticsearch.indices.TermsLookup;\n import org.elasticsearch.script.Script;\n@@ -483,25 +485,27 @@ public static MoreLikeThisQueryBuilder moreLikeThisQuery(Item[] likeItems) {\n }\n \n /**\n- * Constructs a new NON scoring child query, with the child type and the query to run on the child documents. The\n+ * Constructs a new has_child query, with the child type and the query to run on the child documents. 
The\n * results of this query are the parent docs that those child docs matched.\n *\n- * @param type The child type.\n- * @param query The query.\n+ * @param type The child type.\n+ * @param query The query.\n+ * @param scoreMode How the scores from the children hits should be aggregated into the parent hit.\n */\n- public static HasChildQueryBuilder hasChildQuery(String type, QueryBuilder query) {\n- return new HasChildQueryBuilder(type, query);\n+ public static HasChildQueryBuilder hasChildQuery(String type, QueryBuilder query, ScoreMode scoreMode) {\n+ return new HasChildQueryBuilder(type, query, scoreMode);\n }\n \n /**\n- * Constructs a new NON scoring parent query, with the parent type and the query to run on the parent documents. The\n+ * Constructs a new parent query, with the parent type and the query to run on the parent documents. The\n * results of this query are the children docs that those parent docs matched.\n *\n- * @param type The parent type.\n- * @param query The query.\n+ * @param type The parent type.\n+ * @param query The query.\n+ * @param score Whether the score from the parent hit should propogate to the child hit\n */\n- public static HasParentQueryBuilder hasParentQuery(String type, QueryBuilder query) {\n- return new HasParentQueryBuilder(type, query);\n+ public static HasParentQueryBuilder hasParentQuery(String type, QueryBuilder query, boolean score) {\n+ return new HasParentQueryBuilder(type, query, score);\n }\n \n /**\n@@ -512,8 +516,8 @@ public static ParentIdQueryBuilder parentId(String type, String id) {\n return new ParentIdQueryBuilder(type, id);\n }\n \n- public static NestedQueryBuilder nestedQuery(String path, QueryBuilder query) {\n- return new NestedQueryBuilder(path, query);\n+ public static NestedQueryBuilder nestedQuery(String path, QueryBuilder query, ScoreMode scoreMode) {\n+ return new NestedQueryBuilder(path, query, scoreMode);\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/index/query/QueryBuilders.java", "status": "modified" }, { "diff": "@@ -19,9 +19,9 @@\n \n package org.elasticsearch.aliases;\n \n+import org.apache.lucene.search.join.ScoreMode;\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.admin.indices.alias.Alias;\n-import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;\n import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions;\n import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequestBuilder;\n import org.elasticsearch.action.admin.indices.alias.exists.AliasesExistResponse;\n@@ -1056,8 +1056,8 @@ public void testAliasesFilterWithHasChildQuery() throws Exception {\n client().prepareIndex(\"my-index\", \"child\", \"2\").setSource(\"{}\").setParent(\"1\").get();\n refresh();\n \n- assertAcked(admin().indices().prepareAliases().addAlias(\"my-index\", \"filter1\", hasChildQuery(\"child\", matchAllQuery())));\n- assertAcked(admin().indices().prepareAliases().addAlias(\"my-index\", \"filter2\", hasParentQuery(\"parent\", matchAllQuery())));\n+ assertAcked(admin().indices().prepareAliases().addAlias(\"my-index\", \"filter1\", hasChildQuery(\"child\", matchAllQuery(), ScoreMode.None)));\n+ assertAcked(admin().indices().prepareAliases().addAlias(\"my-index\", \"filter2\", hasParentQuery(\"parent\", matchAllQuery(), false)));\n \n SearchResponse response = client().prepareSearch(\"filter1\").get();\n assertHitCount(response, 1);", "filename": 
"core/src/test/java/org/elasticsearch/aliases/IndexAliasesIT.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.cluster;\n \n+import org.apache.lucene.search.join.ScoreMode;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.discovery.MasterNotDiscoveredException;\n@@ -130,6 +131,6 @@ public void testAliasFilterValidation() throws Exception {\n internalCluster().startNode(settingsBuilder().put(Node.NODE_DATA_SETTING.getKey(), true).put(Node.NODE_MASTER_SETTING.getKey(), false));\n \n assertAcked(prepareCreate(\"test\").addMapping(\"type1\", \"{\\\"type1\\\" : {\\\"properties\\\" : {\\\"table_a\\\" : { \\\"type\\\" : \\\"nested\\\", \\\"properties\\\" : {\\\"field_a\\\" : { \\\"type\\\" : \\\"keyword\\\" },\\\"field_b\\\" :{ \\\"type\\\" : \\\"keyword\\\" }}}}}}\"));\n- client().admin().indices().prepareAliases().addAlias(\"test\", \"a_test\", QueryBuilders.nestedQuery(\"table_a\", QueryBuilders.termQuery(\"table_a.field_b\", \"y\"))).get();\n+ client().admin().indices().prepareAliases().addAlias(\"test\", \"a_test\", QueryBuilders.nestedQuery(\"table_a\", QueryBuilders.termQuery(\"table_a.field_b\", \"y\"), ScoreMode.Avg)).get();\n }\n }", "filename": "core/src/test/java/org/elasticsearch/cluster/SpecificMasterNodesIT.java", "status": "modified" }, { "diff": "@@ -120,13 +120,18 @@ public IndexFieldDataService fieldData() {\n protected HasChildQueryBuilder doCreateTestQueryBuilder() {\n int min = randomIntBetween(0, Integer.MAX_VALUE / 2);\n int max = randomIntBetween(min, Integer.MAX_VALUE);\n- return new HasChildQueryBuilder(CHILD_TYPE,\n- RandomQueryBuilder.createQuery(random()), max, min,\n- RandomPicks.randomFrom(random(), ScoreMode.values()),\n- randomBoolean() ? 
null : new InnerHitBuilder()\n- .setName(randomAsciiOfLengthBetween(1, 10))\n- .setSize(randomIntBetween(0, 100))\n- .addSort(new FieldSortBuilder(STRING_FIELD_NAME_2).order(SortOrder.ASC))).ignoreUnmapped(randomBoolean());\n+ HasChildQueryBuilder hqb = new HasChildQueryBuilder(CHILD_TYPE,\n+ RandomQueryBuilder.createQuery(random()),\n+ RandomPicks.randomFrom(random(), ScoreMode.values()));\n+ hqb.minMaxChildren(min, max);\n+ if (randomBoolean()) {\n+ hqb.innerHit(new InnerHitBuilder()\n+ .setName(randomAsciiOfLengthBetween(1, 10))\n+ .setSize(randomIntBetween(0, 100))\n+ .addSort(new FieldSortBuilder(STRING_FIELD_NAME_2).order(SortOrder.ASC)));\n+ }\n+ hqb.ignoreUnmapped(randomBoolean());\n+ return hqb;\n }\n \n @Override\n@@ -160,44 +165,26 @@ protected void doAssertLuceneQuery(HasChildQueryBuilder queryBuilder, Query quer\n \n public void testIllegalValues() {\n QueryBuilder<?> query = RandomQueryBuilder.createQuery(random());\n- try {\n- new HasChildQueryBuilder(null, query);\n- fail(\"must not be null\");\n- } catch (IllegalArgumentException ex) {\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n+ () -> QueryBuilders.hasChildQuery(null, query, ScoreMode.None));\n+ assertEquals(\"[has_child] requires 'type' field\", e.getMessage());\n \n- }\n+ e = expectThrows(IllegalArgumentException.class, () -> QueryBuilders.hasChildQuery(\"foo\", null, ScoreMode.None));\n+ assertEquals(\"[has_child] requires 'query' field\", e.getMessage());\n \n- try {\n- new HasChildQueryBuilder(\"foo\", null);\n- fail(\"must not be null\");\n- } catch (IllegalArgumentException ex) {\n+ e = expectThrows(IllegalArgumentException.class, () -> QueryBuilders.hasChildQuery(\"foo\", query, null));\n+ assertEquals(\"[has_child] requires 'score_mode' field\", e.getMessage());\n \n- }\n- HasChildQueryBuilder foo = new HasChildQueryBuilder(\"foo\", query);// all good\n- try {\n- foo.scoreMode(null);\n- fail(\"must not be null\");\n- } catch (IllegalArgumentException ex) {\n+ int positiveValue = randomIntBetween(0, Integer.MAX_VALUE);\n+ HasChildQueryBuilder foo = QueryBuilders.hasChildQuery(\"foo\", query, ScoreMode.None); // all good\n+ e = expectThrows(IllegalArgumentException.class, () -> foo.minMaxChildren(randomIntBetween(Integer.MIN_VALUE, -1), positiveValue));\n+ assertEquals(\"[has_child] requires non-negative 'min_children' field\", e.getMessage());\n \n- }\n- final int positiveValue = randomIntBetween(0, Integer.MAX_VALUE);\n- try {\n- foo.minChildren(randomIntBetween(Integer.MIN_VALUE, -1));\n- fail(\"must not be negative\");\n- } catch (IllegalArgumentException ex) {\n-\n- }\n- foo.minChildren(positiveValue);\n- assertEquals(positiveValue, foo.minChildren());\n- try {\n- foo.maxChildren(randomIntBetween(Integer.MIN_VALUE, -1));\n- fail(\"must not be negative\");\n- } catch (IllegalArgumentException ex) {\n-\n- }\n+ e = expectThrows(IllegalArgumentException.class, () -> foo.minMaxChildren(positiveValue, randomIntBetween(Integer.MIN_VALUE, -1)));\n+ assertEquals(\"[has_child] requires non-negative 'max_children' field\", e.getMessage());\n \n- foo.maxChildren(positiveValue);\n- assertEquals(positiveValue, foo.maxChildren());\n+ e = expectThrows(IllegalArgumentException.class, () -> foo.minMaxChildren(positiveValue, positiveValue - 10));\n+ assertEquals(\"[has_child] 'max_children' is less than 'min_children'\", e.getMessage());\n }\n \n public void testFromJson() throws IOException {\n@@ -269,7 +256,7 @@ public void testToQueryInnerQueryType() throws IOException {\n String[] 
searchTypes = new String[]{PARENT_TYPE};\n QueryShardContext shardContext = createShardContext();\n shardContext.setTypes(searchTypes);\n- HasChildQueryBuilder hasChildQueryBuilder = new HasChildQueryBuilder(CHILD_TYPE, new IdsQueryBuilder().addIds(\"id\"));\n+ HasChildQueryBuilder hasChildQueryBuilder = QueryBuilders.hasChildQuery(CHILD_TYPE, new IdsQueryBuilder().addIds(\"id\"), ScoreMode.None);\n Query query = hasChildQueryBuilder.toQuery(shardContext);\n //verify that the context types are still the same as the ones we previously set\n assertThat(shardContext.getTypes(), equalTo(searchTypes));\n@@ -335,7 +322,7 @@ public void testUnknownObjectException() throws IOException {\n \n public void testNonDefaultSimilarity() throws Exception {\n QueryShardContext shardContext = createShardContext();\n- HasChildQueryBuilder hasChildQueryBuilder = new HasChildQueryBuilder(CHILD_TYPE, new TermQueryBuilder(\"custom_string\", \"value\"));\n+ HasChildQueryBuilder hasChildQueryBuilder = QueryBuilders.hasChildQuery(CHILD_TYPE, new TermQueryBuilder(\"custom_string\", \"value\"), ScoreMode.None);\n HasChildQueryBuilder.LateParsingQuery query = (HasChildQueryBuilder.LateParsingQuery) hasChildQueryBuilder.toQuery(shardContext);\n Similarity expected = SimilarityService.BUILT_IN.get(similarity).apply(similarity, Settings.EMPTY).get();\n assertThat(((PerFieldSimilarityWrapper) query.getSimilarity()).get(\"custom_string\"), instanceOf(expected.getClass()));\n@@ -391,13 +378,13 @@ public void testThatUnrecognizedFromStringThrowsException() {\n }\n \n public void testIgnoreUnmapped() throws IOException {\n- final HasChildQueryBuilder queryBuilder = new HasChildQueryBuilder(\"unmapped\", new MatchAllQueryBuilder());\n+ final HasChildQueryBuilder queryBuilder = new HasChildQueryBuilder(\"unmapped\", new MatchAllQueryBuilder(), ScoreMode.None);\n queryBuilder.ignoreUnmapped(true);\n Query query = queryBuilder.toQuery(queryShardContext());\n assertThat(query, notNullValue());\n assertThat(query, instanceOf(MatchNoDocsQuery.class));\n \n- final HasChildQueryBuilder failingQueryBuilder = new HasChildQueryBuilder(\"unmapped\", new MatchAllQueryBuilder());\n+ final HasChildQueryBuilder failingQueryBuilder = new HasChildQueryBuilder(\"unmapped\", new MatchAllQueryBuilder(), ScoreMode.None);\n failingQueryBuilder.ignoreUnmapped(false);\n QueryShardException e = expectThrows(QueryShardException.class, () -> failingQueryBuilder.toQuery(queryShardContext()));\n assertThat(e.getMessage(), containsString(\"[\" + HasChildQueryBuilder.NAME + \"] no mapping found for type [unmapped]\"));", "filename": "core/src/test/java/org/elasticsearch/index/query/HasChildQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -107,12 +107,16 @@ public IndexFieldDataService fieldData() {\n */\n @Override\n protected HasParentQueryBuilder doCreateTestQueryBuilder() {\n- return new HasParentQueryBuilder(PARENT_TYPE,\n- RandomQueryBuilder.createQuery(random()),randomBoolean(),\n- randomBoolean() ? 
null : new InnerHitBuilder()\n- .setName(randomAsciiOfLengthBetween(1, 10))\n- .setSize(randomIntBetween(0, 100))\n- .addSort(new FieldSortBuilder(STRING_FIELD_NAME_2).order(SortOrder.ASC))).ignoreUnmapped(randomBoolean());\n+ HasParentQueryBuilder hqb = new HasParentQueryBuilder(PARENT_TYPE,\n+ RandomQueryBuilder.createQuery(random()),randomBoolean());\n+ if (randomBoolean()) {\n+ hqb.innerHit(new InnerHitBuilder()\n+ .setName(randomAsciiOfLengthBetween(1, 10))\n+ .setSize(randomIntBetween(0, 100))\n+ .addSort(new FieldSortBuilder(STRING_FIELD_NAME_2).order(SortOrder.ASC)));\n+ }\n+ hqb.ignoreUnmapped(randomBoolean());\n+ return hqb;\n }\n \n @Override\n@@ -144,25 +148,18 @@ protected void doAssertLuceneQuery(HasParentQueryBuilder queryBuilder, Query que\n \n public void testIllegalValues() throws IOException {\n QueryBuilder query = RandomQueryBuilder.createQuery(random());\n- try {\n- new HasParentQueryBuilder(null, query);\n- fail(\"must not be null\");\n- } catch (IllegalArgumentException ex) {\n- }\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n+ () -> QueryBuilders.hasParentQuery(null, query, false));\n+ assertThat(e.getMessage(), equalTo(\"[has_parent] requires 'type' field\"));\n \n- try {\n- new HasParentQueryBuilder(\"foo\", null);\n- fail(\"must not be null\");\n- } catch (IllegalArgumentException ex) {\n- }\n+ e = expectThrows(IllegalArgumentException.class,\n+ () -> QueryBuilders.hasParentQuery(\"foo\", null, false));\n+ assertThat(e.getMessage(), equalTo(\"[has_parent] requires 'query' field\"));\n \n QueryShardContext context = createShardContext();\n- HasParentQueryBuilder queryBuilder = new HasParentQueryBuilder(\"just_a_type\", new MatchAllQueryBuilder());\n- try {\n- queryBuilder.doToQuery(context);\n- } catch (QueryShardException e) {\n- assertThat(e.getMessage(), equalTo(\"[has_parent] no child types found for type [just_a_type]\"));\n- }\n+ HasParentQueryBuilder qb = QueryBuilders.hasParentQuery(\"just_a_type\", new MatchAllQueryBuilder(), false);\n+ QueryShardException qse = expectThrows(QueryShardException.class, () -> qb.doToQuery(context));\n+ assertThat(qse.getMessage(), equalTo(\"[has_parent] no child types found for type [just_a_type]\"));\n }\n \n public void testDeprecatedXContent() throws IOException {\n@@ -210,7 +207,8 @@ public void testToQueryInnerQueryType() throws IOException {\n String[] searchTypes = new String[]{CHILD_TYPE};\n QueryShardContext shardContext = createShardContext();\n shardContext.setTypes(searchTypes);\n- HasParentQueryBuilder hasParentQueryBuilder = new HasParentQueryBuilder(PARENT_TYPE, new IdsQueryBuilder().addIds(\"id\"));\n+ HasParentQueryBuilder hasParentQueryBuilder = new HasParentQueryBuilder(PARENT_TYPE, new IdsQueryBuilder().addIds(\"id\"),\n+ false);\n Query query = hasParentQueryBuilder.toQuery(shardContext);\n //verify that the context types are still the same as the ones we previously set\n assertThat(shardContext.getTypes(), equalTo(searchTypes));\n@@ -271,13 +269,13 @@ public void testFromJson() throws IOException {\n }\n \n public void testIgnoreUnmapped() throws IOException {\n- final HasParentQueryBuilder queryBuilder = new HasParentQueryBuilder(\"unmapped\", new MatchAllQueryBuilder());\n+ final HasParentQueryBuilder queryBuilder = new HasParentQueryBuilder(\"unmapped\", new MatchAllQueryBuilder(), false);\n queryBuilder.ignoreUnmapped(true);\n Query query = queryBuilder.toQuery(queryShardContext());\n assertThat(query, notNullValue());\n assertThat(query, 
instanceOf(MatchNoDocsQuery.class));\n \n- final HasParentQueryBuilder failingQueryBuilder = new HasParentQueryBuilder(\"unmapped\", new MatchAllQueryBuilder());\n+ final HasParentQueryBuilder failingQueryBuilder = new HasParentQueryBuilder(\"unmapped\", new MatchAllQueryBuilder(), false);\n failingQueryBuilder.ignoreUnmapped(false);\n QueryShardException e = expectThrows(QueryShardException.class, () -> failingQueryBuilder.toQuery(queryShardContext()));\n assertThat(e.getMessage(),", "filename": "core/src/test/java/org/elasticsearch/index/query/HasParentQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -86,12 +86,16 @@ public IndexFieldDataService fieldData() {\n */\n @Override\n protected NestedQueryBuilder doCreateTestQueryBuilder() {\n- return new NestedQueryBuilder(\"nested1\", RandomQueryBuilder.createQuery(random()),\n- RandomPicks.randomFrom(random(), ScoreMode.values()),\n- SearchContext.current() == null ? null : new InnerHitBuilder()\n- .setName(randomAsciiOfLengthBetween(1, 10))\n- .setSize(randomIntBetween(0, 100))\n- .addSort(new FieldSortBuilder(STRING_FIELD_NAME).order(SortOrder.ASC))).ignoreUnmapped(randomBoolean());\n+ NestedQueryBuilder nqb = new NestedQueryBuilder(\"nested1\", RandomQueryBuilder.createQuery(random()),\n+ RandomPicks.randomFrom(random(), ScoreMode.values()));\n+ if (SearchContext.current() != null) {\n+ nqb.innerHit(new InnerHitBuilder()\n+ .setName(randomAsciiOfLengthBetween(1, 10))\n+ .setSize(randomIntBetween(0, 100))\n+ .addSort(new FieldSortBuilder(STRING_FIELD_NAME).order(SortOrder.ASC)));\n+ }\n+ nqb.ignoreUnmapped(randomBoolean());\n+ return nqb;\n }\n \n @Override\n@@ -121,27 +125,16 @@ protected void doAssertLuceneQuery(NestedQueryBuilder queryBuilder, Query query,\n }\n \n public void testValidate() {\n- try {\n- new NestedQueryBuilder(null, new MatchAllQueryBuilder());\n- fail(\"cannot be null\");\n- } catch (IllegalArgumentException e) {\n- // expected\n- }\n+ QueryBuilder<?> innerQuery = RandomQueryBuilder.createQuery(random());\n+ IllegalArgumentException e =\n+ expectThrows(IllegalArgumentException.class, () -> QueryBuilders.nestedQuery(null, innerQuery, ScoreMode.Avg));\n+ assertThat(e.getMessage(), equalTo(\"[nested] requires 'path' field\"));\n \n- try {\n- new NestedQueryBuilder(\"path\", null);\n- fail(\"cannot be null\");\n- } catch (IllegalArgumentException e) {\n- // expected\n- }\n+ e = expectThrows(IllegalArgumentException.class, () -> QueryBuilders.nestedQuery(\"foo\", null, ScoreMode.Avg));\n+ assertThat(e.getMessage(), equalTo(\"[nested] requires 'query' field\"));\n \n- NestedQueryBuilder nestedQueryBuilder = new NestedQueryBuilder(\"path\", new MatchAllQueryBuilder());\n- try {\n- nestedQueryBuilder.scoreMode(null);\n- fail(\"cannot be null\");\n- } catch (IllegalArgumentException e) {\n- // expected\n- }\n+ e = expectThrows(IllegalArgumentException.class, () -> QueryBuilders.nestedQuery(\"foo\", innerQuery, null));\n+ assertThat(e.getMessage(), equalTo(\"[nested] requires 'score_mode' field\"));\n }\n \n public void testFromJson() throws IOException {\n@@ -193,13 +186,13 @@ public void testFromJson() throws IOException {\n }\n \n public void testIgnoreUnmapped() throws IOException {\n- final NestedQueryBuilder queryBuilder = new NestedQueryBuilder(\"unmapped\", new MatchAllQueryBuilder());\n+ final NestedQueryBuilder queryBuilder = new NestedQueryBuilder(\"unmapped\", new MatchAllQueryBuilder(), ScoreMode.None);\n queryBuilder.ignoreUnmapped(true);\n Query query = queryBuilder.toQuery(queryShardContext());\n 
assertThat(query, notNullValue());\n assertThat(query, instanceOf(MatchNoDocsQuery.class));\n \n- final NestedQueryBuilder failingQueryBuilder = new NestedQueryBuilder(\"unmapped\", new MatchAllQueryBuilder());\n+ final NestedQueryBuilder failingQueryBuilder = new NestedQueryBuilder(\"unmapped\", new MatchAllQueryBuilder(), ScoreMode.None);\n failingQueryBuilder.ignoreUnmapped(false);\n IllegalStateException e = expectThrows(IllegalStateException.class, () -> failingQueryBuilder.toQuery(queryShardContext()));\n assertThat(e.getMessage(), containsString(\"[\" + NestedQueryBuilder.NAME + \"] failed to find nested object under path [unmapped]\"));", "filename": "core/src/test/java/org/elasticsearch/index/query/NestedQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -205,15 +205,15 @@ public void testGeoHashCell() {\n public void testHasChild() {\n hasChildQuery(\n \"blog_tag\",\n- termQuery(\"tag\",\"something\")\n- );\n+ termQuery(\"tag\",\"something\"),\n+ ScoreMode.None);\n }\n \n public void testHasParent() {\n hasParentQuery(\n \"blog\",\n- termQuery(\"tag\",\"something\")\n- );\n+ termQuery(\"tag\",\"something\"),\n+ false);\n }\n \n public void testIds() {\n@@ -262,9 +262,8 @@ public void testNested() {\n \"obj1\",\n boolQuery()\n .must(matchQuery(\"obj1.name\", \"blue\"))\n- .must(rangeQuery(\"obj1.count\").gt(5))\n- )\n- .scoreMode(ScoreMode.Avg);\n+ .must(rangeQuery(\"obj1.count\").gt(5)),\n+ ScoreMode.Avg);\n }\n \n public void testPrefix() {", "filename": "core/src/test/java/org/elasticsearch/index/query/QueryDSLDocumentationTests.java", "status": "modified" }, { "diff": "@@ -396,7 +396,7 @@ void initNestedIndexAndPercolation() throws IOException {\n ensureGreen(\"nestedindex\");\n \n client().prepareIndex(\"nestedindex\", PercolatorFieldMapper.TYPE_NAME, \"Q\").setSource(jsonBuilder().startObject()\n- .field(\"query\", QueryBuilders.nestedQuery(\"employee\", QueryBuilders.matchQuery(\"employee.name\", \"virginia potts\").operator(Operator.AND)).scoreMode(ScoreMode.Avg)).endObject()).get();\n+ .field(\"query\", QueryBuilders.nestedQuery(\"employee\", QueryBuilders.matchQuery(\"employee.name\", \"virginia potts\").operator(Operator.AND), ScoreMode.Avg)).endObject()).get();\n \n refresh();\n ", "filename": "core/src/test/java/org/elasticsearch/percolator/MultiPercolatorIT.java", "status": "modified" }, { "diff": "@@ -1564,7 +1564,7 @@ void initNestedIndexAndPercolation() throws IOException {\n ensureGreen(\"nestedindex\");\n \n client().prepareIndex(\"nestedindex\", PercolatorFieldMapper.TYPE_NAME, \"Q\").setSource(jsonBuilder().startObject()\n- .field(\"query\", QueryBuilders.nestedQuery(\"employee\", QueryBuilders.matchQuery(\"employee.name\", \"virginia potts\").operator(Operator.AND)).scoreMode(ScoreMode.Avg)).endObject()).get();\n+ .field(\"query\", QueryBuilders.nestedQuery(\"employee\", QueryBuilders.matchQuery(\"employee.name\", \"virginia potts\").operator(Operator.AND), ScoreMode.Avg)).endObject()).get();\n \n refresh();\n \n@@ -1788,7 +1788,7 @@ public void testFailNicelyWithInnerHits() throws Exception {\n assertAcked(prepareCreate(\"index\").addMapping(\"mapping\", mapping));\n try {\n client().prepareIndex(\"index\", PercolatorFieldMapper.TYPE_NAME, \"1\")\n- .setSource(jsonBuilder().startObject().field(\"query\", nestedQuery(\"nested\", matchQuery(\"nested.name\", \"value\")).innerHit(new InnerHitBuilder())).endObject())\n+ .setSource(jsonBuilder().startObject().field(\"query\", nestedQuery(\"nested\", matchQuery(\"nested.name\", \"value\"), 
ScoreMode.Avg).innerHit(new InnerHitBuilder())).endObject())\n .execute().actionGet();\n fail(\"Expected a parse error, because inner_hits isn't supported in the percolate api\");\n } catch (Exception e) {\n@@ -1803,7 +1803,7 @@ public void testParentChild() throws Exception {\n \n assertAcked(prepareCreate(\"index\").addMapping(\"child\", \"_parent\", \"type=parent\").addMapping(\"parent\"));\n client().prepareIndex(\"index\", PercolatorFieldMapper.TYPE_NAME, \"1\")\n- .setSource(jsonBuilder().startObject().field(\"query\", hasChildQuery(\"child\", matchAllQuery())).endObject())\n+ .setSource(jsonBuilder().startObject().field(\"query\", hasChildQuery(\"child\", matchAllQuery(), ScoreMode.None)).endObject())\n .execute().actionGet();\n }\n ", "filename": "core/src/test/java/org/elasticsearch/percolator/PercolatorIT.java", "status": "modified" }, { "diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.search.aggregations.bucket;\n \n+import org.apache.lucene.search.join.ScoreMode;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.update.UpdateResponse;\n@@ -319,7 +320,7 @@ public void testPostCollection() throws Exception {\n indexRandom(true, requests);\n \n SearchResponse response = client().prepareSearch(indexName).setTypes(masterType)\n- .setQuery(hasChildQuery(childType, termQuery(\"color\", \"orange\")))\n+ .setQuery(hasChildQuery(childType, termQuery(\"color\", \"orange\"), ScoreMode.None))\n .addAggregation(children(\"my-refinements\", childType)\n .subAggregation(terms(\"my-colors\").field(\"color\"))\n .subAggregation(terms(\"my-sizes\").field(\"size\"))", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/ChildrenIT.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.search.aggregations.metrics;\n \n import org.apache.lucene.search.Explanation;\n+import org.apache.lucene.search.join.ScoreMode;\n import org.apache.lucene.util.ArrayUtil;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n@@ -816,7 +817,7 @@ public void testNestedFetchFeatures() {\n \n SearchResponse searchResponse = client()\n .prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"comment\").queryName(\"test\")))\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"comment\").queryName(\"test\"), ScoreMode.Avg))\n .addAggregation(\n nested(\"to-comments\", \"comments\").subAggregation(\n topHits(\"top-comments\").size(1).highlighter(new HighlightBuilder().field(hlField)).explain(true)", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/metrics/TopHitsIT.java", "status": "modified" }, { "diff": "@@ -62,6 +62,7 @@\n import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery;\n import static org.elasticsearch.index.query.QueryBuilders.hasParentQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.hasChildQuery;\n import static org.elasticsearch.index.query.QueryBuilders.idsQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n@@ -87,9 +88,6 @@\n import static org.hamcrest.Matchers.is;\n import static org.hamcrest.Matchers.notNullValue;\n \n-/**\n- *\n- */\n @ClusterScope(scope = 
Scope.SUITE)\n public class ChildQuerySearchIT extends ESIntegTestCase {\n \n@@ -134,32 +132,33 @@ public void testMultiLevelChild() throws Exception {\n .filter(hasChildQuery(\n \"child\",\n boolQuery().must(termQuery(\"c_field\", \"c_value1\"))\n- .filter(hasChildQuery(\"grandchild\", termQuery(\"gc_field\", \"gc_value1\")))))).get();\n+ .filter(hasChildQuery(\"grandchild\", termQuery(\"gc_field\", \"gc_value1\"), ScoreMode.None))\n+ , ScoreMode.None))).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"p1\"));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", termQuery(\"p_field\", \"p_value1\")))).execute()\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", termQuery(\"p_field\", \"p_value1\"), false))).execute()\n .actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"c1\"));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"child\", termQuery(\"c_field\", \"c_value1\")))).execute()\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"child\", termQuery(\"c_field\", \"c_value1\"), false))).execute()\n .actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"gc1\"));\n \n- searchResponse = client().prepareSearch(\"test\").setQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", \"p_value1\"))).execute()\n+ searchResponse = client().prepareSearch(\"test\").setQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", \"p_value1\"), false)).execute()\n .actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"c1\"));\n \n- searchResponse = client().prepareSearch(\"test\").setQuery(hasParentQuery(\"child\", termQuery(\"c_field\", \"c_value1\"))).execute()\n+ searchResponse = client().prepareSearch(\"test\").setQuery(hasParentQuery(\"child\", termQuery(\"c_field\", \"c_value1\"), false)).execute()\n .actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n@@ -177,8 +176,9 @@ public void test2744() throws IOException {\n client().prepareIndex(\"test\", \"foo\", \"1\").setSource(\"foo\", 1).get();\n client().prepareIndex(\"test\", \"test\").setSource(\"foo\", 1).setParent(\"1\").get();\n refresh();\n- SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(hasChildQuery(\"test\", matchQuery(\"foo\", 1))).execute()\n- .actionGet();\n+ SearchResponse searchResponse = client().prepareSearch(\"test\").\n+ setQuery(hasChildQuery(\"test\", matchQuery(\"foo\", 1), ScoreMode.None))\n+ .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"1\"));\n@@ -287,11 +287,11 @@ public void testCachingBugWithFqueryFilter() throws Exception {\n for (int i = 1; i <= 10; i++) {\n logger.info(\"Round {}\", i);\n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(constantScoreQuery(hasChildQuery(\"child\", 
matchAllQuery()).scoreMode(ScoreMode.Max)))\n+ .setQuery(constantScoreQuery(hasChildQuery(\"child\", matchAllQuery(), ScoreMode.Max)))\n .get();\n assertNoFailures(searchResponse);\n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(constantScoreQuery(hasParentQuery(\"parent\", matchAllQuery()).score(true)))\n+ .setQuery(constantScoreQuery(hasParentQuery(\"parent\", matchAllQuery(), true)))\n .get();\n assertNoFailures(searchResponse);\n }\n@@ -305,7 +305,7 @@ public void testHasParentFilter() throws Exception {\n Map<String, Set<String>> parentToChildren = new HashMap<>();\n // Childless parent\n client().prepareIndex(\"test\", \"parent\", \"p0\").setSource(\"p_field\", \"p0\").get();\n- parentToChildren.put(\"p0\", new HashSet<String>());\n+ parentToChildren.put(\"p0\", new HashSet<>());\n \n String previousParentId = null;\n int numChildDocs = 32;\n@@ -332,7 +332,7 @@ public void testHasParentFilter() throws Exception {\n assertThat(parentToChildren.isEmpty(), equalTo(false));\n for (Map.Entry<String, Set<String>> parentToChildrenEntry : parentToChildren.entrySet()) {\n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(constantScoreQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", parentToChildrenEntry.getKey()))))\n+ .setQuery(constantScoreQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", parentToChildrenEntry.getKey()), false)))\n .setSize(numChildDocsPerParent).get();\n \n assertNoFailures(searchResponse);\n@@ -369,39 +369,45 @@ public void testSimpleChildQueryWithFlush() throws Exception {\n \n // HAS CHILD QUERY\n \n- SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"yellow\"))).execute()\n- .actionGet();\n+ SearchResponse searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"yellow\"), ScoreMode.None))\n+ .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"p1\"));\n \n- searchResponse = client().prepareSearch(\"test\").setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"blue\"))).execute()\n- .actionGet();\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"blue\"), ScoreMode.None))\n+ .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"p2\"));\n \n- searchResponse = client().prepareSearch(\"test\").setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"red\"))).get();\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"red\"), ScoreMode.None))\n+ .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(2L));\n assertThat(searchResponse.getHits().getAt(0).id(), anyOf(equalTo(\"p2\"), equalTo(\"p1\")));\n assertThat(searchResponse.getHits().getAt(1).id(), anyOf(equalTo(\"p2\"), equalTo(\"p1\")));\n \n // HAS CHILD FILTER\n-\n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"yellow\")))).get();\n+ .setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"yellow\"), ScoreMode.None)))\n+ .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), 
equalTo(1L));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"p1\"));\n \n- searchResponse = client().prepareSearch(\"test\").setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"blue\"))))\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"blue\"), ScoreMode.None)))\n .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"p2\"));\n \n- searchResponse = client().prepareSearch(\"test\").setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"red\"))))\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"red\"), ScoreMode.None)))\n .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(2L));\n@@ -427,7 +433,7 @@ public void testScopedFacet() throws Exception {\n \n SearchResponse searchResponse = client()\n .prepareSearch(\"test\")\n- .setQuery(hasChildQuery(\"child\", boolQuery().should(termQuery(\"c_field\", \"red\")).should(termQuery(\"c_field\", \"yellow\"))))\n+ .setQuery(hasChildQuery(\"child\", boolQuery().should(termQuery(\"c_field\", \"red\")).should(termQuery(\"c_field\", \"yellow\")), ScoreMode.None))\n .addAggregation(AggregationBuilders.global(\"global\").subAggregation(\n AggregationBuilders.filter(\"filter\", boolQuery().should(termQuery(\"c_field\", \"red\")).should(termQuery(\"c_field\", \"yellow\"))).subAggregation(\n AggregationBuilders.terms(\"facet1\").field(\"c_field\")))).get();\n@@ -462,7 +468,7 @@ public void testDeletedParent() throws Exception {\n refresh();\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"yellow\")))).get();\n+ .setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"yellow\"), ScoreMode.None))).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"p1\"));\n@@ -474,7 +480,7 @@ public void testDeletedParent() throws Exception {\n client().admin().indices().prepareRefresh().get();\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"yellow\")))).get();\n+ .setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"yellow\"), ScoreMode.None))).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"p1\"));\n@@ -498,11 +504,12 @@ public void testDfsSearchType() throws Exception {\n refresh();\n \n SearchResponse searchResponse = client().prepareSearch(\"test\").setSearchType(SearchType.DFS_QUERY_THEN_FETCH)\n- .setQuery(boolQuery().mustNot(hasChildQuery(\"child\", boolQuery().should(queryStringQuery(\"c_field:*\"))))).get();\n+ .setQuery(boolQuery().mustNot(hasChildQuery(\"child\", boolQuery().should(queryStringQuery(\"c_field:*\")), ScoreMode.None)))\n+ .get();\n assertNoFailures(searchResponse);\n \n searchResponse = client().prepareSearch(\"test\").setSearchType(SearchType.DFS_QUERY_THEN_FETCH)\n- .setQuery(boolQuery().mustNot(hasParentQuery(\"parent\", 
boolQuery().should(queryStringQuery(\"p_field:*\"))))).execute()\n+ .setQuery(boolQuery().mustNot(hasParentQuery(\"parent\", boolQuery().should(queryStringQuery(\"p_field:*\")), false))).execute()\n .actionGet();\n assertNoFailures(searchResponse);\n }\n@@ -521,12 +528,12 @@ public void testHasChildAndHasParentFailWhenSomeSegmentsDontContainAnyParentOrCh\n client().admin().indices().prepareFlush(\"test\").get();\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(boolQuery().must(matchAllQuery()).filter(hasChildQuery(\"child\", matchAllQuery()))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasChildQuery(\"child\", matchAllQuery(), ScoreMode.None))).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", matchAllQuery()))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", matchAllQuery(), false))).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n }\n@@ -542,19 +549,21 @@ public void testCountApiUsage() throws Exception {\n client().prepareIndex(\"test\", \"child\", \"c1\").setSource(\"c_field\", \"1\").setParent(parentId).get();\n refresh();\n \n- SearchResponse countResponse = client().prepareSearch(\"test\").setSize(0).setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\")).scoreMode(ScoreMode.Max))\n+ SearchResponse countResponse = client().prepareSearch(\"test\").setSize(0)\n+ .setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\"), ScoreMode.Max))\n .get();\n assertHitCount(countResponse, 1L);\n \n- countResponse = client().prepareSearch(\"test\").setSize(0).setQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", \"1\")).score(true))\n+ countResponse = client().prepareSearch(\"test\").setSize(0).setQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", \"1\"), true))\n .get();\n assertHitCount(countResponse, 1L);\n \n- countResponse = client().prepareSearch(\"test\").setSize(0).setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\"))))\n+ countResponse = client().prepareSearch(\"test\").setSize(0)\n+ .setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\"), ScoreMode.None)))\n .get();\n assertHitCount(countResponse, 1L);\n \n- countResponse = client().prepareSearch(\"test\").setSize(0).setQuery(constantScoreQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", \"1\"))))\n+ countResponse = client().prepareSearch(\"test\").setSize(0).setQuery(constantScoreQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", \"1\"), false)))\n .get();\n assertHitCount(countResponse, 1L);\n }\n@@ -572,20 +581,20 @@ public void testExplainUsage() throws Exception {\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n .setExplain(true)\n- .setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\")).scoreMode(ScoreMode.Max))\n+ .setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\"), ScoreMode.Max))\n .get();\n assertHitCount(searchResponse, 1L);\n assertThat(searchResponse.getHits().getAt(0).explanation().getDescription(), containsString(\"join value p1\"));\n \n searchResponse = client().prepareSearch(\"test\")\n .setExplain(true)\n- .setQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", \"1\")).score(true))\n+ .setQuery(hasParentQuery(\"parent\", 
termQuery(\"p_field\", \"1\"), true))\n .get();\n assertHitCount(searchResponse, 1L);\n assertThat(searchResponse.getHits().getAt(0).explanation().getDescription(), containsString(\"join value p1\"));\n \n ExplainResponse explainResponse = client().prepareExplain(\"test\", \"parent\", parentId)\n- .setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\")).scoreMode(ScoreMode.Max))\n+ .setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\"), ScoreMode.Max))\n .get();\n assertThat(explainResponse.isExists(), equalTo(true));\n assertThat(explainResponse.getExplanation().getDetails()[0].getDescription(), containsString(\"join value p1\"));\n@@ -662,7 +671,7 @@ public void testScoreForParentChildQueriesWithFunctionScore() throws Exception {\n \"child\",\n QueryBuilders.functionScoreQuery(matchQuery(\"c_field2\", 0),\n fieldValueFactorFunction(\"c_field1\"))\n- .boostMode(CombineFunction.REPLACE)).scoreMode(ScoreMode.Total)).get();\n+ .boostMode(CombineFunction.REPLACE), ScoreMode.Total)).get();\n \n assertThat(response.getHits().totalHits(), equalTo(3L));\n assertThat(response.getHits().hits()[0].id(), equalTo(\"1\"));\n@@ -679,7 +688,7 @@ public void testScoreForParentChildQueriesWithFunctionScore() throws Exception {\n \"child\",\n QueryBuilders.functionScoreQuery(matchQuery(\"c_field2\", 0),\n fieldValueFactorFunction(\"c_field1\"))\n- .boostMode(CombineFunction.REPLACE)).scoreMode(ScoreMode.Max)).get();\n+ .boostMode(CombineFunction.REPLACE), ScoreMode.Max)).get();\n \n assertThat(response.getHits().totalHits(), equalTo(3L));\n assertThat(response.getHits().hits()[0].id(), equalTo(\"3\"));\n@@ -696,7 +705,7 @@ public void testScoreForParentChildQueriesWithFunctionScore() throws Exception {\n \"child\",\n QueryBuilders.functionScoreQuery(matchQuery(\"c_field2\", 0),\n fieldValueFactorFunction(\"c_field1\"))\n- .boostMode(CombineFunction.REPLACE)).scoreMode(ScoreMode.Avg)).get();\n+ .boostMode(CombineFunction.REPLACE), ScoreMode.Avg)).get();\n \n assertThat(response.getHits().totalHits(), equalTo(3L));\n assertThat(response.getHits().hits()[0].id(), equalTo(\"3\"));\n@@ -713,7 +722,7 @@ public void testScoreForParentChildQueriesWithFunctionScore() throws Exception {\n \"parent\",\n QueryBuilders.functionScoreQuery(matchQuery(\"p_field1\", \"p_value3\"),\n fieldValueFactorFunction(\"p_field2\"))\n- .boostMode(CombineFunction.REPLACE)).score(true))\n+ .boostMode(CombineFunction.REPLACE), true))\n .addSort(SortBuilders.fieldSort(\"c_field3\")).addSort(SortBuilders.scoreSort()).get();\n \n assertThat(response.getHits().totalHits(), equalTo(7L));\n@@ -741,27 +750,27 @@ public void testParentChildQueriesCanHandleNoRelevantTypesInIndex() throws Excep\n ensureGreen();\n \n SearchResponse response = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.hasChildQuery(\"child\", matchQuery(\"text\", \"value\"))).get();\n+ .setQuery(QueryBuilders.hasChildQuery(\"child\", matchQuery(\"text\", \"value\"), ScoreMode.None)).get();\n assertNoFailures(response);\n assertThat(response.getHits().totalHits(), equalTo(0L));\n \n client().prepareIndex(\"test\", \"child1\").setSource(jsonBuilder().startObject().field(\"text\", \"value\").endObject()).setRefresh(true)\n .get();\n \n- response = client().prepareSearch(\"test\").setQuery(QueryBuilders.hasChildQuery(\"child\", matchQuery(\"text\", \"value\"))).get();\n+ response = client().prepareSearch(\"test\").setQuery(QueryBuilders.hasChildQuery(\"child\", matchQuery(\"text\", \"value\"), ScoreMode.None)).get();\n assertNoFailures(response);\n 
assertThat(response.getHits().totalHits(), equalTo(0L));\n \n- response = client().prepareSearch(\"test\").setQuery(QueryBuilders.hasChildQuery(\"child\", matchQuery(\"text\", \"value\")).scoreMode(ScoreMode.Max))\n+ response = client().prepareSearch(\"test\").setQuery(QueryBuilders.hasChildQuery(\"child\", matchQuery(\"text\", \"value\"), ScoreMode.Max))\n .get();\n assertNoFailures(response);\n assertThat(response.getHits().totalHits(), equalTo(0L));\n \n- response = client().prepareSearch(\"test\").setQuery(QueryBuilders.hasParentQuery(\"parent\", matchQuery(\"text\", \"value\"))).get();\n+ response = client().prepareSearch(\"test\").setQuery(QueryBuilders.hasParentQuery(\"parent\", matchQuery(\"text\", \"value\"), false)).get();\n assertNoFailures(response);\n assertThat(response.getHits().totalHits(), equalTo(0L));\n \n- response = client().prepareSearch(\"test\").setQuery(QueryBuilders.hasParentQuery(\"parent\", matchQuery(\"text\", \"value\")).score(true))\n+ response = client().prepareSearch(\"test\").setQuery(QueryBuilders.hasParentQuery(\"parent\", matchQuery(\"text\", \"value\"), true))\n .get();\n assertNoFailures(response);\n assertThat(response.getHits().totalHits(), equalTo(0L));\n@@ -781,13 +790,14 @@ public void testHasChildAndHasParentFilter_withFilter() throws Exception {\n client().admin().indices().prepareFlush(\"test\").get();\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(boolQuery().must(matchAllQuery()).filter(hasChildQuery(\"child\", termQuery(\"c_field\", 1)))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasChildQuery(\"child\", termQuery(\"c_field\", 1), ScoreMode.None)))\n+ .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n assertThat(searchResponse.getHits().hits()[0].id(), equalTo(\"1\"));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", termQuery(\"p_field\", 1)))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", termQuery(\"p_field\", 1), false))).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n assertThat(searchResponse.getHits().hits()[0].id(), equalTo(\"2\"));\n@@ -806,19 +816,21 @@ public void testHasChildAndHasParentWrappedInAQueryFilter() throws Exception {\n refresh();\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(boolQuery().must(matchAllQuery()).filter(hasChildQuery(\"child\", matchQuery(\"c_field\", 1)))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasChildQuery(\"child\", matchQuery(\"c_field\", 1), ScoreMode.None)))\n+ .get();\n assertSearchHit(searchResponse, 1, hasId(\"1\"));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", matchQuery(\"p_field\", 1)))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", matchQuery(\"p_field\", 1), false))).get();\n assertSearchHit(searchResponse, 1, hasId(\"2\"));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(boolQuery().must(matchAllQuery()).filter(boolQuery().must(hasChildQuery(\"child\", matchQuery(\"c_field\", 1))))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(boolQuery().must(hasChildQuery(\"child\", matchQuery(\"c_field\", 1), ScoreMode.None))))\n+ .get();\n assertSearchHit(searchResponse, 1, 
hasId(\"1\"));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(boolQuery().must(matchAllQuery()).filter(boolQuery().must(hasParentQuery(\"parent\", matchQuery(\"p_field\", 1))))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(boolQuery().must(hasParentQuery(\"parent\", matchQuery(\"p_field\", 1), false)))).get();\n assertSearchHit(searchResponse, 1, hasId(\"2\"));\n }\n \n@@ -845,7 +857,8 @@ public void testSimpleQueryRewrite() throws Exception {\n SearchType[] searchTypes = new SearchType[]{SearchType.QUERY_THEN_FETCH, SearchType.DFS_QUERY_THEN_FETCH};\n for (SearchType searchType : searchTypes) {\n SearchResponse searchResponse = client().prepareSearch(\"test\").setSearchType(searchType)\n- .setQuery(hasChildQuery(\"child\", prefixQuery(\"c_field\", \"c\")).scoreMode(ScoreMode.Max)).addSort(\"p_field\", SortOrder.ASC)\n+ .setQuery(hasChildQuery(\"child\", prefixQuery(\"c_field\", \"c\"), ScoreMode.Max))\n+ .addSort(\"p_field\", SortOrder.ASC)\n .setSize(5).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(10L));\n@@ -856,7 +869,7 @@ public void testSimpleQueryRewrite() throws Exception {\n assertThat(searchResponse.getHits().hits()[4].id(), equalTo(\"p004\"));\n \n searchResponse = client().prepareSearch(\"test\").setSearchType(searchType)\n- .setQuery(hasParentQuery(\"parent\", prefixQuery(\"p_field\", \"p\")).score(true)).addSort(\"c_field\", SortOrder.ASC)\n+ .setQuery(hasParentQuery(\"parent\", prefixQuery(\"p_field\", \"p\"), true)).addSort(\"c_field\", SortOrder.ASC)\n .setSize(5).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(500L));\n@@ -886,7 +899,7 @@ public void testReIndexingParentAndChildDocuments() throws Exception {\n refresh();\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"yellow\")).scoreMode(ScoreMode.Total)).get();\n+ .setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"yellow\"), ScoreMode.Total)).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"p1\"));\n@@ -896,7 +909,7 @@ public void testReIndexingParentAndChildDocuments() throws Exception {\n .prepareSearch(\"test\")\n .setQuery(\n boolQuery().must(matchQuery(\"c_field\", \"x\")).must(\n- hasParentQuery(\"parent\", termQuery(\"p_field\", \"p_value2\")).score(true))).get();\n+ hasParentQuery(\"parent\", termQuery(\"p_field\", \"p_value2\"), true))).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(2L));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"c3\"));\n@@ -911,7 +924,7 @@ public void testReIndexingParentAndChildDocuments() throws Exception {\n client().admin().indices().prepareRefresh(\"test\").get();\n }\n \n- searchResponse = client().prepareSearch(\"test\").setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"yellow\")).scoreMode(ScoreMode.Total))\n+ searchResponse = client().prepareSearch(\"test\").setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"yellow\"), ScoreMode.Total))\n .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n@@ -922,7 +935,7 @@ public void testReIndexingParentAndChildDocuments() throws Exception {\n .prepareSearch(\"test\")\n .setQuery(\n boolQuery().must(matchQuery(\"c_field\", \"x\")).must(\n- 
hasParentQuery(\"parent\", termQuery(\"p_field\", \"p_value2\")).score(true))).get();\n+ hasParentQuery(\"parent\", termQuery(\"p_field\", \"p_value2\"), true))).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(2L));\n assertThat(searchResponse.getHits().getAt(0).id(), Matchers.anyOf(equalTo(\"c3\"), equalTo(\"c4\")));\n@@ -945,7 +958,7 @@ public void testHasChildQueryWithMinimumScore() throws Exception {\n client().prepareIndex(\"test\", \"child\", \"c5\").setSource(\"c_field\", \"x\").setParent(\"p2\").get();\n refresh();\n \n- SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(hasChildQuery(\"child\", matchAllQuery()).scoreMode(ScoreMode.Total))\n+ SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(hasChildQuery(\"child\", matchAllQuery(), ScoreMode.Total))\n .setMinScore(3) // Score needs to be 3 or above!\n .get();\n assertNoFailures(searchResponse);\n@@ -1035,7 +1048,7 @@ public void testHasChildNotBeingCached() throws IOException {\n client().admin().indices().prepareRefresh(\"test\").get();\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"blue\"))))\n+ .setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"blue\"), ScoreMode.None)))\n .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n@@ -1044,7 +1057,7 @@ public void testHasChildNotBeingCached() throws IOException {\n client().admin().indices().prepareRefresh(\"test\").get();\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"blue\"))))\n+ .setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"blue\"), ScoreMode.None)))\n .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(2L));\n@@ -1053,24 +1066,24 @@ public void testHasChildNotBeingCached() throws IOException {\n private QueryBuilder randomHasChild(String type, String field, String value) {\n if (randomBoolean()) {\n if (randomBoolean()) {\n- return constantScoreQuery(hasChildQuery(type, termQuery(field, value)));\n+ return constantScoreQuery(hasChildQuery(type, termQuery(field, value), ScoreMode.None));\n } else {\n- return boolQuery().must(matchAllQuery()).filter(hasChildQuery(type, termQuery(field, value)));\n+ return boolQuery().must(matchAllQuery()).filter(hasChildQuery(type, termQuery(field, value), ScoreMode.None));\n }\n } else {\n- return hasChildQuery(type, termQuery(field, value));\n+ return hasChildQuery(type, termQuery(field, value), ScoreMode.None);\n }\n }\n \n private QueryBuilder randomHasParent(String type, String field, String value) {\n if (randomBoolean()) {\n if (randomBoolean()) {\n- return constantScoreQuery(hasParentQuery(type, termQuery(field, value)));\n+ return constantScoreQuery(hasParentQuery(type, termQuery(field, value), false));\n } else {\n- return boolQuery().must(matchAllQuery()).filter(hasParentQuery(type, termQuery(field, value)));\n+ return boolQuery().must(matchAllQuery()).filter(hasParentQuery(type, termQuery(field, value), false));\n }\n } else {\n- return hasParentQuery(type, termQuery(field, value));\n+ return hasParentQuery(type, termQuery(field, value), false);\n }\n }\n \n@@ -1101,10 +1114,10 @@ public void testHasChildQueryOnlyReturnsSingleChildType() {\n \"child_type_one\",\n 
boolQuery().must(\n queryStringQuery(\"name:William*\").analyzeWildcard(true)\n- )\n- )\n- )\n- )\n+ ),\n+ ScoreMode.None)\n+ ),\n+ ScoreMode.None)\n )\n ).get();\n assertHitCount(searchResponse, 1L);\n@@ -1118,10 +1131,10 @@ public void testHasChildQueryOnlyReturnsSingleChildType() {\n \"child_type_two\",\n boolQuery().must(\n queryStringQuery(\"name:William*\").analyzeWildcard(true)\n- )\n- )\n- )\n- )\n+ ),\n+ ScoreMode.None)\n+ ),\n+ ScoreMode.None)\n )\n ).get();\n assertHitCount(searchResponse, 0L);\n@@ -1203,13 +1216,13 @@ public void testHasChildQueryWithNestedInnerObjects() throws Exception {\n \n ScoreMode scoreMode = randomFrom(ScoreMode.values());\n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(boolQuery().must(QueryBuilders.hasChildQuery(\"child\", termQuery(\"c_field\", \"blue\")).scoreMode(scoreMode)).filter(boolQuery().mustNot(termQuery(\"p_field\", \"3\"))))\n+ .setQuery(boolQuery().must(QueryBuilders.hasChildQuery(\"child\", termQuery(\"c_field\", \"blue\"), scoreMode)).filter(boolQuery().mustNot(termQuery(\"p_field\", \"3\"))))\n .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(boolQuery().must(QueryBuilders.hasChildQuery(\"child\", termQuery(\"c_field\", \"red\")).scoreMode(scoreMode)).filter(boolQuery().mustNot(termQuery(\"p_field\", \"3\"))))\n+ .setQuery(boolQuery().must(QueryBuilders.hasChildQuery(\"child\", termQuery(\"c_field\", \"red\"), scoreMode)).filter(boolQuery().mustNot(termQuery(\"p_field\", \"3\"))))\n .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(2L));\n@@ -1226,25 +1239,25 @@ public void testNamedFilters() throws Exception {\n client().prepareIndex(\"test\", \"child\", \"c1\").setSource(\"c_field\", \"1\").setParent(parentId).get();\n refresh();\n \n- SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\")).scoreMode(ScoreMode.Max).queryName(\"test\"))\n+ SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\"), ScoreMode.Max).queryName(\"test\"))\n .get();\n assertHitCount(searchResponse, 1L);\n assertThat(searchResponse.getHits().getAt(0).getMatchedQueries().length, equalTo(1));\n assertThat(searchResponse.getHits().getAt(0).getMatchedQueries()[0], equalTo(\"test\"));\n \n- searchResponse = client().prepareSearch(\"test\").setQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", \"1\")).score(true).queryName(\"test\"))\n+ searchResponse = client().prepareSearch(\"test\").setQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", \"1\"), true).queryName(\"test\"))\n .get();\n assertHitCount(searchResponse, 1L);\n assertThat(searchResponse.getHits().getAt(0).getMatchedQueries().length, equalTo(1));\n assertThat(searchResponse.getHits().getAt(0).getMatchedQueries()[0], equalTo(\"test\"));\n \n- searchResponse = client().prepareSearch(\"test\").setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\")).queryName(\"test\")))\n+ searchResponse = client().prepareSearch(\"test\").setQuery(constantScoreQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\"), ScoreMode.None).queryName(\"test\")))\n .get();\n assertHitCount(searchResponse, 1L);\n assertThat(searchResponse.getHits().getAt(0).getMatchedQueries().length, equalTo(1));\n 
assertThat(searchResponse.getHits().getAt(0).getMatchedQueries()[0], equalTo(\"test\"));\n \n- searchResponse = client().prepareSearch(\"test\").setQuery(constantScoreQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", \"1\")).queryName(\"test\")))\n+ searchResponse = client().prepareSearch(\"test\").setQuery(constantScoreQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", \"1\"), false).queryName(\"test\")))\n .get();\n assertHitCount(searchResponse, 1L);\n assertThat(searchResponse.getHits().getAt(0).getMatchedQueries().length, equalTo(1));\n@@ -1264,7 +1277,7 @@ public void testParentChildQueriesNoParentType() throws Exception {\n \n try {\n client().prepareSearch(\"test\")\n- .setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\")))\n+ .setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\"), ScoreMode.None))\n .get();\n fail();\n } catch (SearchPhaseExecutionException e) {\n@@ -1273,7 +1286,7 @@ public void testParentChildQueriesNoParentType() throws Exception {\n \n try {\n client().prepareSearch(\"test\")\n- .setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\")).scoreMode(ScoreMode.Max))\n+ .setQuery(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\"), ScoreMode.Max))\n .get();\n fail();\n } catch (SearchPhaseExecutionException e) {\n@@ -1282,7 +1295,7 @@ public void testParentChildQueriesNoParentType() throws Exception {\n \n try {\n client().prepareSearch(\"test\")\n- .setPostFilter(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\")))\n+ .setPostFilter(hasChildQuery(\"child\", termQuery(\"c_field\", \"1\"), ScoreMode.None))\n .get();\n fail();\n } catch (SearchPhaseExecutionException e) {\n@@ -1291,7 +1304,7 @@ public void testParentChildQueriesNoParentType() throws Exception {\n \n try {\n client().prepareSearch(\"test\")\n- .setQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", \"1\")).score(true))\n+ .setQuery(hasParentQuery(\"parent\", termQuery(\"p_field\", \"1\"), true))\n .get();\n fail();\n } catch (SearchPhaseExecutionException e) {\n@@ -1300,7 +1313,7 @@ public void testParentChildQueriesNoParentType() throws Exception {\n \n try {\n client().prepareSearch(\"test\")\n- .setPostFilter(hasParentQuery(\"parent\", termQuery(\"p_field\", \"1\")))\n+ .setPostFilter(hasParentQuery(\"parent\", termQuery(\"p_field\", \"1\"), false))\n .get();\n fail();\n } catch (SearchPhaseExecutionException e) {\n@@ -1360,7 +1373,7 @@ public void testParentChildCaching() throws Exception {\n for (int i = 0; i < 2; i++) {\n SearchResponse searchResponse = client().prepareSearch()\n .setQuery(boolQuery().must(matchAllQuery()).filter(boolQuery()\n- .must(QueryBuilders.hasChildQuery(\"child\", matchQuery(\"c_field\", \"red\")))\n+ .must(QueryBuilders.hasChildQuery(\"child\", matchQuery(\"c_field\", \"red\"), ScoreMode.None))\n .must(matchAllQuery())))\n .get();\n assertThat(searchResponse.getHits().totalHits(), equalTo(2L));\n@@ -1372,7 +1385,7 @@ public void testParentChildCaching() throws Exception {\n \n SearchResponse searchResponse = client().prepareSearch()\n .setQuery(boolQuery().must(matchAllQuery()).filter(boolQuery()\n- .must(QueryBuilders.hasChildQuery(\"child\", matchQuery(\"c_field\", \"red\")))\n+ .must(QueryBuilders.hasChildQuery(\"child\", matchQuery(\"c_field\", \"red\"), ScoreMode.None))\n .must(matchAllQuery())))\n .get();\n \n@@ -1392,10 +1405,10 @@ public void testParentChildQueriesViaScrollApi() throws Exception {\n refresh();\n \n QueryBuilder[] queries = new QueryBuilder[]{\n- hasChildQuery(\"child\", matchAllQuery()),\n- 
boolQuery().must(matchAllQuery()).filter(hasChildQuery(\"child\", matchAllQuery())),\n- hasParentQuery(\"parent\", matchAllQuery()),\n- boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", matchAllQuery()))\n+ hasChildQuery(\"child\", matchAllQuery(), ScoreMode.None),\n+ boolQuery().must(matchAllQuery()).filter(hasChildQuery(\"child\", matchAllQuery(), ScoreMode.None)),\n+ hasParentQuery(\"parent\", matchAllQuery(), false),\n+ boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", matchAllQuery(), false))\n };\n \n for (QueryBuilder query : queries) {\n@@ -1436,7 +1449,7 @@ public void testQueryBeforeChildType() throws Exception {\n \n SearchResponse resp;\n resp = client().prepareSearch(\"test\")\n- .setSource(new SearchSourceBuilder().query(QueryBuilders.hasChildQuery(\"posts\", QueryBuilders.matchQuery(\"field\", \"bar\"))))\n+ .setSource(new SearchSourceBuilder().query(QueryBuilders.hasChildQuery(\"posts\", QueryBuilders.matchQuery(\"field\", \"bar\"), ScoreMode.None)))\n .get();\n assertHitCount(resp, 1L);\n }\n@@ -1473,22 +1486,22 @@ public void testTypeIsAppliedInHasParentInnerQuery() throws Exception {\n indexRandom(true, indexRequests);\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(constantScoreQuery(hasParentQuery(\"parent\", boolQuery().mustNot(termQuery(\"field1\", \"a\")))))\n+ .setQuery(constantScoreQuery(hasParentQuery(\"parent\", boolQuery().mustNot(termQuery(\"field1\", \"a\")), false)))\n .get();\n assertHitCount(searchResponse, 0L);\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(hasParentQuery(\"parent\", constantScoreQuery(boolQuery().mustNot(termQuery(\"field1\", \"a\")))))\n+ .setQuery(hasParentQuery(\"parent\", constantScoreQuery(boolQuery().mustNot(termQuery(\"field1\", \"a\"))), false))\n .get();\n assertHitCount(searchResponse, 0L);\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(constantScoreQuery(hasParentQuery(\"parent\", termQuery(\"field1\", \"a\"))))\n+ .setQuery(constantScoreQuery(hasParentQuery(\"parent\", termQuery(\"field1\", \"a\"), false)))\n .get();\n assertHitCount(searchResponse, 2L);\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(hasParentQuery(\"parent\", constantScoreQuery(termQuery(\"field1\", \"a\"))))\n+ .setQuery(hasParentQuery(\"parent\", constantScoreQuery(termQuery(\"field1\", \"a\")), false))\n .get();\n assertHitCount(searchResponse, 2L);\n }\n@@ -1538,11 +1551,8 @@ private SearchResponse minMaxQuery(ScoreMode scoreMode, int minChildren, Integer\n new FunctionScoreQueryBuilder.FilterFunctionBuilder(weightFactorFunction(1)),\n new FunctionScoreQueryBuilder.FilterFunctionBuilder(QueryBuilders.termQuery(\"foo\", \"three\"), weightFactorFunction(1)),\n new FunctionScoreQueryBuilder.FilterFunctionBuilder(QueryBuilders.termQuery(\"foo\", \"four\"), weightFactorFunction(1))\n- }).boostMode(CombineFunction.REPLACE).scoreMode(FiltersFunctionScoreQuery.ScoreMode.SUM)).scoreMode(scoreMode).minChildren(minChildren);\n-\n- if (maxChildren != null) {\n- hasChildQuery.maxChildren(maxChildren);\n- }\n+ }).boostMode(CombineFunction.REPLACE).scoreMode(FiltersFunctionScoreQuery.ScoreMode.SUM), scoreMode)\n+ .minMaxChildren(minChildren, maxChildren != null ? 
maxChildren : HasChildQueryBuilder.DEFAULT_MAX_CHILDREN);\n \n return client()\n .prepareSearch(\"test\")\n@@ -1632,12 +1642,8 @@ public void testMinMaxChildren() throws Exception {\n assertThat(response.getHits().hits()[0].id(), equalTo(\"3\"));\n assertThat(response.getHits().hits()[0].score(), equalTo(1f));\n \n- try {\n- response = minMaxQuery(ScoreMode.None, 3, 2);\n- fail();\n- } catch (SearchPhaseExecutionException e) {\n- assertThat(e.toString(), containsString(\"[has_child] 'max_children' is less than 'min_children'\"));\n- }\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> minMaxQuery(ScoreMode.None, 3, 2));\n+ assertThat(e.getMessage(), equalTo(\"[has_child] 'max_children' is less than 'min_children'\"));\n \n // Score mode = SUM\n response = minMaxQuery(ScoreMode.Total, 0, null);\n@@ -1712,12 +1718,8 @@ public void testMinMaxChildren() throws Exception {\n assertThat(response.getHits().hits()[0].id(), equalTo(\"3\"));\n assertThat(response.getHits().hits()[0].score(), equalTo(3f));\n \n- try {\n- response = minMaxQuery(ScoreMode.Total, 3, 2);\n- fail();\n- } catch (SearchPhaseExecutionException e) {\n- assertThat(e.toString(), containsString(\"[has_child] 'max_children' is less than 'min_children'\"));\n- }\n+ e = expectThrows(IllegalArgumentException.class, () -> minMaxQuery(ScoreMode.Total, 3, 2));\n+ assertThat(e.getMessage(), equalTo(\"[has_child] 'max_children' is less than 'min_children'\"));\n \n // Score mode = MAX\n response = minMaxQuery(ScoreMode.Max, 0, null);\n@@ -1792,12 +1794,8 @@ public void testMinMaxChildren() throws Exception {\n assertThat(response.getHits().hits()[0].id(), equalTo(\"3\"));\n assertThat(response.getHits().hits()[0].score(), equalTo(2f));\n \n- try {\n- response = minMaxQuery(ScoreMode.Max, 3, 2);\n- fail();\n- } catch (SearchPhaseExecutionException e) {\n- assertThat(e.toString(), containsString(\"[has_child] 'max_children' is less than 'min_children'\"));\n- }\n+ e = expectThrows(IllegalArgumentException.class, () -> minMaxQuery(ScoreMode.Max, 3, 2));\n+ assertThat(e.getMessage(), equalTo(\"[has_child] 'max_children' is less than 'min_children'\"));\n \n // Score mode = AVG\n response = minMaxQuery(ScoreMode.Avg, 0, null);\n@@ -1872,12 +1870,8 @@ public void testMinMaxChildren() throws Exception {\n assertThat(response.getHits().hits()[0].id(), equalTo(\"3\"));\n assertThat(response.getHits().hits()[0].score(), equalTo(1.5f));\n \n- try {\n- response = minMaxQuery(ScoreMode.Avg, 3, 2);\n- fail();\n- } catch (SearchPhaseExecutionException e) {\n- assertThat(e.toString(), containsString(\"[has_child] 'max_children' is less than 'min_children'\"));\n- }\n+ e = expectThrows(IllegalArgumentException.class, () -> minMaxQuery(ScoreMode.Avg, 3, 2));\n+ assertThat(e.getMessage(), equalTo(\"[has_child] 'max_children' is less than 'min_children'\"));\n }\n \n public void testParentFieldToNonExistingType() {\n@@ -1888,26 +1882,21 @@ public void testParentFieldToNonExistingType() {\n \n try {\n client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.hasChildQuery(\"child\", matchAllQuery()))\n+ .setQuery(QueryBuilders.hasChildQuery(\"child\", matchAllQuery(), ScoreMode.None))\n .get();\n fail();\n } catch (SearchPhaseExecutionException e) {\n }\n }\n \n- static HasChildQueryBuilder hasChildQuery(String type, QueryBuilder queryBuilder) {\n- HasChildQueryBuilder hasChildQueryBuilder = QueryBuilders.hasChildQuery(type, queryBuilder);\n- return hasChildQueryBuilder;\n- }\n-\n public void testHasParentInnerQueryType() 
{\n assertAcked(prepareCreate(\"test\").addMapping(\"parent-type\").addMapping(\"child-type\", \"_parent\", \"type=parent-type\"));\n client().prepareIndex(\"test\", \"child-type\", \"child-id\").setParent(\"parent-id\").setSource(\"{}\").get();\n client().prepareIndex(\"test\", \"parent-type\", \"parent-id\").setSource(\"{}\").get();\n refresh();\n //make sure that when we explicitly set a type, the inner query is executed in the context of the parent type instead\n SearchResponse searchResponse = client().prepareSearch(\"test\").setTypes(\"child-type\").setQuery(\n- QueryBuilders.hasParentQuery(\"parent-type\", new IdsQueryBuilder().addIds(\"parent-id\"))).get();\n+ QueryBuilders.hasParentQuery(\"parent-type\", new IdsQueryBuilder().addIds(\"parent-id\"), false)).get();\n assertSearchHits(searchResponse, \"child-id\");\n }\n \n@@ -1918,7 +1907,7 @@ public void testHasChildInnerQueryType() {\n refresh();\n //make sure that when we explicitly set a type, the inner query is executed in the context of the child type instead\n SearchResponse searchResponse = client().prepareSearch(\"test\").setTypes(\"parent-type\").setQuery(\n- QueryBuilders.hasChildQuery(\"child-type\", new IdsQueryBuilder().addIds(\"child-id\"))).get();\n+ QueryBuilders.hasChildQuery(\"child-type\", new IdsQueryBuilder().addIds(\"child-id\"), ScoreMode.None)).get();\n assertSearchHits(searchResponse, \"parent-id\");\n }\n }", "filename": "core/src/test/java/org/elasticsearch/search/child/ChildQuerySearchIT.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.search.innerhits;\n \n+import org.apache.lucene.search.join.ScoreMode;\n import org.apache.lucene.util.ArrayUtil;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchRequest;\n@@ -118,9 +119,9 @@ public void testSimpleNested() throws Exception {\n );\n // Inner hits can be defined in two ways: 1) with the query 2) as separate inner_hit definition\n SearchRequest[] searchRequests = new SearchRequest[]{\n- client().prepareSearch(\"articles\").setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\"))\n- .innerHit(new InnerHitBuilder().setName(\"comment\"))).request(),\n- client().prepareSearch(\"articles\").setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")))\n+ client().prepareSearch(\"articles\").setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\"), ScoreMode.Avg).innerHit(\n+ new InnerHitBuilder().setName(\"comment\"))).request(),\n+ client().prepareSearch(\"articles\").setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\"), ScoreMode.Avg))\n .innerHits(innerHitsBuilder).request()\n };\n for (SearchRequest searchRequest : searchRequests) {\n@@ -148,12 +149,12 @@ public void testSimpleNested() throws Exception {\n // separate inner_hit definition\n searchRequests = new SearchRequest[] {\n client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\")))\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\"), ScoreMode.Avg))\n .innerHits(innerHitsBuilder).request(),\n client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\")).innerHit(new InnerHitBuilder().setName(\"comment\"))).request(),\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\"), ScoreMode.Avg).innerHit(new 
InnerHitBuilder().setName(\"comment\"))).request(),\n client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\")).innerHit(new InnerHitBuilder().setName(\"comment\").addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC)))).request()\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\"), ScoreMode.Avg).innerHit(new InnerHitBuilder().setName(\"comment\").addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC)))).request()\n };\n for (SearchRequest searchRequest : searchRequests) {\n SearchResponse response = client().search(searchRequest).actionGet();\n@@ -187,10 +188,10 @@ public void testSimpleNested() throws Exception {\n innerHitsBuilder.addInnerHit(\"comments\", innerHit);\n searchRequests = new SearchRequest[] {\n client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")))\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\"), ScoreMode.Avg))\n .innerHits(innerHitsBuilder).request(),\n client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\"), ScoreMode.Avg).innerHit(\n new InnerHitBuilder().setHighlightBuilder(new HighlightBuilder().field(\"comments.message\"))\n .setExplain(true)\n .addFieldDataField(\"comments.message\")\n@@ -252,14 +253,14 @@ public void testRandomNested() throws Exception {\n } else {\n BoolQueryBuilder boolQuery = new BoolQueryBuilder();\n if (randomBoolean()) {\n- boolQuery.should(nestedQuery(\"field1\", matchAllQuery()).innerHit(new InnerHitBuilder().setName(\"a\").setSize(size)\n+ boolQuery.should(nestedQuery(\"field1\", matchAllQuery(), ScoreMode.Avg).innerHit(new InnerHitBuilder().setName(\"a\").setSize(size)\n .addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC))));\n- boolQuery.should(nestedQuery(\"field2\", matchAllQuery()).innerHit(new InnerHitBuilder().setName(\"b\")\n+ boolQuery.should(nestedQuery(\"field2\", matchAllQuery(), ScoreMode.Avg).innerHit(new InnerHitBuilder().setName(\"b\")\n .addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC)).setSize(size)));\n } else {\n- boolQuery.should(constantScoreQuery(nestedQuery(\"field1\", matchAllQuery()).innerHit(new InnerHitBuilder().setName(\"a\")\n+ boolQuery.should(constantScoreQuery(nestedQuery(\"field1\", matchAllQuery(), ScoreMode.Avg).innerHit(new InnerHitBuilder().setName(\"a\")\n .setSize(size).addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC)))));\n- boolQuery.should(constantScoreQuery(nestedQuery(\"field2\", matchAllQuery()).innerHit(new InnerHitBuilder().setName(\"b\")\n+ boolQuery.should(constantScoreQuery(nestedQuery(\"field2\", matchAllQuery(), ScoreMode.Avg).innerHit(new InnerHitBuilder().setName(\"b\")\n .setSize(size).addSort(new FieldSortBuilder(\"_doc\").order(SortOrder.DESC)))));\n }\n searchResponse = client().prepareSearch(\"idx\")\n@@ -317,11 +318,11 @@ public void testSimpleParentChild() throws Exception {\n .setQuery(matchQuery(\"message\", \"fox\")));\n SearchRequest[] searchRequests = new SearchRequest[]{\n client().prepareSearch(\"articles\")\n- .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"fox\")))\n+ .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"fox\"), ScoreMode.None))\n .innerHits(innerHitsBuilder)\n .request(),\n client().prepareSearch(\"articles\")\n- 
.setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"fox\")).innerHit(new InnerHitBuilder().setName(\"comment\")))\n+ .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"fox\"), ScoreMode.None).innerHit(new InnerHitBuilder().setName(\"comment\")))\n .request()\n };\n for (SearchRequest searchRequest : searchRequests) {\n@@ -346,11 +347,11 @@ public void testSimpleParentChild() throws Exception {\n .setQuery(matchQuery(\"message\", \"elephant\")));\n searchRequests = new SearchRequest[] {\n client().prepareSearch(\"articles\")\n- .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"elephant\")))\n+ .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"elephant\"), ScoreMode.None))\n .innerHits(innerHitsBuilder)\n .request(),\n client().prepareSearch(\"articles\")\n- .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"elephant\")).innerHit(new InnerHitBuilder()))\n+ .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"elephant\"), ScoreMode.None).innerHit(new InnerHitBuilder()))\n .request()\n };\n for (SearchRequest searchRequest : searchRequests) {\n@@ -382,13 +383,13 @@ public void testSimpleParentChild() throws Exception {\n innerHitsBuilder.addInnerHit(\"comment\", innerHit);\n searchRequests = new SearchRequest[] {\n client().prepareSearch(\"articles\")\n- .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"fox\")))\n+ .setQuery(hasChildQuery(\"comment\", matchQuery(\"message\", \"fox\"), ScoreMode.None))\n .innerHits(innerHitsBuilder)\n .request(),\n \n client().prepareSearch(\"articles\")\n .setQuery(\n- hasChildQuery(\"comment\", matchQuery(\"message\", \"fox\")).innerHit(\n+ hasChildQuery(\"comment\", matchQuery(\"message\", \"fox\"), ScoreMode.None).innerHit(\n new InnerHitBuilder()\n .addFieldDataField(\"message\")\n .setHighlightBuilder(new HighlightBuilder().field(\"message\"))\n@@ -455,11 +456,11 @@ public void testRandomParentChild() throws Exception {\n } else {\n BoolQueryBuilder boolQuery = new BoolQueryBuilder();\n if (randomBoolean()) {\n- boolQuery.should(hasChildQuery(\"child1\", matchAllQuery()).innerHit(new InnerHitBuilder().setName(\"a\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size)));\n- boolQuery.should(hasChildQuery(\"child2\", matchAllQuery()).innerHit(new InnerHitBuilder().setName(\"b\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size)));\n+ boolQuery.should(hasChildQuery(\"child1\", matchAllQuery(), ScoreMode.None).innerHit(new InnerHitBuilder().setName(\"a\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size)));\n+ boolQuery.should(hasChildQuery(\"child2\", matchAllQuery(), ScoreMode.None).innerHit(new InnerHitBuilder().setName(\"b\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size)));\n } else {\n- boolQuery.should(constantScoreQuery(hasChildQuery(\"child1\", matchAllQuery()).innerHit(new InnerHitBuilder().setName(\"a\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size))));\n- boolQuery.should(constantScoreQuery(hasChildQuery(\"child2\", matchAllQuery()).innerHit(new InnerHitBuilder().setName(\"b\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size))));\n+ boolQuery.should(constantScoreQuery(hasChildQuery(\"child1\", matchAllQuery(), ScoreMode.None).innerHit(new InnerHitBuilder().setName(\"a\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size))));\n+ boolQuery.should(constantScoreQuery(hasChildQuery(\"child2\", 
matchAllQuery(), ScoreMode.None).innerHit(new InnerHitBuilder().setName(\"b\").addSort(new FieldSortBuilder(\"_uid\").order(SortOrder.ASC)).setSize(size))));\n }\n searchResponse = client().prepareSearch(\"idx\")\n .setSize(numDocs)\n@@ -523,7 +524,7 @@ public void testInnerHitsOnHasParent() throws Exception {\n .setQuery(\n boolQuery()\n .must(matchQuery(\"body\", \"fail2ban\"))\n- .must(hasParentQuery(\"question\", matchAllQuery()).innerHit(new InnerHitBuilder()))\n+ .must(hasParentQuery(\"question\", matchAllQuery(), false).innerHit(new InnerHitBuilder()))\n ).get();\n assertNoFailures(response);\n assertHitCount(response, 2);\n@@ -567,10 +568,10 @@ public void testParentChildMultipleLayers() throws Exception {\n InnerHitsBuilder innerHitsBuilder = new InnerHitsBuilder();\n innerHitsBuilder.addInnerHit(\"comment\", new InnerHitBuilder()\n .setParentChildType(\"comment\")\n- .setQuery(hasChildQuery(\"remark\", matchQuery(\"message\", \"good\")))\n+ .setQuery(hasChildQuery(\"remark\", matchQuery(\"message\", \"good\"), ScoreMode.None))\n .setInnerHitsBuilder(innerInnerHitsBuilder));\n SearchResponse response = client().prepareSearch(\"articles\")\n- .setQuery(hasChildQuery(\"comment\", hasChildQuery(\"remark\", matchQuery(\"message\", \"good\"))))\n+ .setQuery(hasChildQuery(\"comment\", hasChildQuery(\"remark\", matchQuery(\"message\", \"good\"), ScoreMode.None), ScoreMode.None))\n .innerHits(innerHitsBuilder)\n .get();\n \n@@ -596,10 +597,10 @@ public void testParentChildMultipleLayers() throws Exception {\n innerHitsBuilder = new InnerHitsBuilder();\n innerHitsBuilder.addInnerHit(\"comment\", new InnerHitBuilder()\n .setParentChildType(\"comment\")\n- .setQuery(hasChildQuery(\"remark\", matchQuery(\"message\", \"bad\")))\n+ .setQuery(hasChildQuery(\"remark\", matchQuery(\"message\", \"bad\"), ScoreMode.None))\n .setInnerHitsBuilder(innerInnerHitsBuilder));\n response = client().prepareSearch(\"articles\")\n- .setQuery(hasChildQuery(\"comment\", hasChildQuery(\"remark\", matchQuery(\"message\", \"bad\"))))\n+ .setQuery(hasChildQuery(\"comment\", hasChildQuery(\"remark\", matchQuery(\"message\", \"bad\"), ScoreMode.None), ScoreMode.None))\n .innerHits(innerHitsBuilder)\n .get();\n \n@@ -668,11 +669,11 @@ public void testNestedMultipleLayers() throws Exception {\n InnerHitsBuilder innerHitsBuilder = new InnerHitsBuilder();\n innerHitsBuilder.addInnerHit(\"comment\", new InnerHitBuilder()\n .setNestedPath(\"comments\")\n- .setQuery(nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"good\")))\n+ .setQuery(nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"good\"), ScoreMode.Avg))\n .setInnerHitsBuilder(innerInnerHitsBuilder)\n );\n SearchResponse response = client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"good\"))))\n+ .setQuery(nestedQuery(\"comments\", nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"good\"), ScoreMode.Avg), ScoreMode.Avg))\n .innerHits(innerHitsBuilder).get();\n assertNoFailures(response);\n assertHitCount(response, 1);\n@@ -695,7 +696,7 @@ public void testNestedMultipleLayers() throws Exception {\n \n // Directly refer to the second level:\n response = client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"bad\")).innerHit(new InnerHitBuilder()))\n+ .setQuery(nestedQuery(\"comments.remarks\", 
matchQuery(\"comments.remarks.message\", \"bad\"), ScoreMode.Avg).innerHit(new InnerHitBuilder()))\n .get();\n assertNoFailures(response);\n assertHitCount(response, 1);\n@@ -717,10 +718,10 @@ public void testNestedMultipleLayers() throws Exception {\n innerHitsBuilder = new InnerHitsBuilder();\n innerHitsBuilder.addInnerHit(\"comment\", new InnerHitBuilder()\n .setNestedPath(\"comments\")\n- .setQuery(nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"bad\")))\n+ .setQuery(nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"bad\"), ScoreMode.Avg))\n .setInnerHitsBuilder(innerInnerHitsBuilder));\n response = client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"bad\"))))\n+ .setQuery(nestedQuery(\"comments\", nestedQuery(\"comments.remarks\", matchQuery(\"comments.remarks.message\", \"bad\"), ScoreMode.Avg), ScoreMode.Avg))\n .innerHits(innerHitsBuilder)\n .get();\n assertNoFailures(response);\n@@ -755,7 +756,7 @@ public void testNestedDefinedAsObject() throws Exception {\n indexRandom(true, requests);\n \n SearchResponse response = client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(new InnerHitBuilder()))\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\"), ScoreMode.Avg).innerHit(new InnerHitBuilder()))\n .get();\n assertNoFailures(response);\n assertHitCount(response, 1);\n@@ -795,7 +796,7 @@ public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception {\n indexRandom(true, requests);\n \n SearchResponse response = client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"fox\")).innerHit(new InnerHitBuilder()))\n+ .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"fox\"), ScoreMode.Avg).innerHit(new InnerHitBuilder()))\n .get();\n assertNoFailures(response);\n assertHitCount(response, 1);\n@@ -807,7 +808,7 @@ public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception {\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild(), nullValue());\n \n response = client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"bear\")).innerHit(new InnerHitBuilder()))\n+ .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"bear\"), ScoreMode.Avg).innerHit(new InnerHitBuilder()))\n .get();\n assertNoFailures(response);\n assertHitCount(response, 1);\n@@ -826,7 +827,7 @@ public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception {\n .endObject()));\n indexRandom(true, requests);\n response = client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"fox\")).innerHit(new InnerHitBuilder()))\n+ .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"fox\"), ScoreMode.Avg).innerHit(new InnerHitBuilder()))\n .get();\n assertNoFailures(response);\n assertHitCount(response, 1);\n@@ -987,8 +988,8 @@ public void testMatchesQueriesNestedInnerHits() throws Exception {\n .setQuery(nestedQuery(\"nested1\", boolQuery()\n .should(termQuery(\"nested1.n_field1\", \"n_value1_1\").queryName(\"test1\"))\n 
.should(termQuery(\"nested1.n_field1\", \"n_value1_3\").queryName(\"test2\"))\n- .should(termQuery(\"nested1.n_field2\", \"n_value2_2\").queryName(\"test3\"))\n- ).innerHit(new InnerHitBuilder().addSort(new FieldSortBuilder(\"nested1.n_field1\").order(SortOrder.ASC))))\n+ .should(termQuery(\"nested1.n_field2\", \"n_value2_2\").queryName(\"test3\")),\n+ ScoreMode.Avg).innerHit(new InnerHitBuilder().addSort(new FieldSortBuilder(\"nested1.n_field1\").order(SortOrder.ASC))))\n .setSize(numDocs)\n .addSort(\"field1\", SortOrder.ASC)\n .get();\n@@ -1027,7 +1028,7 @@ public void testMatchesQueriesParentChildInnerHits() throws Exception {\n indexRandom(true, requests);\n \n SearchResponse response = client().prepareSearch(\"index\")\n- .setQuery(hasChildQuery(\"child\", matchQuery(\"field\", \"value1\").queryName(\"_name1\")).innerHit(new InnerHitBuilder()))\n+ .setQuery(hasChildQuery(\"child\", matchQuery(\"field\", \"value1\").queryName(\"_name1\"), ScoreMode.None).innerHit(new InnerHitBuilder()))\n .addSort(\"_uid\", SortOrder.ASC)\n .get();\n assertHitCount(response, 2);\n@@ -1042,7 +1043,7 @@ public void testMatchesQueriesParentChildInnerHits() throws Exception {\n assertThat(response.getHits().getAt(1).getInnerHits().get(\"child\").getAt(0).getMatchedQueries()[0], equalTo(\"_name1\"));\n \n response = client().prepareSearch(\"index\")\n- .setQuery(hasChildQuery(\"child\", matchQuery(\"field\", \"value2\").queryName(\"_name2\")).innerHit(new InnerHitBuilder()))\n+ .setQuery(hasChildQuery(\"child\", matchQuery(\"field\", \"value2\").queryName(\"_name2\"), ScoreMode.None).innerHit(new InnerHitBuilder()))\n .addSort(\"_uid\", SortOrder.ASC)\n .get();\n assertHitCount(response, 1);\n@@ -1060,7 +1061,7 @@ public void testDontExplode() throws Exception {\n indexRandom(true, requests);\n \n SearchResponse response = client().prepareSearch(\"index1\")\n- .setQuery(hasChildQuery(\"child\", matchQuery(\"field\", \"value1\")).innerHit(new InnerHitBuilder().setSize(ArrayUtil.MAX_ARRAY_LENGTH - 1)))\n+ .setQuery(hasChildQuery(\"child\", matchQuery(\"field\", \"value1\"), ScoreMode.None).innerHit(new InnerHitBuilder().setSize(ArrayUtil.MAX_ARRAY_LENGTH - 1)))\n .addSort(\"_uid\", SortOrder.ASC)\n .get();\n assertNoFailures(response);\n@@ -1078,7 +1079,7 @@ public void testDontExplode() throws Exception {\n .get();\n \n response = client().prepareSearch(\"index2\")\n- .setQuery(nestedQuery(\"nested\", matchQuery(\"nested.field\", \"value1\")).innerHit(new InnerHitBuilder().setSize(ArrayUtil.MAX_ARRAY_LENGTH - 1)))\n+ .setQuery(nestedQuery(\"nested\", matchQuery(\"nested.field\", \"value1\"), ScoreMode.Avg).innerHit(new InnerHitBuilder().setSize(ArrayUtil.MAX_ARRAY_LENGTH - 1)))\n .addSort(\"_uid\", SortOrder.ASC)\n .get();\n assertNoFailures(response);\n@@ -1097,7 +1098,7 @@ public void testTopLevelInnerHitsWithQueryInnerHits() throws Exception {\n InnerHitsBuilder innerHitsBuilder = new InnerHitsBuilder();\n innerHitsBuilder.addInnerHit(\"my-inner-hit\", new InnerHitBuilder().setParentChildType(\"child\"));\n SearchResponse response = client().prepareSearch(\"index1\")\n- .setQuery(hasChildQuery(\"child\", new MatchAllQueryBuilder()).innerHit(new InnerHitBuilder()))\n+ .setQuery(hasChildQuery(\"child\", new MatchAllQueryBuilder(), ScoreMode.None).innerHit(new InnerHitBuilder()))\n .innerHits(innerHitsBuilder)\n .get();\n assertHitCount(response, 1);", "filename": "core/src/test/java/org/elasticsearch/search/innerhits/InnerHitsIT.java", "status": "modified" }, { "diff": "@@ -100,11 +100,11 @@ public void 
testSimpleNested() throws Exception {\n assertThat(searchResponse.getHits().totalHits(), equalTo(0L));\n \n // now, do a nested query\n- searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\", termQuery(\"nested1.n_field1\", \"n_value1_1\"))).get();\n+ searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\", termQuery(\"nested1.n_field1\", \"n_value1_1\"), ScoreMode.Avg)).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n \n- searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\", termQuery(\"nested1.n_field1\", \"n_value1_1\"))).setSearchType(SearchType.DFS_QUERY_THEN_FETCH).get();\n+ searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\", termQuery(\"nested1.n_field1\", \"n_value1_1\"), ScoreMode.Avg)).setSearchType(SearchType.DFS_QUERY_THEN_FETCH).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n \n@@ -129,19 +129,19 @@ public void testSimpleNested() throws Exception {\n assertDocumentCount(\"test\", 6);\n \n searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\",\n- boolQuery().must(termQuery(\"nested1.n_field1\", \"n_value1_1\")).must(termQuery(\"nested1.n_field2\", \"n_value2_1\")))).execute().actionGet();\n+ boolQuery().must(termQuery(\"nested1.n_field1\", \"n_value1_1\")).must(termQuery(\"nested1.n_field2\", \"n_value2_1\")), ScoreMode.Avg)).execute().actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n \n // filter\n searchResponse = client().prepareSearch(\"test\").setQuery(boolQuery().must(matchAllQuery()).mustNot(nestedQuery(\"nested1\",\n- boolQuery().must(termQuery(\"nested1.n_field1\", \"n_value1_1\")).must(termQuery(\"nested1.n_field2\", \"n_value2_1\"))))).execute().actionGet();\n+ boolQuery().must(termQuery(\"nested1.n_field1\", \"n_value1_1\")).must(termQuery(\"nested1.n_field2\", \"n_value2_1\")), ScoreMode.Avg))).execute().actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n \n // check with type prefix\n searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\",\n- boolQuery().must(termQuery(\"nested1.n_field1\", \"n_value1_1\")).must(termQuery(\"nested1.n_field2\", \"n_value2_1\")))).execute().actionGet();\n+ boolQuery().must(termQuery(\"nested1.n_field1\", \"n_value1_1\")).must(termQuery(\"nested1.n_field2\", \"n_value2_1\")), ScoreMode.Avg)).execute().actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n \n@@ -153,11 +153,11 @@ public void testSimpleNested() throws Exception {\n flush();\n assertDocumentCount(\"test\", 3);\n \n- searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\", termQuery(\"nested1.n_field1\", \"n_value1_1\"))).execute().actionGet();\n+ searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\", termQuery(\"nested1.n_field1\", \"n_value1_1\"), ScoreMode.Avg)).execute().actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n \n- searchResponse = client().prepareSearch(\"test\").setTypes(\"type1\", \"type2\").setQuery(nestedQuery(\"nested1\", termQuery(\"nested1.n_field1\", \"n_value1_1\"))).execute().actionGet();\n+ searchResponse = 
client().prepareSearch(\"test\").setTypes(\"type1\", \"type2\").setQuery(nestedQuery(\"nested1\", termQuery(\"nested1.n_field1\", \"n_value1_1\"), ScoreMode.Avg)).execute().actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n }\n@@ -191,42 +191,42 @@ public void testMultiNested() throws Exception {\n \n // do some multi nested queries\n SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\",\n- termQuery(\"nested1.field1\", \"1\"))).execute().actionGet();\n+ termQuery(\"nested1.field1\", \"1\"), ScoreMode.Avg)).execute().actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n \n searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1.nested2\",\n- termQuery(\"nested1.nested2.field2\", \"2\"))).execute().actionGet();\n+ termQuery(\"nested1.nested2.field2\", \"2\"), ScoreMode.Avg)).execute().actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n \n searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\",\n- boolQuery().must(termQuery(\"nested1.field1\", \"1\")).must(nestedQuery(\"nested1.nested2\", termQuery(\"nested1.nested2.field2\", \"2\"))))).execute().actionGet();\n+ boolQuery().must(termQuery(\"nested1.field1\", \"1\")).must(nestedQuery(\"nested1.nested2\", termQuery(\"nested1.nested2.field2\", \"2\"), ScoreMode.Avg)), ScoreMode.Avg)).execute().actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n \n searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\",\n- boolQuery().must(termQuery(\"nested1.field1\", \"1\")).must(nestedQuery(\"nested1.nested2\", termQuery(\"nested1.nested2.field2\", \"3\"))))).execute().actionGet();\n+ boolQuery().must(termQuery(\"nested1.field1\", \"1\")).must(nestedQuery(\"nested1.nested2\", termQuery(\"nested1.nested2.field2\", \"3\"), ScoreMode.Avg)), ScoreMode.Avg)).execute().actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n \n searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\",\n- boolQuery().must(termQuery(\"nested1.field1\", \"1\")).must(nestedQuery(\"nested1.nested2\", termQuery(\"nested1.nested2.field2\", \"4\"))))).execute().actionGet();\n+ boolQuery().must(termQuery(\"nested1.field1\", \"1\")).must(nestedQuery(\"nested1.nested2\", termQuery(\"nested1.nested2.field2\", \"4\"), ScoreMode.Avg)), ScoreMode.Avg)).execute().actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(0L));\n \n searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\",\n- boolQuery().must(termQuery(\"nested1.field1\", \"1\")).must(nestedQuery(\"nested1.nested2\", termQuery(\"nested1.nested2.field2\", \"5\"))))).execute().actionGet();\n+ boolQuery().must(termQuery(\"nested1.field1\", \"1\")).must(nestedQuery(\"nested1.nested2\", termQuery(\"nested1.nested2.field2\", \"5\"), ScoreMode.Avg)), ScoreMode.Avg)).execute().actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(0L));\n \n searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\",\n- boolQuery().must(termQuery(\"nested1.field1\", \"4\")).must(nestedQuery(\"nested1.nested2\", termQuery(\"nested1.nested2.field2\", 
\"5\"))))).execute().actionGet();\n+ boolQuery().must(termQuery(\"nested1.field1\", \"4\")).must(nestedQuery(\"nested1.nested2\", termQuery(\"nested1.nested2.field2\", \"5\"), ScoreMode.Avg)), ScoreMode.Avg)).execute().actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1L));\n \n searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\",\n- boolQuery().must(termQuery(\"nested1.field1\", \"4\")).must(nestedQuery(\"nested1.nested2\", termQuery(\"nested1.nested2.field2\", \"2\"))))).execute().actionGet();\n+ boolQuery().must(termQuery(\"nested1.field1\", \"4\")).must(nestedQuery(\"nested1.nested2\", termQuery(\"nested1.nested2.field2\", \"2\"), ScoreMode.Avg)), ScoreMode.Avg)).execute().actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(0L));\n }\n@@ -310,7 +310,7 @@ public void testExplain() throws Exception {\n .execute().actionGet();\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(nestedQuery(\"nested1\", termQuery(\"nested1.n_field1\", \"n_value1\")).scoreMode(ScoreMode.Total))\n+ .setQuery(nestedQuery(\"nested1\", termQuery(\"nested1.n_field1\", \"n_value1\"), ScoreMode.Total))\n .setExplain(true)\n .execute().actionGet();\n assertNoFailures(searchResponse);\n@@ -988,7 +988,7 @@ public void testNestedSortingWithNestedFilterAsFilter() throws Exception {\n .addSort(SortBuilders.fieldSort(\"users.first\")\n .order(SortOrder.ASC)\n .setNestedPath(\"users\")\n- .setNestedFilter(nestedQuery(\"users.workstations\", termQuery(\"users.workstations.stationid\", \"s5\"))))\n+ .setNestedFilter(nestedQuery(\"users.workstations\", termQuery(\"users.workstations.stationid\", \"s5\"), ScoreMode.Avg)))\n .get();\n assertNoFailures(searchResponse);\n assertHitCount(searchResponse, 2);\n@@ -1044,7 +1044,7 @@ public void testCheckFixedBitSetCache() throws Exception {\n \n // only when querying with nested the fixed bitsets are loaded\n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(nestedQuery(\"array1\", termQuery(\"array1.field1\", \"value1\")))\n+ .setQuery(nestedQuery(\"array1\", termQuery(\"array1.field1\", \"value1\"), ScoreMode.Avg))\n .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(5L));", "filename": "core/src/test/java/org/elasticsearch/search/nested/SimpleNestedIT.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.search.query;\n \n+import org.apache.lucene.search.join.ScoreMode;\n import org.apache.lucene.util.English;\n import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n@@ -1773,7 +1774,7 @@ public void testIndicesQuerySkipParsing() throws Exception {\n //has_child fails if executed on \"simple\" index\n try {\n client().prepareSearch(\"simple\")\n- .setQuery(hasChildQuery(\"child\", matchQuery(\"text\", \"value\"))).get();\n+ .setQuery(hasChildQuery(\"child\", matchQuery(\"text\", \"value\"), ScoreMode.None)).get();\n fail(\"Should have failed as has_child query can only be executed against parent-child types\");\n } catch (SearchPhaseExecutionException e) {\n assertThat(e.shardFailures().length, greaterThan(0));\n@@ -1784,7 +1785,7 @@ public void testIndicesQuerySkipParsing() throws Exception {\n \n //has_child doesn't get parsed for \"simple\" index\n SearchResponse searchResponse = 
client().prepareSearch(\"related\", \"simple\")\n- .setQuery(indicesQuery(hasChildQuery(\"child\", matchQuery(\"text\", \"value2\")), \"related\")\n+ .setQuery(indicesQuery(hasChildQuery(\"child\", matchQuery(\"text\", \"value2\"), ScoreMode.None), \"related\")\n .noMatchQuery(matchQuery(\"text\", \"value1\"))).get();\n assertHitCount(searchResponse, 2L);\n assertSearchHits(searchResponse, \"1\", \"2\");", "filename": "core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java", "status": "modified" }, { "diff": "@@ -433,7 +433,7 @@ public void testBulkUpdateDocAsUpsertWithParent() throws Exception {\n \n //we check that the _parent field was set on the child document by using the has parent query\n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.hasParentQuery(\"parent\", QueryBuilders.matchAllQuery()))\n+ .setQuery(QueryBuilders.hasParentQuery(\"parent\", QueryBuilders.matchAllQuery(), false))\n .get();\n \n assertNoFailures(searchResponse);\n@@ -468,7 +468,7 @@ public void testBulkUpdateUpsertWithParent() throws Exception {\n client().admin().indices().prepareRefresh(\"test\").get();\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.hasParentQuery(\"parent\", QueryBuilders.matchAllQuery()))\n+ .setQuery(QueryBuilders.hasParentQuery(\"parent\", QueryBuilders.matchAllQuery(), false))\n .get();\n \n assertSearchHits(searchResponse, \"child1\");", "filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/BulkTests.java", "status": "modified" }, { "diff": "@@ -102,8 +102,8 @@ private void createParentChildDocs(String indexName) throws Exception {\n .setSource(\"foo\", \"bar\").setRouting(\"united states\"));\n \n findsCountry = idsQuery(\"country\").addIds(\"united states\");\n- findsCity = hasParentQuery(\"country\", findsCountry);\n- findsNeighborhood = hasParentQuery(\"city\", findsCity);\n+ findsCity = hasParentQuery(\"country\", findsCountry, false);\n+ findsNeighborhood = hasParentQuery(\"city\", findsCity, false);\n \n // Make sure we built the parent/child relationship\n assertSearchHits(client().prepareSearch(indexName).setQuery(findsCity).get(), \"pittsburgh\");", "filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexParentChildTests.java", "status": "modified" } ] }
{ "body": "Query string queries ignore the default operator on terms split from an analyzed prefix query.\n\n``` sh\ncurl -XGET 'localhost:9200/test/doc/_search?pretty=true' -d '{\n \"query\": {\n \"query_string\": {\n \"query\": \"apples-oranges*\",\n \"default_operator\": \"and\",\n \"analyze_wildcard\": true\n }\n }\n}'\n```\n\nThe above will search for apples OR oranges, even though the default operator is set to AND.\n\nThis gist will reproduce the bug, showing that the above query will match a document with just \"apples\" or just \"oranges\":\nhttps://gist.github.com/3375594\n", "comments": [ { "body": "The problem can be seen in [line 533 of MapperQueryParser.java](https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/apache/lucene/queryParser/MapperQueryParser.java#L533) which creates a boolean query out of the analyzed terms using a hardcoded `BooleanClause.Occur.SHOULD`. The occur should be changed to MUST when the default_operator is AND.\n", "created_at": "2012-08-17T04:11:20Z" }, { "body": "A workaround for this issue is to use `\"minimum_should_match\": \"100%\"`\n", "created_at": "2012-08-18T05:41:43Z" }, { "body": "I am not too sure what the default value should be, I see cases where actually using `should` regardless of what you set in the operator make sense..., trying to think whats best to do here..., possibly another flag?\n", "created_at": "2012-08-21T12:44:08Z" }, { "body": "I think that stacked tokens (tokens in the same position) should be OR'ed, while tokens in different positions should be AND'ed.\n", "created_at": "2014-07-25T07:15:31Z" }, { "body": "In fact the wildcard should only apply to the term (or terms) in the last position.\n", "created_at": "2014-07-25T08:09:25Z" } ], "number": 2183, "title": "Analyzed wildcard always uses OR operator on split terms" }
{ "body": "- Tokens in the same position are grouped in a SynonymQuery.\n- The default operator is applied on tokens in different positions.\n- The wildcard is applied to the terms in the last position only.\n\nFixes #2183\n", "number": 17711, "review_comments": [ { "body": "no need for `? true : false`\n", "created_at": "2016-04-13T12:47:37Z" }, { "body": "if all sub queries are term queries with the same boost, it would be nice to build a SynonymQuery instead\n", "created_at": "2016-04-13T12:55:29Z" }, { "body": "sure, thanks\n", "created_at": "2016-04-13T12:57:41Z" }, { "body": "why the toLowerCase?\n", "created_at": "2016-04-13T12:58:47Z" }, { "body": "I am fine with doing it in a follow-up PR if that works better for you\n", "created_at": "2016-04-13T12:59:42Z" }, { "body": "Simple copy/paste ;) This is not needed anyway and I guess it was there because this function is supposed to check queries generated from an analyzer. I'll remove it.\n", "created_at": "2016-04-13T13:25:22Z" }, { "body": "Good idea, I'll add it.\n", "created_at": "2016-04-13T13:25:25Z" }, { "body": "Then maybe we can just do `assertEquals(new PrefixQuery(new Term(field, value)), query)`?\n", "created_at": "2016-04-13T13:27:00Z" }, { "body": "Oups I forgot to update my intellij configuration.\n", "created_at": "2016-04-13T13:35:46Z" }, { "body": "beware that wildcard imports will cause the build to fail\n", "created_at": "2016-04-13T13:35:47Z" } ], "title": "Apply the default operator on analyzed wildcard in query_string builder:" }
{ "commits": [ { "message": "Apply the default operator on analyzed wildcard in query_string builder:\n * Tokens in the same position are grouped into a SynonymQuery..\n * The default operator is applied on tokens in different positions.\n * The wildcard is applied to the terms in the last position only.\nFixes #2183" } ], "files": [ { "diff": "@@ -14,7 +14,6 @@\n files start to pass. -->\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]apache[/\\\\]lucene[/\\\\]queries[/\\\\]BlendedTermQuery.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]apache[/\\\\]lucene[/\\\\]queries[/\\\\]ExtendedCommonTermsQuery.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]apache[/\\\\]lucene[/\\\\]queryparser[/\\\\]classic[/\\\\]MapperQueryParser.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]apache[/\\\\]lucene[/\\\\]search[/\\\\]postingshighlight[/\\\\]CustomPostingsHighlighter.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]apache[/\\\\]lucene[/\\\\]search[/\\\\]vectorhighlight[/\\\\]CustomFieldQuery.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]Version.java\" checks=\"LineLength\" />\n@@ -1121,7 +1120,6 @@\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]HasParentQueryBuilderTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]MoreLikeThisQueryBuilderTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]MultiMatchQueryBuilderTests.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]QueryStringQueryBuilderTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]RandomQueryBuilder.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]RangeQueryBuilderTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]ScoreModeTests.java\" checks=\"LineLength\" />", "filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import org.apache.lucene.analysis.Analyzer;\n import org.apache.lucene.analysis.TokenStream;\n import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;\n+import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.BooleanQuery;\n@@ -32,6 +33,7 @@\n import org.apache.lucene.search.MultiPhraseQuery;\n import org.apache.lucene.search.PhraseQuery;\n import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.SynonymQuery;\n import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.automaton.RegExp;\n import org.elasticsearch.common.lucene.search.Queries;\n@@ -105,7 +107,8 @@ public void 
reset(QueryParserSettings settings) {\n }\n \n /**\n- * We override this one so we can get the fuzzy part to be treated as string, so people can do: \"age:10~5\" or \"timestamp:2012-10-10~5d\"\n+ * We override this one so we can get the fuzzy part to be treated as string,\n+ * so people can do: \"age:10~5\" or \"timestamp:2012-10-10~5d\"\n */\n @Override\n Query handleBareFuzzy(String qfield, Token fuzzySlop, String termImage) throws ParseException {\n@@ -277,7 +280,8 @@ protected Query getFieldQuery(String field, String queryText, int slop) throws P\n }\n \n @Override\n- protected Query getRangeQuery(String field, String part1, String part2, boolean startInclusive, boolean endInclusive) throws ParseException {\n+ protected Query getRangeQuery(String field, String part1, String part2,\n+ boolean startInclusive, boolean endInclusive) throws ParseException {\n if (\"*\".equals(part1)) {\n part1 = null;\n }\n@@ -324,7 +328,8 @@ protected Query getRangeQuery(String field, String part1, String part2, boolean\n }\n }\n \n- private Query getRangeQuerySingle(String field, String part1, String part2, boolean startInclusive, boolean endInclusive) {\n+ private Query getRangeQuerySingle(String field, String part1, String part2,\n+ boolean startInclusive, boolean endInclusive) {\n currentFieldType = context.fieldMapper(field);\n if (currentFieldType != null) {\n if (lowercaseExpandedTerms && currentFieldType.tokenized()) {\n@@ -335,8 +340,10 @@ private Query getRangeQuerySingle(String field, String part1, String part2, bool\n try {\n Query rangeQuery;\n if (currentFieldType instanceof DateFieldMapper.DateFieldType && settings.timeZone() != null) {\n- DateFieldMapper.DateFieldType dateFieldType = (DateFieldMapper.DateFieldType) this.currentFieldType;\n- rangeQuery = dateFieldType.rangeQuery(part1, part2, startInclusive, endInclusive, settings.timeZone(), null);\n+ DateFieldMapper.DateFieldType dateFieldType =\n+ (DateFieldMapper.DateFieldType) this.currentFieldType;\n+ rangeQuery = dateFieldType.rangeQuery(part1, part2, startInclusive, endInclusive,\n+ settings.timeZone(), null);\n } else {\n rangeQuery = currentFieldType.rangeQuery(part1, part2, startInclusive, endInclusive);\n }\n@@ -393,7 +400,8 @@ private Query getFuzzyQuerySingle(String field, String termStr, String minSimila\n currentFieldType = context.fieldMapper(field);\n if (currentFieldType != null) {\n try {\n- return currentFieldType.fuzzyQuery(termStr, Fuzziness.build(minSimilarity), fuzzyPrefixLength, settings.fuzzyMaxExpansions(), FuzzyQuery.defaultTranspositions);\n+ return currentFieldType.fuzzyQuery(termStr, Fuzziness.build(minSimilarity),\n+ fuzzyPrefixLength, settings.fuzzyMaxExpansions(), FuzzyQuery.defaultTranspositions);\n } catch (RuntimeException e) {\n if (settings.lenient()) {\n return null;\n@@ -408,7 +416,8 @@ private Query getFuzzyQuerySingle(String field, String termStr, String minSimila\n protected Query newFuzzyQuery(Term term, float minimumSimilarity, int prefixLength) {\n String text = term.text();\n int numEdits = FuzzyQuery.floatToEdits(minimumSimilarity, text.codePointCount(0, text.length()));\n- FuzzyQuery query = new FuzzyQuery(term, numEdits, prefixLength, settings.fuzzyMaxExpansions(), FuzzyQuery.defaultTranspositions);\n+ FuzzyQuery query = new FuzzyQuery(term, numEdits, prefixLength,\n+ settings.fuzzyMaxExpansions(), FuzzyQuery.defaultTranspositions);\n QueryParsers.setRewriteMethod(query, settings.fuzzyRewriteMethod());\n return query;\n }\n@@ -487,7 +496,7 @@ private Query 
getPossiblyAnalyzedPrefixQuery(String field, String termStr) throw\n if (!settings.analyzeWildcard()) {\n return super.getPrefixQuery(field, termStr);\n }\n- List<String> tlist;\n+ List<List<String> > tlist;\n // get Analyzer from superclass and tokenize the term\n TokenStream source = null;\n try {\n@@ -498,31 +507,66 @@ private Query getPossiblyAnalyzedPrefixQuery(String field, String termStr) throw\n return super.getPrefixQuery(field, termStr);\n }\n tlist = new ArrayList<>();\n+ List<String> currentPos = new ArrayList<>();\n CharTermAttribute termAtt = source.addAttribute(CharTermAttribute.class);\n+ PositionIncrementAttribute posAtt = source.addAttribute(PositionIncrementAttribute.class);\n \n while (true) {\n try {\n if (!source.incrementToken()) break;\n } catch (IOException e) {\n break;\n }\n- tlist.add(termAtt.toString());\n+ if (currentPos.isEmpty() == false && posAtt.getPositionIncrement() > 0) {\n+ tlist.add(currentPos);\n+ currentPos = new ArrayList<>();\n+ }\n+ currentPos.add(termAtt.toString());\n+ }\n+ if (currentPos.isEmpty() == false) {\n+ tlist.add(currentPos);\n }\n } finally {\n if (source != null) {\n IOUtils.closeWhileHandlingException(source);\n }\n }\n \n- if (tlist.size() == 1) {\n- return super.getPrefixQuery(field, tlist.get(0));\n+\n+ if (tlist.size() == 1 && tlist.get(0).size() == 1) {\n+ return super.getPrefixQuery(field, tlist.get(0).get(0));\n } else {\n- // build a boolean query with prefix on each one...\n+ // build a boolean query with prefix on the last position only.\n List<BooleanClause> clauses = new ArrayList<>();\n- for (String token : tlist) {\n- clauses.add(new BooleanClause(super.getPrefixQuery(field, token), BooleanClause.Occur.SHOULD));\n+ for (int pos = 0; pos < tlist.size(); pos++) {\n+ List<String> plist = tlist.get(pos);\n+ boolean isLastPos = (pos == tlist.size()-1);\n+ Query posQuery;\n+ if (plist.size() == 1) {\n+ if (isLastPos) {\n+ posQuery = getPrefixQuery(field, plist.get(0));\n+ } else {\n+ posQuery = newTermQuery(new Term(field, plist.get(0)));\n+ }\n+ } else if (isLastPos == false) {\n+ // build a synonym query for terms in the same position.\n+ Term[] terms = new Term[plist.size()];\n+ for (int i = 0; i < plist.size(); i++) {\n+ terms[i] = new Term(field, plist.get(i));\n+ }\n+ posQuery = new SynonymQuery(terms);\n+ } else {\n+ List<BooleanClause> innerClauses = new ArrayList<>();\n+ for (String token : plist) {\n+ innerClauses.add(new BooleanClause(getPrefixQuery(field, token),\n+ BooleanClause.Occur.SHOULD));\n+ }\n+ posQuery = getBooleanQueryCoordDisabled(innerClauses);\n+ }\n+ clauses.add(new BooleanClause(posQuery,\n+ getDefaultOperator() == Operator.AND ? 
BooleanClause.Occur.MUST : BooleanClause.Occur.SHOULD));\n }\n- return getBooleanQueryCoordDisabled(clauses);\n+ return getBooleanQuery(clauses);\n }\n }\n \n@@ -724,7 +768,8 @@ private Query getRegexpQuerySingle(String field, String termStr) throws ParseExc\n }\n Query query = null;\n if (currentFieldType.tokenized() == false) {\n- query = currentFieldType.regexpQuery(termStr, RegExp.ALL, maxDeterminizedStates, multiTermRewriteMethod, context);\n+ query = currentFieldType.regexpQuery(termStr, RegExp.ALL,\n+ maxDeterminizedStates, multiTermRewriteMethod, context);\n }\n if (query == null) {\n query = super.getRegexpQuery(field, termStr);\n@@ -741,7 +786,7 @@ private Query getRegexpQuerySingle(String field, String termStr) throws ParseExc\n setAnalyzer(oldAnalyzer);\n }\n }\n- \n+\n /**\n * @deprecated review all use of this, don't rely on coord\n */", "filename": "core/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import com.fasterxml.jackson.core.io.JsonStringEncoder;\n \n import org.apache.lucene.search.BoostQuery;\n+import org.apache.lucene.search.PrefixQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.search.spans.SpanBoostQuery;\n@@ -637,6 +638,13 @@ protected static void assertTermQuery(Query query, String field, String value) {\n assertThat(termQuery.getTerm().text().toLowerCase(Locale.ROOT), equalTo(value.toLowerCase(Locale.ROOT)));\n }\n \n+ protected static void assertPrefixQuery(Query query, String field, String value) {\n+ assertThat(query, instanceOf(PrefixQuery.class));\n+ PrefixQuery prefixQuery = (PrefixQuery) query;\n+ assertThat(prefixQuery.getPrefix().field(), equalTo(field));\n+ assertThat(prefixQuery.getPrefix().text(), equalTo(value));\n+ }\n+\n /**\n * Test serialization and deserialization of the test query.\n */", "filename": "core/src/test/java/org/elasticsearch/index/query/AbstractQueryTestCase.java", "status": "modified" }, { "diff": "@@ -58,7 +58,8 @@ protected QueryStringQueryBuilder doCreateTestQueryBuilder() {\n }\n QueryStringQueryBuilder queryStringQueryBuilder = new QueryStringQueryBuilder(query);\n if (randomBoolean()) {\n- queryStringQueryBuilder.defaultField(randomBoolean() ? 
STRING_FIELD_NAME : randomAsciiOfLengthBetween(1, 10));\n+ queryStringQueryBuilder.defaultField(randomBoolean() ?\n+ STRING_FIELD_NAME : randomAsciiOfLengthBetween(1, 10));\n }\n if (randomBoolean()) {\n int numFields = randomIntBetween(1, 5);\n@@ -145,7 +146,8 @@ protected QueryStringQueryBuilder doCreateTestQueryBuilder() {\n }\n \n @Override\n- protected void doAssertLuceneQuery(QueryStringQueryBuilder queryBuilder, Query query, QueryShardContext context) throws IOException {\n+ protected void doAssertLuceneQuery(QueryStringQueryBuilder queryBuilder,\n+ Query query, QueryShardContext context) throws IOException {\n if (\"\".equals(queryBuilder.queryString())) {\n assertThat(query, instanceOf(MatchNoDocsQuery.class));\n } else {\n@@ -173,7 +175,10 @@ public void testToQueryTermQuery() throws IOException {\n \n public void testToQueryPhraseQuery() throws IOException {\n assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n- Query query = queryStringQuery(\"\\\"term1 term2\\\"\").defaultField(STRING_FIELD_NAME).phraseSlop(3).toQuery(createShardContext());\n+ Query query = queryStringQuery(\"\\\"term1 term2\\\"\")\n+ .defaultField(STRING_FIELD_NAME)\n+ .phraseSlop(3)\n+ .toQuery(createShardContext());\n assertThat(query, instanceOf(DisjunctionMaxQuery.class));\n DisjunctionMaxQuery disjunctionMaxQuery = (DisjunctionMaxQuery) query;\n assertThat(disjunctionMaxQuery.getDisjuncts().size(), equalTo(1));\n@@ -204,7 +209,8 @@ public void testToQueryBoosts() throws Exception {\n boostQuery = (BoostQuery) boostQuery.getQuery();\n assertThat(boostQuery.getBoost(), equalTo(2.0f));\n \n- queryStringQuery = queryStringQuery(\"((\" + STRING_FIELD_NAME + \":boosted^2) AND (\" + STRING_FIELD_NAME + \":foo^1.5))^3\");\n+ queryStringQuery =\n+ queryStringQuery(\"((\" + STRING_FIELD_NAME + \":boosted^2) AND (\" + STRING_FIELD_NAME + \":foo^1.5))^3\");\n query = queryStringQuery.toQuery(shardContext);\n assertThat(query, instanceOf(BoostQuery.class));\n boostQuery = (BoostQuery) query;\n@@ -226,27 +232,38 @@ public void testToQueryBoosts() throws Exception {\n \n public void testToQueryMultipleTermsBooleanQuery() throws Exception {\n assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n- Query query = queryStringQuery(\"test1 test2\").field(STRING_FIELD_NAME).useDisMax(false).toQuery(createShardContext());\n+ Query query = queryStringQuery(\"test1 test2\").field(STRING_FIELD_NAME)\n+ .useDisMax(false)\n+ .toQuery(createShardContext());\n assertThat(query, instanceOf(BooleanQuery.class));\n BooleanQuery bQuery = (BooleanQuery) query;\n assertThat(bQuery.clauses().size(), equalTo(2));\n- assertThat(assertBooleanSubQuery(query, TermQuery.class, 0).getTerm(), equalTo(new Term(STRING_FIELD_NAME, \"test1\")));\n- assertThat(assertBooleanSubQuery(query, TermQuery.class, 1).getTerm(), equalTo(new Term(STRING_FIELD_NAME, \"test2\")));\n+ assertThat(assertBooleanSubQuery(query, TermQuery.class, 0).getTerm(),\n+ equalTo(new Term(STRING_FIELD_NAME, \"test1\")));\n+ assertThat(assertBooleanSubQuery(query, TermQuery.class, 1).getTerm(),\n+ equalTo(new Term(STRING_FIELD_NAME, \"test2\")));\n }\n \n public void testToQueryMultipleFieldsBooleanQuery() throws Exception {\n assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n- Query query = queryStringQuery(\"test\").field(STRING_FIELD_NAME).field(STRING_FIELD_NAME_2).useDisMax(false).toQuery(createShardContext());\n+ Query query = 
queryStringQuery(\"test\").field(STRING_FIELD_NAME)\n+ .field(STRING_FIELD_NAME_2)\n+ .useDisMax(false)\n+ .toQuery(createShardContext());\n assertThat(query, instanceOf(BooleanQuery.class));\n BooleanQuery bQuery = (BooleanQuery) query;\n assertThat(bQuery.clauses().size(), equalTo(2));\n- assertThat(assertBooleanSubQuery(query, TermQuery.class, 0).getTerm(), equalTo(new Term(STRING_FIELD_NAME, \"test\")));\n- assertThat(assertBooleanSubQuery(query, TermQuery.class, 1).getTerm(), equalTo(new Term(STRING_FIELD_NAME_2, \"test\")));\n+ assertThat(assertBooleanSubQuery(query, TermQuery.class, 0).getTerm(),\n+ equalTo(new Term(STRING_FIELD_NAME, \"test\")));\n+ assertThat(assertBooleanSubQuery(query, TermQuery.class, 1).getTerm(),\n+ equalTo(new Term(STRING_FIELD_NAME_2, \"test\")));\n }\n \n public void testToQueryMultipleFieldsDisMaxQuery() throws Exception {\n assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n- Query query = queryStringQuery(\"test\").field(STRING_FIELD_NAME).field(STRING_FIELD_NAME_2).useDisMax(true).toQuery(createShardContext());\n+ Query query = queryStringQuery(\"test\").field(STRING_FIELD_NAME).field(STRING_FIELD_NAME_2)\n+ .useDisMax(true)\n+ .toQuery(createShardContext());\n assertThat(query, instanceOf(DisjunctionMaxQuery.class));\n DisjunctionMaxQuery disMaxQuery = (DisjunctionMaxQuery) query;\n List<Query> disjuncts = disMaxQuery.getDisjuncts();\n@@ -260,23 +277,59 @@ public void testToQueryFieldsWildcard() throws Exception {\n assertThat(query, instanceOf(BooleanQuery.class));\n BooleanQuery bQuery = (BooleanQuery) query;\n assertThat(bQuery.clauses().size(), equalTo(2));\n- assertThat(assertBooleanSubQuery(query, TermQuery.class, 0).getTerm(), equalTo(new Term(STRING_FIELD_NAME, \"test\")));\n- assertThat(assertBooleanSubQuery(query, TermQuery.class, 1).getTerm(), equalTo(new Term(STRING_FIELD_NAME_2, \"test\")));\n+ assertThat(assertBooleanSubQuery(query, TermQuery.class, 0).getTerm(),\n+ equalTo(new Term(STRING_FIELD_NAME, \"test\")));\n+ assertThat(assertBooleanSubQuery(query, TermQuery.class, 1).getTerm(),\n+ equalTo(new Term(STRING_FIELD_NAME_2, \"test\")));\n }\n \n public void testToQueryDisMaxQuery() throws Exception {\n assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n- Query query = queryStringQuery(\"test\").field(STRING_FIELD_NAME, 2.2f).field(STRING_FIELD_NAME_2).useDisMax(true).toQuery(createShardContext());\n+ Query query = queryStringQuery(\"test\").field(STRING_FIELD_NAME, 2.2f)\n+ .field(STRING_FIELD_NAME_2)\n+ .useDisMax(true)\n+ .toQuery(createShardContext());\n assertThat(query, instanceOf(DisjunctionMaxQuery.class));\n DisjunctionMaxQuery disMaxQuery = (DisjunctionMaxQuery) query;\n List<Query> disjuncts = disMaxQuery.getDisjuncts();\n assertTermOrBoostQuery(disjuncts.get(0), STRING_FIELD_NAME, \"test\", 2.2f);\n assertTermOrBoostQuery(disjuncts.get(1), STRING_FIELD_NAME_2, \"test\", 1.0f);\n }\n \n+ public void testToQueryPrefixQuery() throws Exception {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ for (Operator op : Operator.values()) {\n+ Query query = queryStringQuery(\"foo-bar-foobar*\")\n+ .defaultField(STRING_FIELD_NAME)\n+ .analyzeWildcard(true)\n+ .analyzer(\"standard\")\n+ .defaultOperator(op)\n+ .toQuery(createShardContext());\n+ assertThat(query, instanceOf(BooleanQuery.class));\n+ BooleanQuery bq = (BooleanQuery) query;\n+ assertThat(bq.clauses().size(), equalTo(3));\n+ 
String[] expectedTerms = new String[]{\"foo\", \"bar\", \"foobar\"};\n+ for (int i = 0; i < bq.clauses().size(); i++) {\n+ BooleanClause clause = bq.clauses().get(i);\n+ if (i != bq.clauses().size() - 1) {\n+ assertTermQuery(clause.getQuery(), STRING_FIELD_NAME, expectedTerms[i]);\n+ } else {\n+ assertPrefixQuery(clause.getQuery(), STRING_FIELD_NAME, expectedTerms[i]);\n+ }\n+ if (op == Operator.AND) {\n+ assertThat(clause.getOccur(), equalTo(BooleanClause.Occur.MUST));\n+ } else {\n+ assertThat(clause.getOccur(), equalTo(BooleanClause.Occur.SHOULD));\n+ }\n+ }\n+ }\n+ }\n+\n public void testToQueryRegExpQuery() throws Exception {\n assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n- Query query = queryStringQuery(\"/foo*bar/\").defaultField(STRING_FIELD_NAME).maxDeterminizedStates(5000).toQuery(createShardContext());\n+ Query query = queryStringQuery(\"/foo*bar/\").defaultField(STRING_FIELD_NAME)\n+ .maxDeterminizedStates(5000)\n+ .toQuery(createShardContext());\n assertThat(query, instanceOf(RegexpQuery.class));\n RegexpQuery regexpQuery = (RegexpQuery) query;\n assertTrue(regexpQuery.toString().contains(\"/foo*bar/\"));\n@@ -344,7 +397,8 @@ public void testToQueryBooleanQueryMultipleBoosts() throws Exception {\n \n float mainBoost = 2.0f / randomIntBetween(3, 20);\n boosts[boosts.length - 1] = mainBoost;\n- QueryStringQueryBuilder queryStringQueryBuilder = new QueryStringQueryBuilder(queryString).field(STRING_FIELD_NAME)\n+ QueryStringQueryBuilder queryStringQueryBuilder =\n+ new QueryStringQueryBuilder(queryString).field(STRING_FIELD_NAME)\n .minimumShouldMatch(\"2\").boost(mainBoost);\n Query query = queryStringQueryBuilder.toQuery(createShardContext());\n \n@@ -359,14 +413,17 @@ public void testToQueryBooleanQueryMultipleBoosts() throws Exception {\n BooleanQuery booleanQuery = (BooleanQuery) query;\n assertThat(booleanQuery.getMinimumNumberShouldMatch(), equalTo(2));\n assertThat(booleanQuery.clauses().get(0).getOccur(), equalTo(BooleanClause.Occur.SHOULD));\n- assertThat(booleanQuery.clauses().get(0).getQuery(), equalTo(new TermQuery(new Term(STRING_FIELD_NAME, \"foo\"))));\n+ assertThat(booleanQuery.clauses().get(0).getQuery(),\n+ equalTo(new TermQuery(new Term(STRING_FIELD_NAME, \"foo\"))));\n assertThat(booleanQuery.clauses().get(1).getOccur(), equalTo(BooleanClause.Occur.SHOULD));\n- assertThat(booleanQuery.clauses().get(1).getQuery(), equalTo(new TermQuery(new Term(STRING_FIELD_NAME, \"bar\"))));\n+ assertThat(booleanQuery.clauses().get(1).getQuery(),\n+ equalTo(new TermQuery(new Term(STRING_FIELD_NAME, \"bar\"))));\n }\n \n public void testToQueryPhraseQueryBoostAndSlop() throws IOException {\n assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n- QueryStringQueryBuilder queryStringQueryBuilder = new QueryStringQueryBuilder(\"\\\"test phrase\\\"~2\").field(STRING_FIELD_NAME, 5f);\n+ QueryStringQueryBuilder queryStringQueryBuilder =\n+ new QueryStringQueryBuilder(\"\\\"test phrase\\\"~2\").field(STRING_FIELD_NAME, 5f);\n Query query = queryStringQueryBuilder.toQuery(createShardContext());\n assertThat(query, instanceOf(DisjunctionMaxQuery.class));\n DisjunctionMaxQuery disjunctionMaxQuery = (DisjunctionMaxQuery) query;", "filename": "core/src/test/java/org/elasticsearch/index/query/QueryStringQueryBuilderTests.java", "status": "modified" } ] }
{ "body": "Run the following sense script against a cluster running on master:\n\n```\nPUT test/doc/1\n{\n \"i\": 1\n}\n\nPUT test/doc/2\n{\n \"i\": 2\n}\n\nPUT test/doc/3\n{\n \"i\": 4\n}\n\nPUT test/doc/4\n{\n \"i\": 4\n}\n\nPUT test/doc/5\n{\n \"i\": 5\n}\n\nPUT test/doc/6\n{\n \"i\": 6\n}\n\nGET test/_search\n{\n \"size\": 0,\n \"aggs\": {\n \"histo\": {\n \"histogram\": {\n \"field\": \"i\",\n \"interval\": 1\n },\n \"aggs\": {\n \"moving_avg\": {\n \"moving_avg\": {\n \"buckets_path\": \"_count\"\n }\n }\n }\n },\n \"ext_stats_1\": {\n \"extended_stats_bucket\": {\n \"buckets_path\": \"histo>moving_avg\",\n \"sigma\": 1.0\n }\n },\n \"ext_stats_2\": {\n \"extended_stats_bucket\": {\n \"buckets_path\": \"histo>moving_avg\",\n \"sigma\": 2.0\n }\n },\n \"ext_stats_3\": {\n \"extended_stats_bucket\": {\n \"buckets_path\": \"histo>moving_avg\",\n \"sigma\": 3.0\n }\n }\n }\n}\n```\n\nNote that there is no document with `\"i\": 3` and 2 documents with `\"i\": 4`. The request asks for 3 `extended_stats_bucket` aggregations with sigma values of 1, 2 and 3. The response you get for the `extended_stats_bucket` aggregations is:\n\n```\n \"ext_stats_1\": {\n \"count\": 4,\n \"min\": 0.6666666666666666,\n \"max\": 1,\n \"avg\": 0.9166666666666666,\n \"sum\": 3.6666666666666665,\n \"sum_of_squares\": 4.444444444444445,\n \"variance\": 0.2708333333333335,\n \"std_deviation\": 0.5204164998665334,\n \"std_deviation_bounds\": {\n \"upper\": 0.9166666666666666,\n \"lower\": 0.9166666666666666\n }\n },\n \"ext_stats_2\": {\n \"count\": 4,\n \"min\": 0.6666666666666666,\n \"max\": 1,\n \"avg\": 0.9166666666666666,\n \"sum\": 3.6666666666666665,\n \"sum_of_squares\": 4.444444444444445,\n \"variance\": 0.2708333333333335,\n \"std_deviation\": 0.5204164998665334,\n \"std_deviation_bounds\": {\n \"upper\": 0.9166666666666666,\n \"lower\": 0.9166666666666666\n }\n },\n \"ext_stats_3\": {\n \"count\": 4,\n \"min\": 0.6666666666666666,\n \"max\": 1,\n \"avg\": 0.9166666666666666,\n \"sum\": 3.6666666666666665,\n \"sum_of_squares\": 4.444444444444445,\n \"variance\": 0.2708333333333335,\n \"std_deviation\": 0.5204164998665334,\n \"std_deviation_bounds\": {\n \"upper\": 0.9166666666666666,\n \"lower\": 0.9166666666666666\n }\n }\n```\n\nNote that `std_deviation_bounds.upper` and `std_deviation_bounds.lower` are the same for all three aggs and are all equal to the `avg`.\n", "comments": [], "number": 17701, "title": "Missing buckets produce incorrect results in extended_stats_bucket aggregation when using _count" }
{ "body": "Previously the sigma variable in the `extended_stats_bucket` pipeline aggregation was not being serialised in `ExtendedStatsBucketPipelineAggregator`. This PR fixes that.\n\nIt also corrects the initial value of sumOfSquares to be 0.\n\nCloses #17701\n", "number": 17703, "review_comments": [], "title": "Adds serialisation of sigma to extended_stats_bucket pipeline aggregation" }
{ "commits": [ { "message": "Aggregations: Adds serialisation of sigma to extended_stats_bucket pipeline aggregation\n\nPreviously the sigma variable in the `extended_stats_bucket` pipeline aggregation was not being serialised in `ExtendedStatsBucketPipelineAggregator`. This PR fixes that.\n\nIt also corrects the initial value of sumOfSquares to be 0.\n\nCloses #17701" } ], "files": [ { "diff": "@@ -111,15 +111,23 @@ protected void preCollection() {\n protected abstract void collectBucketValue(String bucketKey, Double bucketValue);\n \n @Override\n- public void doReadFrom(StreamInput in) throws IOException {\n+ public final void doReadFrom(StreamInput in) throws IOException {\n format = in.readValueFormat();\n gapPolicy = GapPolicy.readFrom(in);\n+ innerReadFrom(in);\n+ }\n+\n+ protected void innerReadFrom(StreamInput in) throws IOException {\n }\n \n @Override\n- public void doWriteTo(StreamOutput out) throws IOException {\n+ public final void doWriteTo(StreamOutput out) throws IOException {\n out.writeValueFormat(format);\n gapPolicy.writeTo(out);\n+ innerWriteTo(out);\n+ }\n+\n+ protected void innerWriteTo(StreamOutput out) throws IOException {\n }\n \n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsPipelineAggregator.java", "status": "modified" }, { "diff": "@@ -107,14 +107,12 @@ protected InternalAggregation buildAggregation(List<PipelineAggregator> pipeline\n }\n \n @Override\n- public void doReadFrom(StreamInput in) throws IOException {\n- super.doReadFrom(in);\n+ public void innerReadFrom(StreamInput in) throws IOException {\n percents = in.readDoubleArray();\n }\n \n @Override\n- public void doWriteTo(StreamOutput out) throws IOException {\n- super.doWriteTo(out);\n+ public void innerWriteTo(StreamOutput out) throws IOException {\n out.writeDoubleArray(percents);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/PercentilesBucketPipelineAggregator.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.extended;\n \n import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.InternalAggregation.Type;\n@@ -78,7 +79,7 @@ protected void preCollection() {\n count = 0;\n min = Double.POSITIVE_INFINITY;\n max = Double.NEGATIVE_INFINITY;\n- sumOfSqrs = 1;\n+ sumOfSqrs = 0;\n }\n \n @Override\n@@ -95,4 +96,13 @@ protected InternalAggregation buildAggregation(List<PipelineAggregator> pipeline\n return new InternalExtendedStatsBucket(name(), count, sum, min, max, sumOfSqrs, sigma, format, pipelineAggregators, metadata);\n }\n \n+ @Override\n+ protected void innerReadFrom(StreamInput in) throws IOException {\n+ sigma = in.readDouble();\n+ }\n+\n+ @Override\n+ protected void innerWriteTo(StreamOutput out) throws IOException {\n+ out.writeDouble(sigma);\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/extended/ExtendedStatsBucketPipelineAggregator.java", "status": "modified" }, { "diff": "@@ -29,6 +29,7 @@\n import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n import org.elasticsearch.search.aggregations.bucket.terms.Terms.Order;\n import 
org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude;\n+import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats.Bounds;\n import org.elasticsearch.search.aggregations.metrics.sum.Sum;\n import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.extended.ExtendedStatsBucket;\n@@ -64,7 +65,7 @@ public class ExtendedStatsBucketIT extends ESIntegTestCase {\n public void setupSuiteScopeCluster() throws Exception {\n assertAcked(client().admin().indices().prepareCreate(\"idx\")\n .addMapping(\"type\", \"tag\", \"type=keyword\").get());\n- createIndex(\"idx_unmapped\");\n+ createIndex(\"idx_unmapped\", \"idx_gappy\");\n \n numDocs = randomIntBetween(6, 20);\n interval = randomIntBetween(2, 5);\n@@ -86,6 +87,13 @@ public void setupSuiteScopeCluster() throws Exception {\n valueCounts[bucket]++;\n }\n \n+ for (int i = 0; i < 6; i++) {\n+ // creates 6 documents where the value of the field is 0, 1, 2, 3,\n+ // 3, 5\n+ builders.add(client().prepareIndex(\"idx_gappy\", \"type\", \"\" + i).setSource(\n+ jsonBuilder().startObject().field(SINGLE_VALUED_FIELD_NAME, i == 4 ? 3 : i).endObject()));\n+ }\n+\n assertAcked(prepareCreate(\"empty_bucket_idx\").addMapping(\"type\", SINGLE_VALUED_FIELD_NAME, \"type=integer\"));\n for (int i = 0; i < 2; i++) {\n builders.add(client().prepareIndex(\"empty_bucket_idx\", \"type\", \"\" + i).setSource(\n@@ -95,6 +103,57 @@ public void setupSuiteScopeCluster() throws Exception {\n ensureSearchable();\n }\n \n+ /**\n+ * Test for https://github.com/elastic/elasticsearch/issues/17701\n+ */\n+ public void testGappyIndexWithSigma() {\n+ double sigma = randomDoubleBetween(1.0, 6.0, true);\n+ SearchResponse response = client().prepareSearch(\"idx_gappy\")\n+ .addAggregation(histogram(\"histo\").field(SINGLE_VALUED_FIELD_NAME).interval(1L))\n+ .addAggregation(extendedStatsBucket(\"extended_stats_bucket\", \"histo>_count\").sigma(sigma)).execute().actionGet();\n+ assertSearchResponse(response);\n+ Histogram histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo, notNullValue());\n+ assertThat(histo.getName(), equalTo(\"histo\"));\n+ List<? 
extends Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(6));\n+\n+ for (int i = 0; i < 6; ++i) {\n+ long expectedDocCount;\n+ if (i == 3) {\n+ expectedDocCount = 2;\n+ } else if (i == 4) {\n+ expectedDocCount = 0;\n+ } else {\n+ expectedDocCount = 1;\n+ }\n+ Histogram.Bucket bucket = buckets.get(i);\n+ assertThat(\"i: \" + i, bucket, notNullValue());\n+ assertThat(\"i: \" + i, ((Number) bucket.getKey()).longValue(), equalTo((long) i));\n+ assertThat(\"i: \" + i, bucket.getDocCount(), equalTo(expectedDocCount));\n+ }\n+\n+ ExtendedStatsBucket extendedStatsBucketValue = response.getAggregations().get(\"extended_stats_bucket\");\n+ long count = 6L;\n+ double sum = 1.0 + 1.0 + 1.0 + 2.0 + 0.0 + 1.0;\n+ double sumOfSqrs = 1.0 + 1.0 + 1.0 + 4.0 + 0.0 + 1.0;\n+ double avg = sum / count;\n+ double var = (sumOfSqrs - ((sum * sum) / count)) / count;\n+ double stdDev = Math.sqrt(var);\n+ assertThat(extendedStatsBucketValue, notNullValue());\n+ assertThat(extendedStatsBucketValue.getName(), equalTo(\"extended_stats_bucket\"));\n+ assertThat(extendedStatsBucketValue.getMin(), equalTo(0.0));\n+ assertThat(extendedStatsBucketValue.getMax(), equalTo(2.0));\n+ assertThat(extendedStatsBucketValue.getCount(), equalTo(count));\n+ assertThat(extendedStatsBucketValue.getSum(), equalTo(sum));\n+ assertThat(extendedStatsBucketValue.getAvg(), equalTo(avg));\n+ assertThat(extendedStatsBucketValue.getSumOfSquares(), equalTo(sumOfSqrs));\n+ assertThat(extendedStatsBucketValue.getVariance(), equalTo(var));\n+ assertThat(extendedStatsBucketValue.getStdDeviation(), equalTo(stdDev));\n+ assertThat(extendedStatsBucketValue.getStdDeviationBound(Bounds.LOWER), equalTo(avg - (sigma * stdDev)));\n+ assertThat(extendedStatsBucketValue.getStdDeviationBound(Bounds.UPPER), equalTo(avg + (sigma * stdDev)));\n+ }\n+\n public void testDocCountTopLevel() throws Exception {\n SearchResponse response = client().prepareSearch(\"idx\")\n .addAggregation(histogram(\"histo\").field(SINGLE_VALUED_FIELD_NAME).interval(interval)\n@@ -113,7 +172,7 @@ public void testDocCountTopLevel() throws Exception {\n int count = 0;\n double min = Double.POSITIVE_INFINITY;\n double max = Double.NEGATIVE_INFINITY;\n- double sumOfSquares = 1;\n+ double sumOfSquares = 0;\n for (int i = 0; i < numValueBuckets; ++i) {\n Histogram.Bucket bucket = buckets.get(i);\n assertThat(bucket, notNullValue());\n@@ -170,7 +229,7 @@ public void testDocCountAsSubAgg() throws Exception {\n int count = 0;\n double min = Double.POSITIVE_INFINITY;\n double max = Double.NEGATIVE_INFINITY;\n- double sumOfSquares = 1;\n+ double sumOfSquares = 0;\n for (int j = 0; j < numValueBuckets; ++j) {\n Histogram.Bucket bucket = buckets.get(j);\n assertThat(bucket, notNullValue());\n@@ -211,7 +270,7 @@ public void testMetricTopLevel() throws Exception {\n int count = 0;\n double min = Double.POSITIVE_INFINITY;\n double max = Double.NEGATIVE_INFINITY;\n- double sumOfSquares = 1;\n+ double sumOfSquares = 0;\n for (int i = 0; i < interval; ++i) {\n Terms.Bucket bucket = buckets.get(i);\n assertThat(bucket, notNullValue());\n@@ -271,7 +330,7 @@ public void testMetricAsSubAgg() throws Exception {\n int count = 0;\n double min = Double.POSITIVE_INFINITY;\n double max = Double.NEGATIVE_INFINITY;\n- double sumOfSquares = 1;\n+ double sumOfSquares = 0;\n for (int j = 0; j < numValueBuckets; ++j) {\n Histogram.Bucket bucket = buckets.get(j);\n assertThat(bucket, notNullValue());\n@@ -334,7 +393,7 @@ public void testMetricAsSubAggWithInsertZeros() throws Exception {\n int 
count = 0;\n double min = Double.POSITIVE_INFINITY;\n double max = Double.NEGATIVE_INFINITY;\n- double sumOfSquares = 1;\n+ double sumOfSquares = 0;\n for (int j = 0; j < numValueBuckets; ++j) {\n Histogram.Bucket bucket = buckets.get(j);\n assertThat(bucket, notNullValue());\n@@ -436,7 +495,7 @@ public void testNested() throws Exception {\n int aggTermsCount = 0;\n double min = Double.POSITIVE_INFINITY;\n double max = Double.NEGATIVE_INFINITY;\n- double sumOfSquares = 1;\n+ double sumOfSquares = 0;\n for (int i = 0; i < interval; ++i) {\n Terms.Bucket termsBucket = termsBuckets.get(i);\n assertThat(termsBucket, notNullValue());", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/pipeline/ExtendedStatsBucketIT.java", "status": "modified" } ] }
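For reference, the arithmetic asserted by `testGappyIndexWithSigma` above can be reproduced in a few lines of plain Java. The sketch below is a standalone illustration, not Elasticsearch code; the bucket doc counts mirror the `idx_gappy` fixture, and `sigma` stands in for the user-supplied multiplier that the aggregator now serializes. It also shows why `sumOfSqrs` must start at 0: a non-zero seed inflates the variance and widens both std deviation bounds.

```java
// Standalone sketch (not Elasticsearch code) of the extended_stats_bucket arithmetic
// asserted by testGappyIndexWithSigma. Doc counts per histogram bucket come from the
// idx_gappy fixture (field values 0, 1, 2, 3, 3, 5 with interval 1).
public class ExtendedStatsBucketSketch {
    public static void main(String[] args) {
        long[] bucketDocCounts = {1, 1, 1, 2, 0, 1};
        double sigma = 2.0; // user-supplied multiplier, now serialized by the aggregator

        long count = 0;
        double sum = 0;
        double sumOfSquares = 0; // must start at 0, seeding it with 1 skews the variance
        double min = Double.POSITIVE_INFINITY;
        double max = Double.NEGATIVE_INFINITY;
        for (long v : bucketDocCounts) {
            count++;
            sum += v;
            sumOfSquares += (double) v * v;
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        double avg = sum / count;
        double variance = (sumOfSquares - (sum * sum) / count) / count;
        double stdDev = Math.sqrt(variance);
        System.out.printf("avg=%s stdDev=%s lower=%s upper=%s%n",
                avg, stdDev, avg - sigma * stdDev, avg + sigma * stdDev);
    }
}
```

Running it prints avg=1.0 and stdDev of roughly 0.577, matching the values the test computes from sum, sumOfSquares and count.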
{ "body": "When a shard can no longer remain on a node (because disk is full or some exclude filters are set in place), it is moved to a different node. Currently, rebalancing constraints are not taken into consideration when this move takes place. An example for this is #14057: `cluster.routing.allocation.cluster_concurrent_rebalance` is set to 1 but 80 shards are moved off the node in one go.\n\nThis PR checks rebalancing constraints when shards are moved from a node they can no longer remain on. The constraints that are affected by this are the following:\n- cluster.routing.allocation.cluster_concurrent_rebalance\n- cluster.routing.allocation.allow_rebalance\n- cluster.routing.rebalance.enable\n- index.routing.rebalance.enable\n- rebalance_only_when_active\n\nCloses #14057\n", "comments": [ { "body": "this looks awesome! LGTM\n", "created_at": "2015-10-23T09:11:59Z" } ], "number": 14259, "title": "Check rebalancing constraints when shards are moved from a node they can no longer remain on" }
{ "body": "#14259 added a check to honor rebalancing policies (i.e., rebalance only on green state) when moving shards due to changes in allocation filtering rules. The rebalancing policy is there to make sure that we don't try to even out the number of shards per node when we are still missing shards. However, it should not interfere with explicit user commands (allocation filtering) or things like the disk threshold wanting to move shards because of a node hitting the high water mark.\n#14259 was done to address #14057 where people reported that using allocation filtering caused many shards to be moved at once. This is however a non-issue - with 1.7 (where the issue was reported) and 2.x, we protect recovery source nodes by limiting the number of concurrent data streams they can open (i.e., we can have many recoveries, but they will be throttled). In 5.0 we came up with a simpler and more understandable approach where we have a hard limit on the number of outgoing recoveries per node (on top of the incoming recoveries we already had).\n", "number": 17698, "review_comments": [ { "body": "can we add a comment here why we don't check \"canRebalance\"? (maybe link to this PR?)\n", "created_at": "2016-04-13T08:35:23Z" }, { "body": "sure.\n", "created_at": "2016-04-13T18:44:14Z" } ], "title": "Rebalancing policy shouldn't prevent hard allocation decisions" }
{ "commits": [ { "message": "Rebalancing policy shouldn't prevent hard allocation decisions\n\n#14259 added a check to honor rebalancing policies (i.e., rebalance only on green state) when moving shards due to changes in allocation filtering rules. The rebalancing policy is there to make sure that we don't try to even out the number of shards per node when we are still missing shards. However, it should not interfere with explicit user commands (allocation filtering) or things like the disk threshold wanting to move shards because of a node hitting the high water mark.\n\n#14259 was done to address #14057 where people reported that using allocation filtering caused many shards to be moved at once. This is however a none issue - with 1.7 (where the issue was reported) and 2.x, we protect recovery source nodes by limitting the number of concurrent data streams they can open (i.e., we can have many recoveries, but they will be throttled). In 5.0 we came up with a simpler and more understandable approach where we have a hard limit on the number of outgoing recoveries per node (on top of the incoming recoveries we already had)." }, { "message": "add a comment" } ], "files": [ { "diff": "@@ -556,9 +556,9 @@ private boolean moveShard(NodeSorter sorter, ShardRouting shardRouting, ModelNod\n for (ModelNode currentNode : sorter.modelNodes) {\n if (currentNode != sourceNode) {\n RoutingNode target = currentNode.getRoutingNode();\n+ // don't use canRebalance as we want hard filtering rules to apply. See #17698\n Decision allocationDecision = allocation.deciders().canAllocate(shardRouting, target, allocation);\n- Decision rebalanceDecision = allocation.deciders().canRebalance(shardRouting, allocation);\n- if (allocationDecision.type() == Type.YES && rebalanceDecision.type() == Type.YES) { // TODO maybe we can respect throttling here too?\n+ if (allocationDecision.type() == Type.YES) { // TODO maybe we can respect throttling here too?\n sourceNode.removeShard(shardRouting);\n ShardRouting targetRelocatingShard = routingNodes.relocate(shardRouting, target.nodeId(), allocation.clusterInfo().getShardSize(shardRouting, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE));\n currentNode.addShard(targetRelocatingShard);", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java", "status": "modified" }, { "diff": "@@ -62,12 +62,12 @@ public class ThrottlingAllocationDecider extends AllocationDecider {\n Property.Dynamic, Property.NodeScope);\n public static final Setting<Integer> CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_INCOMING_RECOVERIES_SETTING =\n new Setting<>(\"cluster.routing.allocation.node_concurrent_incoming_recoveries\",\n- (s) -> CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES_SETTING.getRaw(s),\n+ CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES_SETTING::getRaw,\n (s) -> Setting.parseInt(s, 0, \"cluster.routing.allocation.node_concurrent_incoming_recoveries\"),\n Property.Dynamic, Property.NodeScope);\n public static final Setting<Integer> CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_OUTGOING_RECOVERIES_SETTING =\n new Setting<>(\"cluster.routing.allocation.node_concurrent_outgoing_recoveries\",\n- (s) -> CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES_SETTING.getRaw(s),\n+ CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES_SETTING::getRaw,\n (s) -> Setting.parseInt(s, 0, \"cluster.routing.allocation.node_concurrent_outgoing_recoveries\"),\n Property.Dynamic, Property.NodeScope);\n ", "filename": 
"core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ThrottlingAllocationDecider.java", "status": "modified" }, { "diff": "@@ -165,7 +165,7 @@ public void testIndexFilters() {\n }\n }\n \n- public void testRebalanceAfterShardsCannotRemainOnNode() {\n+ public void testConcurrentRecoveriesAfterShardsCannotRemainOnNode() {\n AllocationService strategy = createAllocationService(Settings.builder().build());\n \n logger.info(\"Building initial routing table\");\n@@ -199,14 +199,14 @@ public void testRebalanceAfterShardsCannotRemainOnNode() {\n \n logger.info(\"--> disable allocation for node1 and reroute\");\n strategy = createAllocationService(Settings.builder()\n- .put(\"cluster.routing.allocation.cluster_concurrent_rebalance\", \"1\")\n+ .put(\"cluster.routing.allocation.node_concurrent_recoveries\", \"1\")\n .put(\"cluster.routing.allocation.exclude.tag1\", \"value1\")\n .build());\n \n logger.info(\"--> move shards from node1 to node2\");\n routingTable = strategy.reroute(clusterState, \"reroute\").routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n- logger.info(\"--> check that concurrent rebalance only allows 1 shard to move\");\n+ logger.info(\"--> check that concurrent recoveries only allows 1 shard to move\");\n assertThat(clusterState.getRoutingNodes().node(node1.getId()).numberOfShardsWithState(STARTED), equalTo(1));\n assertThat(clusterState.getRoutingNodes().node(node2.getId()).numberOfShardsWithState(INITIALIZING), equalTo(1));\n assertThat(clusterState.getRoutingNodes().node(node2.getId()).numberOfShardsWithState(STARTED), equalTo(2));", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/FilterRoutingTests.java", "status": "modified" } ] }
{ "body": "The setting cluster.routing.allocation.cluster_concurrent_rebalance appears to be ignored when moving shard off a node that has been excluded from allocation with the setting cluster.routing.allocation.exclude._ip .\n\nES Version: 1.7.2\n\nRepeatedly experienced with the following steps:\nset cluster.routing.allocation.cluster_concurrent_rebalance to 1\nexclude node from allocation with cluster.routing.allocation.exclude._ip\n\nThis results in 80 shards rebalancing at a time until all shards are removed from the excluded node.\n", "comments": [ { "body": "Investigation required\n", "created_at": "2015-10-16T09:19:20Z" }, { "body": "I can confirm the issue. The root cause is that rebalancing constraints are not taken into consideration when shards that can no longer remain on a node need to be moved. AFAICS, it affects the following options:\n- cluster.routing.allocation.cluster_concurrent_rebalance\n- cluster.routing.allocation.allow_rebalance\n- cluster.routing.rebalance.enable\n- index.routing.rebalance.enable\n- rebalance_only_when_active\n", "created_at": "2015-10-22T16:05:01Z" } ], "number": 14057, "title": "Cluster concurrent rebalance ignored on node allocation exclusion" }
{ "body": "#14259 added a check to honor rebalancing policies (i.e., rebalance only on green state) when moving shards due to changes in allocation filtering rules. The rebalancing policy is there to make sure that we don't try to even out the number of shards per node when we are still missing shards. However, it should not interfere with explicit user commands (allocation filtering) or things like the disk threshold wanting to move shards because of a node hitting the high water mark.\n#14259 was done to address #14057 where people reported that using allocation filtering caused many shards to be moved at once. This is however a non-issue - with 1.7 (where the issue was reported) and 2.x, we protect recovery source nodes by limiting the number of concurrent data streams they can open (i.e., we can have many recoveries, but they will be throttled). In 5.0 we came up with a simpler and more understandable approach where we have a hard limit on the number of outgoing recoveries per node (on top of the incoming recoveries we already had).\n", "number": 17698, "review_comments": [ { "body": "can we add a comment here why we don't check \"canRebalance\"? (maybe link to this PR?)\n", "created_at": "2016-04-13T08:35:23Z" }, { "body": "sure.\n", "created_at": "2016-04-13T18:44:14Z" } ], "title": "Rebalancing policy shouldn't prevent hard allocation decisions" }
{ "commits": [ { "message": "Rebalancing policy shouldn't prevent hard allocation decisions\n\n#14259 added a check to honor rebalancing policies (i.e., rebalance only on green state) when moving shards due to changes in allocation filtering rules. The rebalancing policy is there to make sure that we don't try to even out the number of shards per node when we are still missing shards. However, it should not interfere with explicit user commands (allocation filtering) or things like the disk threshold wanting to move shards because of a node hitting the high water mark.\n\n#14259 was done to address #14057 where people reported that using allocation filtering caused many shards to be moved at once. This is however a none issue - with 1.7 (where the issue was reported) and 2.x, we protect recovery source nodes by limitting the number of concurrent data streams they can open (i.e., we can have many recoveries, but they will be throttled). In 5.0 we came up with a simpler and more understandable approach where we have a hard limit on the number of outgoing recoveries per node (on top of the incoming recoveries we already had)." }, { "message": "add a comment" } ], "files": [ { "diff": "@@ -556,9 +556,9 @@ private boolean moveShard(NodeSorter sorter, ShardRouting shardRouting, ModelNod\n for (ModelNode currentNode : sorter.modelNodes) {\n if (currentNode != sourceNode) {\n RoutingNode target = currentNode.getRoutingNode();\n+ // don't use canRebalance as we want hard filtering rules to apply. See #17698\n Decision allocationDecision = allocation.deciders().canAllocate(shardRouting, target, allocation);\n- Decision rebalanceDecision = allocation.deciders().canRebalance(shardRouting, allocation);\n- if (allocationDecision.type() == Type.YES && rebalanceDecision.type() == Type.YES) { // TODO maybe we can respect throttling here too?\n+ if (allocationDecision.type() == Type.YES) { // TODO maybe we can respect throttling here too?\n sourceNode.removeShard(shardRouting);\n ShardRouting targetRelocatingShard = routingNodes.relocate(shardRouting, target.nodeId(), allocation.clusterInfo().getShardSize(shardRouting, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE));\n currentNode.addShard(targetRelocatingShard);", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java", "status": "modified" }, { "diff": "@@ -62,12 +62,12 @@ public class ThrottlingAllocationDecider extends AllocationDecider {\n Property.Dynamic, Property.NodeScope);\n public static final Setting<Integer> CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_INCOMING_RECOVERIES_SETTING =\n new Setting<>(\"cluster.routing.allocation.node_concurrent_incoming_recoveries\",\n- (s) -> CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES_SETTING.getRaw(s),\n+ CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES_SETTING::getRaw,\n (s) -> Setting.parseInt(s, 0, \"cluster.routing.allocation.node_concurrent_incoming_recoveries\"),\n Property.Dynamic, Property.NodeScope);\n public static final Setting<Integer> CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_OUTGOING_RECOVERIES_SETTING =\n new Setting<>(\"cluster.routing.allocation.node_concurrent_outgoing_recoveries\",\n- (s) -> CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES_SETTING.getRaw(s),\n+ CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES_SETTING::getRaw,\n (s) -> Setting.parseInt(s, 0, \"cluster.routing.allocation.node_concurrent_outgoing_recoveries\"),\n Property.Dynamic, Property.NodeScope);\n ", "filename": 
"core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ThrottlingAllocationDecider.java", "status": "modified" }, { "diff": "@@ -165,7 +165,7 @@ public void testIndexFilters() {\n }\n }\n \n- public void testRebalanceAfterShardsCannotRemainOnNode() {\n+ public void testConcurrentRecoveriesAfterShardsCannotRemainOnNode() {\n AllocationService strategy = createAllocationService(Settings.builder().build());\n \n logger.info(\"Building initial routing table\");\n@@ -199,14 +199,14 @@ public void testRebalanceAfterShardsCannotRemainOnNode() {\n \n logger.info(\"--> disable allocation for node1 and reroute\");\n strategy = createAllocationService(Settings.builder()\n- .put(\"cluster.routing.allocation.cluster_concurrent_rebalance\", \"1\")\n+ .put(\"cluster.routing.allocation.node_concurrent_recoveries\", \"1\")\n .put(\"cluster.routing.allocation.exclude.tag1\", \"value1\")\n .build());\n \n logger.info(\"--> move shards from node1 to node2\");\n routingTable = strategy.reroute(clusterState, \"reroute\").routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n- logger.info(\"--> check that concurrent rebalance only allows 1 shard to move\");\n+ logger.info(\"--> check that concurrent recoveries only allows 1 shard to move\");\n assertThat(clusterState.getRoutingNodes().node(node1.getId()).numberOfShardsWithState(STARTED), equalTo(1));\n assertThat(clusterState.getRoutingNodes().node(node2.getId()).numberOfShardsWithState(INITIALIZING), equalTo(1));\n assertThat(clusterState.getRoutingNodes().node(node2.getId()).numberOfShardsWithState(STARTED), equalTo(2));", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/FilterRoutingTests.java", "status": "modified" } ] }
{ "body": "Although `elasticsearch-plugin` claims to support verbose mode, adding `-v` doesn't seem to do anything. In particular, it no longer prints out the URL of the plugin it is about to install, which is needed for offline installation.\n", "comments": [ { "body": "This isn't a bug with `-v`, it is simply something that is no longer printed. Here's a PR to add it: #17662\n", "created_at": "2016-04-11T20:35:05Z" } ], "number": 17529, "title": "Verbose mode no longer works in elasticsearch-plugin" }
{ "body": "closes #17529\n", "number": 17662, "review_comments": [], "title": "Cli: Add verbose output with zip url when installing plugin" }
{ "commits": [ { "message": "Plugin cli: Add verbose output with zip url when installing plugin\n\ncloses #17529" } ], "files": [ { "diff": "@@ -197,7 +197,7 @@ private Path download(Terminal terminal, String pluginId, Path tmpDir) throws Ex\n version);\n }\n terminal.println(\"-> Downloading \" + pluginId + \" from elastic\");\n- return downloadZipAndChecksum(url, tmpDir);\n+ return downloadZipAndChecksum(terminal, url, tmpDir);\n }\n \n // now try as maven coordinates, a valid URL would only have a colon and slash\n@@ -206,16 +206,17 @@ private Path download(Terminal terminal, String pluginId, Path tmpDir) throws Ex\n String mavenUrl = String.format(Locale.ROOT, \"https://repo1.maven.org/maven2/%1$s/%2$s/%3$s/%2$s-%3$s.zip\",\n coordinates[0].replace(\".\", \"/\") /* groupId */, coordinates[1] /* artifactId */, coordinates[2] /* version */);\n terminal.println(\"-> Downloading \" + pluginId + \" from maven central\");\n- return downloadZipAndChecksum(mavenUrl, tmpDir);\n+ return downloadZipAndChecksum(terminal, mavenUrl, tmpDir);\n }\n \n // fall back to plain old URL\n terminal.println(\"-> Downloading \" + URLDecoder.decode(pluginId, \"UTF-8\"));\n- return downloadZip(pluginId, tmpDir);\n+ return downloadZip(terminal, pluginId, tmpDir);\n }\n \n /** Downloads a zip from the url, into a temp file under the given temp dir. */\n- private Path downloadZip(String urlString, Path tmpDir) throws IOException {\n+ private Path downloadZip(Terminal terminal, String urlString, Path tmpDir) throws IOException {\n+ terminal.println(VERBOSE, \"Retrieving zip from \" + urlString);\n URL url = new URL(urlString);\n Path zip = Files.createTempFile(tmpDir, null, \".zip\");\n try (InputStream in = url.openStream()) {\n@@ -226,8 +227,8 @@ private Path downloadZip(String urlString, Path tmpDir) throws IOException {\n }\n \n /** Downloads a zip from the url, as well as a SHA1 checksum, and checks the checksum. */\n- private Path downloadZipAndChecksum(String urlString, Path tmpDir) throws Exception {\n- Path zip = downloadZip(urlString, tmpDir);\n+ private Path downloadZipAndChecksum(Terminal terminal, String urlString, Path tmpDir) throws Exception {\n+ Path zip = downloadZip(terminal, urlString, tmpDir);\n \n URL checksumUrl = new URL(urlString + \".sha1\");\n final String expectedChecksum;", "filename": "core/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java", "status": "modified" } ] }
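The core of the fix above is that the zip URL is printed through the terminal's VERBOSE channel, so it only appears when `-v` raises the verbosity. A simplified stand-in for that gating pattern (plain Java, not the real `org.elasticsearch.cli.Terminal`; the plugin name and URL are made-up examples):

```java
// Simplified stand-in for the verbosity gating used by the patch above; not the real
// org.elasticsearch.cli.Terminal. The plugin name and URL below are made up.
public class VerboseTerminalSketch {
    enum Verbosity { NORMAL, VERBOSE }

    private final Verbosity configured;

    VerboseTerminalSketch(Verbosity configured) {
        this.configured = configured;
    }

    void println(Verbosity level, String msg) {
        // a message is emitted only if its level does not exceed the configured verbosity
        if (level.ordinal() <= configured.ordinal()) {
            System.out.println(msg);
        }
    }

    public static void main(String[] args) {
        boolean verbose = args.length > 0 && "-v".equals(args[0]);
        VerboseTerminalSketch terminal =
                new VerboseTerminalSketch(verbose ? Verbosity.VERBOSE : Verbosity.NORMAL);
        String zipUrl = "https://repo1.maven.org/maven2/org/example/example-plugin/1.0/example-plugin-1.0.zip";
        terminal.println(Verbosity.NORMAL, "-> Downloading example-plugin from maven central");
        terminal.println(Verbosity.VERBOSE, "Retrieving zip from " + zipUrl);
    }
}
```

Run without arguments only the download notice is printed; run with `-v` the URL line appears as well, which is the behavior #17529 asked to have back.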
{ "body": "This issue is to document deletion problems with shadow replica indices that were found while working on #17265. A separate PR #17638 that improves the naming of methods in the `IndicesService` also contains tests or added assertions to existing tests that reveal the issues below and must be enabled as part of any PR that fixes the issues.\n\n**No. 1**\nThe index file deletion logic that is triggered in `IndicesService#deleteIndexStore(String reason, Index index, IndexSettings indexSettings)` checks before deleting files that the index is not a shadow replica or, if it is, that it has already been closed (so that no other nodes are holding resources to it). The problem is that this check is too strict: if a shadow replica index is deleted without having been closed first, the index folder itself is not deleted and remains on the file system (an empty folder). So one of the issues that needs fixing is to ensure index directories are deleted even on shadow replica index deletes. The following tests have commented out assertions to test this behavior once fixed:\n- `IndexWithShadowReplicaIT#testIndexWithShadowReplicasCleansUp` \n- `IndexWithShadowReplicaIT#testShadowReplicaNaturalRelocation` \n\nNote that shared shard data is cleaned up properly in a shadow replica index that is not closed, as the shard data is deleted by the `StoreCloseListener`. This is verified in the tests with the `assertPathHasBeenCleared` assert.\n\n**No. 2**\nThe issue with deleting a shadow replica index that was previously closed is that all of the index and shard data are potentially deleted simultaneously by each node that receives the delete operation and invokes `NodeEnvironment#deleteIndexDirectorySafe`. This can lead to race conditions where a node is trying to delete a file that was deleted by another node as both are walking the file system simultaneously (using Lucene's `IOUtils.rm`). This ends up logged as a warning in `IndicesService#deleteIndexStore(String reason, Index index, IndexSettings indexSettings)` and the deletion is put on the pending queue. \n", "comments": [ { "body": "> none of its index files are deleted\n\nTo be clear - shard level folders are deleted and the index metadata is deleted as well. The problem is that we leave an empty folder behind.\n\nRe No 2.:\n\nDo we have specific issue with master nodes or is it specifically about nodes that do not host a shard of the shard from the shadow index? (i.e., master nodes but also some data nodes)\n", "created_at": "2016-04-13T07:36:42Z" }, { "body": "> To be clear - shard level folders are deleted and the index metadata is deleted as well. The problem is that we leave an empty folder behind.\n\nYes, that is correct. I changed the description to be more clear on this.\n\n> Do we have specific issue with master nodes or is it specifically about nodes that do not host a shard of the shard from the shadow index? (i.e., master nodes but also some data nodes)\n\nI think our tests prove that this isn't a specific issue, but only a general one as it relates to index folders not getting cleaned up. Since we are dropping the `testDeletingIndexWithDedicatedMasterNodes`, I will remove this point from the description as well.\n\nI believe the fundamental problems with shadow replica deletion are 1. the leaving behind of these (empty) index folders and 2. 
when a shadow replica has been closed and then deleted, all nodes try to delete the same shared shard data folder, in which case the first node succeeds, but the remaining nodes try to delete already deleted files (with the potential for race conditions here if multiple nodes are deleting at the same time). This throws a warning in the logs and puts the delete in the pending queue. \n\nI will update the description of this issue accordingly.\n", "created_at": "2016-04-13T16:07:05Z" }, { "body": "@abeyad was there any discussion on this? if not we should pick it and make a plan.\n", "created_at": "2016-05-12T10:03:09Z" }, { "body": "@bleskes nothing further has been done on this, it's worth a discussion to see how best to handle the aforementioned scenarios.\n", "created_at": "2016-05-12T13:31:39Z" }, { "body": "Upon discussion with Boaz, these issues are known and have been there for a long time. They don't cause incorrect behavior, they just prevent shadow replicas from deleting cleanly. It remains to be discussed if this is important to tackle and if so, what the appropriate solutions are.\n\n@bleskes @clintongormley FYI\n", "created_at": "2016-05-12T14:30:35Z" }, { "body": "@abeyad @clintongormley how do we feel about this being a blocker for 5.0? It's currently marked as a blocker due to a `// norelease` in `IndexWithShadowReplicasIT` pointing to this issue\n", "created_at": "2016-08-08T19:53:44Z" }, { "body": "I'm inclined to say these are not blockers, because they result in improper full cleanup (empty directories lying around) or simultaneous deletes that are logged as such without tripping any errors. Also, adding the failed deletes to the pending queue just means they will get executed later and essentially be no-ops because there will be nothing to delete (as the deletion was already done successfully by one of the nodes). \n\nSo, it's not great form, but given that shadow replicas aren't a ubiquitous feature and this issue doesn't actually cause incorrect behavior, I don't feel it's a blocker.\n\nIf @clintongormley agrees, I can remove the `// norelease` from the test.\n\n@dakrone what do you think?\n", "created_at": "2016-08-08T20:16:57Z" }, { "body": "+1\n", "created_at": "2016-08-11T17:04:09Z" }, { "body": "I removed the blocker label and removed the `//norelease` in 50b31ce6202683917bd007fde022d5019a501453\n", "created_at": "2016-08-11T17:09:53Z" }, { "body": "I would close this now that shadow replicas have been removed. I do not think we will go back to old branches and fix this problem.", "created_at": "2017-09-22T15:47:39Z" } ], "number": 17695, "title": "Shadow Replica indexes do not delete properly" }
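Issue No. 2 above is an ordinary filesystem race and can be reproduced with nothing but the JDK. The sketch below (plain Java, no Elasticsearch classes; file names and counts are arbitrary) lets two "nodes" recursively delete the same shared directory at once, loosely mirroring the `CyclicBarrier` structure of the test added in #17638: whichever thread loses a given file sees a `NoSuchFileException`, the condition that Elasticsearch logs as a warning and re-queues as a pending delete.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

// Standalone illustration of two nodes deleting the same shared index directory at once.
public class ConcurrentDeleteRaceSketch {
    public static void main(String[] args) throws Exception {
        Path sharedIndexDir = Files.createTempDirectory("shared-index");
        for (int i = 0; i < 200; i++) {
            Files.createFile(sharedIndexDir.resolve("segment_" + i));
        }

        int deleters = 2; // two nodes acting on the same shared filesystem path
        CyclicBarrier barrier = new CyclicBarrier(deleters);
        AtomicInteger conflicts = new AtomicInteger();
        Thread[] threads = new Thread[deleters];
        for (int t = 0; t < deleters; t++) {
            threads[t] = new Thread(() -> {
                try {
                    barrier.await(); // start both deletions at the same moment
                    try (Stream<Path> paths = Files.walk(sharedIndexDir)) {
                        // delete children before the directory, roughly what a recursive rm does
                        paths.sorted(Comparator.reverseOrder()).forEach(p -> {
                            try {
                                Files.delete(p);
                            } catch (NoSuchFileException e) {
                                conflicts.incrementAndGet(); // the other "node" deleted it first
                            } catch (IOException e) {
                                conflicts.incrementAndGet();
                            }
                        });
                    }
                } catch (Exception e) {
                    conflicts.incrementAndGet(); // walking a vanishing tree can fail too
                }
            });
            threads[t].start();
        }
        for (Thread thread : threads) {
            thread.join();
        }
        System.out.println("delete conflicts observed: " + conflicts.get());
    }
}
```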
{ "body": "This commit contains the following improvements/fixes:\n1. Renaming method names and variables to better reflect the purpose\n of the method and the semantics of the variable.\n2. For deleting indexes, replace the `closed` parameter passed to the delete index/store methods with obtaining the index's state from the `IndexSettings` that is already passed in.\n3. Adding tests to the `IndexWithShadowReplicaIT` suite, some of which show issues in the shadow replica delete process that are captured in #17695 \n", "number": 17638, "review_comments": [ { "body": "we should also say we return false if the folder doesn't exist\n", "created_at": "2016-04-11T09:10:34Z" }, { "body": "thinking about this some more - do we need to change this at all? doesn't seem like #17265 has this change\n", "created_at": "2016-04-11T09:19:46Z" }, { "body": "@bleskes #17625 also writes the same warnings to the logs - lots of \"Could not remove file\" errors because of file not found exceptions. This check is meant to guard against that by testing if there are even files to delete before trying to do the delete, which calls `IOUtils.rm`, which throws the exceptions.\n", "created_at": "2016-04-11T14:27:29Z" }, { "body": "I rather not make this change (and if we do, let's name the method \"delete*IfExists). If I understand correctly it's only needed for shadow indices, which we agreed to tackle in another PR.\n", "created_at": "2016-04-12T18:29:28Z" }, { "body": "same comment... let's leave shadows indices for later\n", "created_at": "2016-04-12T18:30:13Z" }, { "body": "since we have the metadata here (as part of IndexSettings), can we check here if the index is closed? this means we can remove the boolean parameter and not have to worry about how we got here\n", "created_at": "2016-04-12T18:31:41Z" }, { "body": "same about shadow indexes - if we only need this method for that, let's remove it\n", "created_at": "2016-04-12T18:33:04Z" }, { "body": "If we remove this, then we will remove the ability for the index dir to be deleted on a shadow replica index, so the index dir will just be laying around. I don't mind delaying this for the future PR, but just to make sure the basis are covered, wouldn't such an undeleted index dir lead to dangling indices being imported?\n", "created_at": "2016-04-12T18:39:22Z" }, { "body": "this is currently a problem as well, right? what deletes the folder now when we use, for example, dedicated master nodes? note that it's enough to delete the index state/metadata files to prevent dangling indices.\n\nAs said - if this is an existing shadow indexes problem, we need to document it via an issue/test - not solve it.\n", "created_at": "2016-04-12T18:42:33Z" }, { "body": "Then I think its not an issue, because we call `MetaDataStateFormat.deleteMetaState(nodeEnv.indexPaths(index));` in a finally block anyway. I'll revert out of these changes and just create the tests that manifest the problem for future work.\n", "created_at": "2016-04-12T18:47:12Z" }, { "body": "@bleskes To really test the scenario of dedicated master nodes only with a shadow replica index, it would seem we should simulate the situation where the master nodes don't have the shared data path mounted on its file system. Any thoughts on how to simulate this? Or is it not worth the effort here?\n", "created_at": "2016-04-13T02:31:28Z" }, { "body": "I don't think we need to get into this now. 
For now we can remove the test and just keep a test where we have some nodes with no shards assigned to them.\n", "created_at": "2016-04-13T07:42:56Z" }, { "body": "++ on adding this.\n", "created_at": "2016-04-13T07:43:58Z" }, { "body": "do we need a dedicated test for this? can't we randomize the number of nodes in the normal deletion tests? Otherwise we will need to test this situation with both closed and open indices. It's a shame IMO\n", "created_at": "2016-04-13T07:45:05Z" }, { "body": "can't we just call the other variant with a Settings.EMPTY?\n", "created_at": "2016-04-13T07:54:16Z" }, { "body": "I needed one that takes an `IndexMetaData` as its first parameter (because I need to update it to set the state to `CLOSED` for the test). That's the main distinguishing reason to add this method.\n", "created_at": "2016-04-13T12:47:01Z" }, { "body": "Fair enough\n", "created_at": "2016-04-13T14:07:09Z" }, { "body": "I don't think we should keep this test. I don't think it adds much value as is now (but I understand why you had it when trying things before!)\n", "created_at": "2016-04-14T14:23:45Z" }, { "body": "The reason I put this test in was to simulate the behavior of Point No. 2 in #17695 and when that issue gets resolved, we can remove the `AwaitsFix` annotation. What I think you're suggesting is that the fix for the issue of nodes concurrently deleting the same index directory may not involve a fix at such a low level, so we should just wait to see how we tackle it before writing such a low level test for it?\n", "created_at": "2016-04-14T14:31:39Z" }, { "body": "yes. This is too low level - I think we will fix it at the higher levels. I don't think deleteIndexDirectoryUnderLock should be lenient \n", "created_at": "2016-04-14T14:51:37Z" }, { "body": "Makes sense, I'll remove it, then merge this branch. Thanks for the review @bleskes !\n", "created_at": "2016-04-14T14:57:07Z" } ], "title": "Improvements to the IndicesService class" }
{ "commits": [ { "message": "Improvements to the IndicesService class\n\nThis commit contains the following improvements/fixes:\n 1. When a shadow replica index is deleted, its index contents is also\ndeleted.\n 2. Renaming method names and variables to better reflect the purpose\nof the method and the semantics of the variable." }, { "message": "Next iteration of IndicesService improvements." }, { "message": "Further changes" }, { "message": "Fixes based on code review comments" } ], "files": [ { "diff": "@@ -1160,7 +1160,6 @@\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]indices[/\\\\]IndicesLifecycleListenerIT.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]indices[/\\\\]IndicesLifecycleListenerSingleNodeTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]indices[/\\\\]IndicesOptionsIntegrationIT.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]indices[/\\\\]IndicesServiceTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]indices[/\\\\]analysis[/\\\\]PreBuiltAnalyzerIntegrationIT.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]indices[/\\\\]analyze[/\\\\]AnalyzeActionIT.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]indices[/\\\\]analyze[/\\\\]HunspellServiceIT.java\" checks=\"LineLength\" />", "filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml", "status": "modified" }, { "diff": "@@ -29,22 +29,11 @@\n import java.net.URL;\n import java.nio.charset.Charset;\n import java.nio.charset.CharsetDecoder;\n-import java.nio.file.DirectoryNotEmptyException;\n import java.nio.file.DirectoryStream;\n-import java.nio.file.FileAlreadyExistsException;\n-import java.nio.file.FileVisitResult;\n import java.nio.file.Files;\n import java.nio.file.Path;\n-import java.nio.file.SimpleFileVisitor;\n-import java.nio.file.StandardCopyOption;\n-import java.nio.file.attribute.BasicFileAttributes;\n-import java.util.Arrays;\n-import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.stream.StreamSupport;\n \n-import static java.nio.file.FileVisitResult.CONTINUE;\n-import static java.nio.file.FileVisitResult.SKIP_SUBTREE;\n-\n /**\n * Elasticsearch utils to work with {@link java.nio.file.Path}\n */", "filename": "core/src/main/java/org/elasticsearch/common/io/FileSystemUtils.java", "status": "modified" }, { "diff": "@@ -190,8 +190,6 @@ public void onRemoval(ShardId shardId, String fieldName, boolean wasEvicted, lon\n });\n this.cleanInterval = INDICES_CACHE_CLEAN_INTERVAL_SETTING.get(settings);\n this.cacheCleaner = new CacheCleaner(indicesFieldDataCache, indicesRequestCache, logger, threadPool, this.cleanInterval);\n-\n-\n }\n \n @Override\n@@ -459,7 +457,7 @@ private void removeIndex(Index index, String reason, boolean delete) {\n final IndexSettings indexSettings = indexService.getIndexSettings();\n listener.afterIndexDeleted(indexService.index(), indexSettings.getSettings());\n // now we are done - try to wipe data on disk if possible\n- deleteIndexStore(reason, indexService.index(), indexSettings, false);\n+ deleteIndexStore(reason, indexService.index(), indexSettings);\n }\n 
} catch (IOException ex) {\n throw new ElasticsearchException(\"failed to remove index \" + index, ex);\n@@ -515,26 +513,32 @@ public void deleteIndex(Index index, String reason) throws IOException {\n removeIndex(index, reason, true);\n }\n \n- public void deleteClosedIndex(String reason, IndexMetaData metaData, ClusterState clusterState) {\n+ /**\n+ * Deletes an index that is not assigned to this node. This method cleans up all disk folders relating to the index\n+ * but does not deal with in-memory structures. For those call {@link #deleteIndex(Index, String)}\n+ */\n+ public void deleteUnassignedIndex(String reason, IndexMetaData metaData, ClusterState clusterState) {\n if (nodeEnv.hasNodeFile()) {\n String indexName = metaData.getIndex().getName();\n try {\n if (clusterState.metaData().hasIndex(indexName)) {\n final IndexMetaData index = clusterState.metaData().index(indexName);\n- throw new IllegalStateException(\"Can't delete closed index store for [\" + indexName + \"] - it's still part of the cluster state [\" + index.getIndexUUID() + \"] [\" + metaData.getIndexUUID() + \"]\");\n+ throw new IllegalStateException(\"Can't delete unassigned index store for [\" + indexName + \"] - it's still part of the cluster state [\" + index.getIndexUUID() + \"] [\" + metaData.getIndexUUID() + \"]\");\n }\n- deleteIndexStore(reason, metaData, clusterState, true);\n+ deleteIndexStore(reason, metaData, clusterState);\n } catch (IOException e) {\n- logger.warn(\"[{}] failed to delete closed index\", e, metaData.getIndex());\n+ logger.warn(\"[{}] failed to delete unassigned index (reason [{}])\", e, metaData.getIndex(), reason);\n }\n }\n }\n \n /**\n * Deletes the index store trying to acquire all shards locks for this index.\n * This method will delete the metadata for the index even if the actual shards can't be locked.\n+ *\n+ * Package private for testing\n */\n- public void deleteIndexStore(String reason, IndexMetaData metaData, ClusterState clusterState, boolean closed) throws IOException {\n+ void deleteIndexStore(String reason, IndexMetaData metaData, ClusterState clusterState) throws IOException {\n if (nodeEnv.hasNodeFile()) {\n synchronized (this) {\n Index index = metaData.getIndex();\n@@ -547,22 +551,25 @@ public void deleteIndexStore(String reason, IndexMetaData metaData, ClusterState\n // we do not delete the store if it is a master eligible node and the index is still in the cluster state\n // because we want to keep the meta data for indices around even if no shards are left here\n final IndexMetaData idxMeta = clusterState.metaData().index(index.getName());\n- throw new IllegalStateException(\"Can't delete closed index store for [\" + index.getName() + \"] - it's still part of the cluster state [\" + idxMeta.getIndexUUID() + \"] [\" + metaData.getIndexUUID() + \"]\");\n+ throw new IllegalStateException(\"Can't delete index store for [\" + index.getName() + \"] - it's still part of the \" +\n+ \"cluster state [\" + idxMeta.getIndexUUID() + \"] [\" + metaData.getIndexUUID() + \"], \" +\n+ \"we are master eligible, so will keep the index metadata even if no shards are left.\");\n }\n }\n final IndexSettings indexSettings = buildIndexSettings(metaData);\n- deleteIndexStore(reason, indexSettings.getIndex(), indexSettings, closed);\n+ deleteIndexStore(reason, indexSettings.getIndex(), indexSettings);\n }\n }\n \n- private void deleteIndexStore(String reason, Index index, IndexSettings indexSettings, boolean closed) throws IOException {\n+ private void deleteIndexStore(String reason, Index 
index, IndexSettings indexSettings) throws IOException {\n boolean success = false;\n try {\n // we are trying to delete the index store here - not a big deal if the lock can't be obtained\n // the store metadata gets wiped anyway even without the lock this is just best effort since\n // every shards deletes its content under the shard lock it owns.\n logger.debug(\"{} deleting index store reason [{}]\", index, reason);\n- if (canDeleteIndexContents(index, indexSettings, closed)) {\n+ if (canDeleteIndexContents(index, indexSettings)) {\n+ // its safe to delete all index metadata and shard data\n nodeEnv.deleteIndexDirectorySafe(index, 0, indexSettings);\n }\n success = true;\n@@ -617,11 +624,11 @@ public void deleteShardStore(String reason, ShardId shardId, ClusterState cluste\n logger.debug(\"{} deleted shard reason [{}]\", shardId, reason);\n \n if (clusterState.nodes().getLocalNode().isMasterNode() == false && // master nodes keep the index meta data, even if having no shards..\n- canDeleteIndexContents(shardId.getIndex(), indexSettings, false)) {\n+ canDeleteIndexContents(shardId.getIndex(), indexSettings)) {\n if (nodeEnv.findAllShardIds(shardId.getIndex()).isEmpty()) {\n try {\n // note that deleteIndexStore have more safety checks and may throw an exception if index was concurrently created.\n- deleteIndexStore(\"no longer used\", metaData, clusterState, false);\n+ deleteIndexStore(\"no longer used\", metaData, clusterState);\n } catch (Exception e) {\n // wrap the exception to indicate we already deleted the shard\n throw new ElasticsearchException(\"failed to delete unused index after deleting its last shard (\" + shardId + \")\", e);\n@@ -633,18 +640,19 @@ public void deleteShardStore(String reason, ShardId shardId, ClusterState cluste\n }\n \n /**\n- * This method returns true if the current node is allowed to delete the\n- * given index. If the index uses a shared filesystem this method always\n- * returns false.\n+ * This method returns true if the current node is allowed to delete the given index.\n+ * This is the case if the index is deleted in the metadata or there is no allocation\n+ * on the local node and the index isn't on a shared file system.\n * @param index {@code Index} to check whether deletion is allowed\n * @param indexSettings {@code IndexSettings} for the given index\n * @return true if the index can be deleted on this node\n */\n- public boolean canDeleteIndexContents(Index index, IndexSettings indexSettings, boolean closed) {\n- final IndexService indexService = indexService(index);\n- // Closed indices may be deleted, even if they are on a shared\n- // filesystem. 
Since it is closed we aren't deleting it for relocation\n- if (indexSettings.isOnSharedFilesystem() == false || closed) {\n+ public boolean canDeleteIndexContents(Index index, IndexSettings indexSettings) {\n+ // index contents can be deleted if the index is not on a shared file system,\n+ // or if its on a shared file system but its an already closed index (so all\n+ // its resources have already been relinquished)\n+ if (indexSettings.isOnSharedFilesystem() == false || indexSettings.getIndexMetaData().getState() == IndexMetaData.State.CLOSE) {\n+ final IndexService indexService = indexService(index);\n if (indexService == null && nodeEnv.hasNodeFile()) {\n return true;\n }", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java", "status": "modified" }, { "diff": "@@ -236,7 +236,7 @@ private void applyDeletedIndices(final ClusterChangedEvent event) {\n } else {\n final IndexMetaData metaData = previousState.metaData().getIndexSafe(index);\n indexSettings = new IndexSettings(metaData, settings);\n- indicesService.deleteClosedIndex(\"closed index no longer part of the metadata\", metaData, event.state());\n+ indicesService.deleteUnassignedIndex(\"closed index no longer part of the metadata\", metaData, event.state());\n }\n try {\n nodeIndexDeletedAction.nodeIndexDeleted(event.state(), index, indexSettings, localNodeId);", "filename": "core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ExceptionsHelper;\n+import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n@@ -29,16 +30,21 @@\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.index.IndexResponse;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.routing.RoutingNode;\n import org.elasticsearch.cluster.routing.RoutingNodes;\n+import org.elasticsearch.common.Priority;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.env.Environment;\n+import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.ShadowIndexShard;\n+import org.elasticsearch.index.store.FsDirectoryService;\n import org.elasticsearch.index.translog.TranslogStats;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.recovery.RecoveryTargetService;\n@@ -59,6 +65,7 @@\n import java.nio.file.Path;\n import java.util.ArrayList;\n import java.util.Collection;\n+import java.util.Collections;\n import java.util.List;\n import java.util.concurrent.CopyOnWriteArrayList;\n import java.util.concurrent.CountDownLatch;\n@@ -87,9 +94,9 @@ private Settings nodeSettings(Path dataPath) {\n \n private Settings nodeSettings(String dataPath) {\n return Settings.builder()\n- 
.put(\"node.add_id_to_custom_path\", false)\n+ .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), false)\n .put(Environment.PATH_SHARED_DATA_SETTING.getKey(), dataPath)\n- .put(\"index.store.fs.fs_lock\", randomFrom(\"native\", \"simple\"))\n+ .put(FsDirectoryService.INDEX_LOCK_FACTOR_SETTING.getKey(), randomFrom(\"native\", \"simple\"))\n .build();\n }\n \n@@ -543,20 +550,26 @@ public void testIndexWithShadowReplicasCleansUp() throws Exception {\n Path dataPath = createTempDir();\n Settings nodeSettings = nodeSettings(dataPath);\n \n- int nodeCount = randomIntBetween(2, 5);\n- internalCluster().startNodesAsync(nodeCount, nodeSettings).get();\n- String IDX = \"test\";\n+ final int nodeCount = randomIntBetween(2, 5);\n+ logger.info(\"--> starting {} nodes\", nodeCount);\n+ final List<String> nodes = internalCluster().startNodesAsync(nodeCount, nodeSettings).get();\n+ final String IDX = \"test\";\n+ final Tuple<Integer, Integer> numPrimariesAndReplicas = randomPrimariesAndReplicas(nodeCount);\n+ final int numPrimaries = numPrimariesAndReplicas.v1();\n+ final int numReplicas = numPrimariesAndReplicas.v2();\n+ logger.info(\"--> creating index {} with {} primary shards and {} replicas\", IDX, numPrimaries, numReplicas);\n \n Settings idxSettings = Settings.builder()\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(1, nodeCount - 1))\n- .put(IndexMetaData.SETTING_DATA_PATH, dataPath.toAbsolutePath().toString())\n- .put(IndexMetaData.SETTING_SHADOW_REPLICAS, true)\n- .put(IndexMetaData.SETTING_SHARED_FILESYSTEM, true)\n- .build();\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, numPrimaries)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, numReplicas)\n+ .put(IndexMetaData.SETTING_DATA_PATH, dataPath.toAbsolutePath().toString())\n+ .put(IndexMetaData.SETTING_SHADOW_REPLICAS, true)\n+ .put(IndexMetaData.SETTING_SHARED_FILESYSTEM, true)\n+ .build();\n \n prepareCreate(IDX).setSettings(idxSettings).addMapping(\"doc\", \"foo\", \"type=text\").get();\n ensureGreen(IDX);\n+\n client().prepareIndex(IDX, \"doc\", \"1\").setSource(\"foo\", \"bar\").get();\n client().prepareIndex(IDX, \"doc\", \"2\").setSource(\"foo\", \"bar\").get();\n flushAndRefresh(IDX);\n@@ -570,9 +583,13 @@ public void testIndexWithShadowReplicasCleansUp() throws Exception {\n SearchResponse resp = client().prepareSearch(IDX).setQuery(matchAllQuery()).get();\n assertHitCount(resp, 2);\n \n+ logger.info(\"--> deleting index \" + IDX);\n assertAcked(client().admin().indices().prepareDelete(IDX));\n \n assertPathHasBeenCleared(dataPath);\n+ //norelease\n+ //TODO: uncomment the test below when https://github.com/elastic/elasticsearch/issues/17695 is resolved.\n+ //assertIndicesDirsDeleted(nodes);\n }\n \n /**\n@@ -583,7 +600,7 @@ public void testShadowReplicaNaturalRelocation() throws Exception {\n Path dataPath = createTempDir();\n Settings nodeSettings = nodeSettings(dataPath);\n \n- internalCluster().startNodesAsync(2, nodeSettings).get();\n+ final List<String> nodes = internalCluster().startNodesAsync(2, nodeSettings).get();\n String IDX = \"test\";\n \n Settings idxSettings = Settings.builder()\n@@ -608,6 +625,7 @@ public void testShadowReplicaNaturalRelocation() throws Exception {\n // start a third node, with 5 shards each on the other nodes, they\n // should relocate some to the third node\n final String node3 = internalCluster().startNode(nodeSettings);\n+ nodes.add(node3);\n \n assertBusy(new Runnable() {\n @Override\n@@ -630,6 +648,9 @@ public void 
run() {\n assertAcked(client().admin().indices().prepareDelete(IDX));\n \n assertPathHasBeenCleared(dataPath);\n+ //norelease\n+ //TODO: uncomment the test below when https://github.com/elastic/elasticsearch/issues/17695 is resolved.\n+ //assertIndicesDirsDeleted(nodes);\n }\n \n public void testShadowReplicasUsingFieldData() throws Exception {\n@@ -779,49 +800,104 @@ public void testIndexOnSharedFSRecoversToAnyNode() throws Exception {\n \n public void testDeletingClosedIndexRemovesFiles() throws Exception {\n Path dataPath = createTempDir();\n- Path dataPath2 = createTempDir();\n Settings nodeSettings = nodeSettings(dataPath.getParent());\n \n- internalCluster().startNodesAsync(2, nodeSettings).get();\n- String IDX = \"test\";\n- String IDX2 = \"test2\";\n+ final int numNodes = randomIntBetween(2, 5);\n+ logger.info(\"--> starting {} nodes\", numNodes);\n+ final List<String> nodes = internalCluster().startNodesAsync(numNodes, nodeSettings).get();\n+ final String IDX = \"test\";\n+ final Tuple<Integer, Integer> numPrimariesAndReplicas = randomPrimariesAndReplicas(numNodes);\n+ final int numPrimaries = numPrimariesAndReplicas.v1();\n+ final int numReplicas = numPrimariesAndReplicas.v2();\n+ logger.info(\"--> creating index {} with {} primary shards and {} replicas\", IDX, numPrimaries, numReplicas);\n \n+ assert numPrimaries > 0;\n+ assert numReplicas >= 0;\n Settings idxSettings = Settings.builder()\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 5)\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, numPrimaries)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, numReplicas)\n .put(IndexMetaData.SETTING_DATA_PATH, dataPath.toAbsolutePath().toString())\n .put(IndexMetaData.SETTING_SHADOW_REPLICAS, true)\n .put(IndexMetaData.SETTING_SHARED_FILESYSTEM, true)\n .build();\n- Settings idx2Settings = Settings.builder()\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 5)\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)\n- .put(IndexMetaData.SETTING_DATA_PATH, dataPath2.toAbsolutePath().toString())\n- .put(IndexMetaData.SETTING_SHADOW_REPLICAS, true)\n- .put(IndexMetaData.SETTING_SHARED_FILESYSTEM, true)\n- .build();\n \n prepareCreate(IDX).setSettings(idxSettings).addMapping(\"doc\", \"foo\", \"type=text\").get();\n- prepareCreate(IDX2).setSettings(idx2Settings).addMapping(\"doc\", \"foo\", \"type=text\").get();\n- ensureGreen(IDX, IDX2);\n+ ensureGreen(IDX);\n \n int docCount = randomIntBetween(10, 100);\n List<IndexRequestBuilder> builders = new ArrayList<>();\n for (int i = 0; i < docCount; i++) {\n builders.add(client().prepareIndex(IDX, \"doc\", i + \"\").setSource(\"foo\", \"bar\"));\n- builders.add(client().prepareIndex(IDX2, \"doc\", i + \"\").setSource(\"foo\", \"bar\"));\n }\n indexRandom(true, true, true, builders);\n- flushAndRefresh(IDX, IDX2);\n+ flushAndRefresh(IDX);\n \n logger.info(\"--> closing index {}\", IDX);\n client().admin().indices().prepareClose(IDX).get();\n+ ensureGreen(IDX);\n \n- logger.info(\"--> deleting non-closed index\");\n- client().admin().indices().prepareDelete(IDX2).get();\n- assertPathHasBeenCleared(dataPath2);\n logger.info(\"--> deleting closed index\");\n client().admin().indices().prepareDelete(IDX).get();\n+\n assertPathHasBeenCleared(dataPath);\n+ assertIndicesDirsDeleted(nodes);\n+ }\n+\n+ public void testNodeJoinsWithoutShadowReplicaConfigured() throws Exception {\n+ Path dataPath = createTempDir();\n+ Settings nodeSettings = nodeSettings(dataPath);\n+\n+ internalCluster().startNodesAsync(2, 
nodeSettings).get();\n+ String IDX = \"test\";\n+\n+ Settings idxSettings = Settings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 2)\n+ .put(IndexMetaData.SETTING_DATA_PATH, dataPath.toAbsolutePath().toString())\n+ .put(IndexMetaData.SETTING_SHADOW_REPLICAS, true)\n+ .put(IndexMetaData.SETTING_SHARED_FILESYSTEM, true)\n+ .build();\n+\n+ prepareCreate(IDX).setSettings(idxSettings).addMapping(\"doc\", \"foo\", \"type=text\").get();\n+ ensureYellow(IDX);\n+\n+ client().prepareIndex(IDX, \"doc\", \"1\").setSource(\"foo\", \"bar\").get();\n+ client().prepareIndex(IDX, \"doc\", \"2\").setSource(\"foo\", \"bar\").get();\n+ flushAndRefresh(IDX);\n+\n+ internalCluster().startNodesAsync(1).get();\n+ ensureYellow(IDX);\n+\n+ final ClusterHealthResponse clusterHealth = client().admin().cluster()\n+ .prepareHealth()\n+ .setWaitForEvents(Priority.LANGUID)\n+ .execute()\n+ .actionGet();\n+ assertThat(clusterHealth.getNumberOfNodes(), equalTo(3));\n+ // the new node is not configured for a shadow replica index, so no shards should have been assigned to it\n+ assertThat(clusterHealth.getStatus(), equalTo(ClusterHealthStatus.YELLOW));\n+ }\n+\n+ private static void assertIndicesDirsDeleted(final List<String> nodes) throws IOException {\n+ for (String node : nodes) {\n+ final NodeEnvironment nodeEnv = internalCluster().getInstance(NodeEnvironment.class, node);\n+ assertThat(nodeEnv.availableIndexFolders(), equalTo(Collections.emptySet()));\n+ }\n }\n+\n+ private static Tuple<Integer, Integer> randomPrimariesAndReplicas(final int numNodes) {\n+ final int numPrimaries;\n+ final int numReplicas;\n+ if (randomBoolean()) {\n+ // test with some nodes having no shards\n+ numPrimaries = 1;\n+ numReplicas = randomIntBetween(0, numNodes - 2);\n+ } else {\n+ // test with all nodes having at least one shard\n+ numPrimaries = randomIntBetween(1, 5);\n+ numReplicas = numNodes - 1;\n+ }\n+ return Tuple.tuple(numPrimaries, numReplicas);\n+ }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/index/IndexWithShadowReplicasIT.java", "status": "modified" }, { "diff": "@@ -34,10 +34,16 @@\n import org.elasticsearch.test.ESSingleNodeTestCase;\n import org.elasticsearch.test.IndexSettingsModule;\n \n+import java.io.IOException;\n+import java.nio.file.Path;\n+import java.util.concurrent.BrokenBarrierException;\n+import java.util.concurrent.CyclicBarrier;\n import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicInteger;\n \n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n+import static org.hamcrest.Matchers.equalTo;\n \n public class IndicesServiceTests extends ESSingleNodeTestCase {\n \n@@ -63,8 +69,13 @@ public void testCanDeleteIndexContent() {\n .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 4))\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(0, 3))\n .build());\n- assertFalse(\"shard on shared filesystem\", indicesService.canDeleteIndexContents(idxSettings.getIndex(), idxSettings, false));\n- assertTrue(\"shard on shared filesystem and closed\", indicesService.canDeleteIndexContents(idxSettings.getIndex(), idxSettings, true));\n+ assertFalse(\"shard on shared filesystem\", indicesService.canDeleteIndexContents(idxSettings.getIndex(), idxSettings));\n+\n+ final IndexMetaData.Builder newIndexMetaData = IndexMetaData.builder(idxSettings.getIndexMetaData());\n+ 
newIndexMetaData.state(IndexMetaData.State.CLOSE);\n+ idxSettings = IndexSettingsModule.newIndexSettings(newIndexMetaData.build());\n+ assertTrue(\"shard on shared filesystem, but closed, so it should be deletable\",\n+ indicesService.canDeleteIndexContents(idxSettings.getIndex(), idxSettings));\n }\n \n public void testCanDeleteShardContent() {\n@@ -81,7 +92,8 @@ public void testCanDeleteShardContent() {\n test.removeShard(0, \"boom\");\n assertTrue(\"shard is removed\", indicesService.canDeleteShardContent(shardId, test.getIndexSettings()));\n ShardId notAllocated = new ShardId(test.index(), 100);\n- assertFalse(\"shard that was never on this node should NOT be deletable\", indicesService.canDeleteShardContent(notAllocated, test.getIndexSettings()));\n+ assertFalse(\"shard that was never on this node should NOT be deletable\",\n+ indicesService.canDeleteShardContent(notAllocated, test.getIndexSettings()));\n }\n \n public void testDeleteIndexStore() throws Exception {\n@@ -92,7 +104,7 @@ public void testDeleteIndexStore() throws Exception {\n assertTrue(test.hasShard(0));\n \n try {\n- indicesService.deleteIndexStore(\"boom\", firstMetaData, clusterService.state(), false);\n+ indicesService.deleteIndexStore(\"boom\", firstMetaData, clusterService.state());\n fail();\n } catch (IllegalStateException ex) {\n // all good\n@@ -119,7 +131,7 @@ public void testDeleteIndexStore() throws Exception {\n assertTrue(path.exists());\n \n try {\n- indicesService.deleteIndexStore(\"boom\", secondMetaData, clusterService.state(), false);\n+ indicesService.deleteIndexStore(\"boom\", secondMetaData, clusterService.state());\n fail();\n } catch (IllegalStateException ex) {\n // all good\n@@ -129,7 +141,7 @@ public void testDeleteIndexStore() throws Exception {\n \n // now delete the old one and make sure we resolve against the name\n try {\n- indicesService.deleteIndexStore(\"boom\", firstMetaData, clusterService.state(), false);\n+ indicesService.deleteIndexStore(\"boom\", firstMetaData, clusterService.state());\n fail();\n } catch (IllegalStateException ex) {\n // all good\n@@ -187,4 +199,54 @@ public void testPendingTasks() throws Exception {\n assertAcked(client().admin().indices().prepareOpen(\"test\"));\n \n }\n+\n+ @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/17695\")\n+ public void testDeletingSameIndexDirectoryFromConcurrentProcesses() throws Exception {\n+ final Path dataPath = createTempDir();\n+ final String indexName = \"test\";\n+ final Settings settings = Settings.builder()\n+ .put(IndexMetaData.SETTING_SHADOW_REPLICAS, true)\n+ // so we delete custom location as well\n+ .put(IndexMetaData.SETTING_DATA_PATH, dataPath.toAbsolutePath().toString())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 4))\n+ .build();\n+ final IndexService idxService = createIndex(indexName, settings);\n+ client().prepareIndex(indexName, \"type\", \"1\").setSource(\"field\", \"value\").setRefresh(true).get();\n+ client().admin().indices().prepareFlush(indexName).get();\n+ assertHitCount(client().prepareSearch(indexName).get(), 1);\n+ client().admin().indices().prepareClose(indexName).execute().get();\n+\n+ final int numSimulatedNodes = randomIntBetween(2, 20);\n+ final NodeEnvironment nodeEnv = getInstanceFromNode(NodeEnvironment.class);\n+ final CyclicBarrier barrier = new CyclicBarrier(numSimulatedNodes + 1); // extra one because the current thread waits too\n+ final AtomicInteger errorCount = new AtomicInteger(0);\n+ for (int i = 0; i < numSimulatedNodes; i++) {\n+ 
final Thread thread = new Thread(() -> {\n+ try {\n+ try {\n+ barrier.await();\n+ nodeEnv.deleteIndexDirectoryUnderLock(idxService.index(), idxService.getIndexSettings());\n+ } catch (IOException e) {\n+ // a race condition in deleting the index directory caused file not found exceptions\n+ errorCount.incrementAndGet();\n+ } catch (BrokenBarrierException | InterruptedException e) {\n+ throw new AssertionError(e);\n+ }\n+ } finally {\n+ try {\n+ barrier.await();\n+ } catch (BrokenBarrierException | InterruptedException e) {\n+ throw new AssertionError(e);\n+ }\n+ }\n+ });\n+ thread.start();\n+ }\n+\n+ // wait for all threads to be ready\n+ barrier.await();\n+ // wait for all threads to finish\n+ barrier.await();\n+ assertThat(errorCount.get(), equalTo(0));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/indices/IndicesServiceTests.java", "status": "modified" }, { "diff": "@@ -65,4 +65,13 @@ public static IndexSettings newIndexSettings(Index index, Settings settings, Set\n }\n return new IndexSettings(metaData, Settings.EMPTY, (idx) -> Regex.simpleMatch(idx, metaData.getIndex().getName()), new IndexScopedSettings(Settings.EMPTY, settingSet));\n }\n+\n+ public static IndexSettings newIndexSettings(final IndexMetaData indexMetaData, Setting<?>... setting) {\n+ Set<Setting<?>> settingSet = new HashSet<>(IndexScopedSettings.BUILT_IN_INDEX_SETTINGS);\n+ if (setting.length > 0) {\n+ settingSet.addAll(Arrays.asList(setting));\n+ }\n+ return new IndexSettings(indexMetaData, Settings.EMPTY, (idx) -> Regex.simpleMatch(idx, indexMetaData.getIndex().getName()),\n+ new IndexScopedSettings(Settings.EMPTY, settingSet));\n+ }\n }", "filename": "test/framework/src/main/java/org/elasticsearch/test/IndexSettingsModule.java", "status": "modified" } ] }
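The concurrent-deletion test added in the diff above coordinates its simulated nodes with a `CyclicBarrier` sized one larger than the number of worker threads. As a hedged illustration of that coordination pattern outside Elasticsearch, here is a minimal standalone sketch; every class and method name below (including `racyOperation`) is invented for the example and is not Elasticsearch code.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the barrier pattern: N workers plus the coordinator share one barrier,
// so all workers start the contended operation at the same instant and the coordinator can
// wait for them to finish before asserting.
public class BarrierPatternSketch {
    public static void main(String[] args) throws Exception {
        final int workers = 8;
        final CyclicBarrier barrier = new CyclicBarrier(workers + 1); // +1 for the coordinator
        final AtomicInteger errors = new AtomicInteger(0);

        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                try {
                    barrier.await();          // wait until every worker is ready
                    racyOperation();          // all workers race here
                } catch (Exception e) {
                    errors.incrementAndGet(); // count the failure for the final assertion
                } finally {
                    try {
                        barrier.await();      // signal completion to the coordinator
                    } catch (InterruptedException | BrokenBarrierException e) {
                        throw new AssertionError(e);
                    }
                }
            }).start();
        }

        barrier.await(); // release all workers at once
        barrier.await(); // wait for all workers to finish
        if (errors.get() != 0) {
            throw new AssertionError(errors.get() + " workers failed");
        }
    }

    private static void racyOperation() {
        // placeholder for the contended work, e.g. deleting the same index directory
    }
}
```

The extra barrier party for the coordinating thread is what lets the main thread both release and join the workers without sleeps or timeouts.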
{ "body": "This ticket is meant to capture an issue which was discovered as part of the work done in #7493 , which contains a [failing reproduction test](https://github.com/elasticsearch/elasticsearch/blob/596a4a073584c4262d574828c9caea35b5ed1de5/src/test/java/org/elasticsearch/discovery/DiscoveryWithServiceDisruptions.java#L375) with @awaitFix.\n\nIf a network partition separates a node from the master, there is some window of time before the node detects it. The length of the window is dependent on the type of the partition. This window is extremely small if a socket is broken. More adversarial partitions, for example, silently dropping requests without breaking the socket can take longer (up to 3x30s using current defaults).\n\nIf the node hosts a _primary_ shard at the moment of partition, and ends up being isolated from the cluster (which could have resulted in Split Brain before), some documents that are being indexed into the primary _may_ be lost if they fail to reach one of the allocated replicas (due to the partition) and that replica is later promoted to primary by the master.\n", "comments": [ { "body": "I am curious to learn what your current thinking on fixing the issue is. I believe so long as we are ensuring the write is acknowledged by `WriteConsistencyLevel.QUORUM` or `WriteConsistencyLevel.ALL`, the problem should not theoretically happen. This seems to be what `TransportShardReplicationOperationAction` is aiming at, but may be buggy?\n\nAs an aside, can you point me at the primary-selection logic used by Elasticsearch?\n", "created_at": "2014-10-21T06:54:44Z" }, { "body": "@shikhar the write consistency check works at the moment based of the cluster state of the node that hosts the primary. That means that it can take some time (again, when the network is just dropping requests, socket disconnects are quick) before the master detects a node does not respond to pings and removes it from the cluster states (or that a node detects it's not connected to a master). The first step is improving transparency w.r.t replica shards indexing errors (see #7994). That will help expose when a document was not successfully indexed to all replicas. After that we plan to continue with improving primary shard promotion. Current code is here: https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java#L271\n", "created_at": "2014-10-21T08:15:04Z" }, { "body": "Ah I see, my thinking was that the WCL check be verified _both_ before and after the write has been sent. The after is what really matters. So it seems you are suggesting that the responsibility of verifying how many replicas a write was acknowledged by, will be borne by the requestor? I think the terminology around \"write consistency level\" check may have to be re-considered then!\n\nFrom the primary selection logic I can't spot anywhere where it's trying to pick the most \"recent\" replica of the candidates. Does ES currently exercise any such preference?\n", "created_at": "2014-10-21T19:26:25Z" }, { "body": "> So it seems you are suggesting that the responsibility of verifying how many replicas a write was acknowledged by, will be borne by the requestor? \n\nThe PR I mentioned is just a first step to bring more transparency into the process, by no means the goal. \n\n> From the primary selection logic I can't spot anywhere where it's trying to pick the most \"recent\" replica of the candidates. 
Does ES currently exercise any such preference?\n\n\"recent\" is very tricky when you index concurrently different documents of different sizes on different nodes. Depending on how things run, there is no notion of a clear \"recent\" shard as each replica may be behind on different documents, all in flight. I currently have some thoughts on how to approach this better but it's early stages. One of the options is take make a intermediate step which will indeed involve some heuristic around \"recency\".\n", "created_at": "2014-10-21T19:35:03Z" }, { "body": "> \"recent\" is very tricky when you index concurrently different documents of different sizes on different nodes. Depending on how things run, there is no notion of a clear \"recent\" shard as each replica may be behind on different documents, all in flight. I currently have some thoughts on how to approach this better but it's early stages. One of the options is take make a intermediate step which will indeed involve some heuristic around \"recency\".\n\nAgreed that it's tricky. \n\nIt seems to me that what's required is a shard-specific monotonic counter, and since all writes go through the primary this can be safely implemented. Is this blocking on the \"sequence ID\" stuff I think I saw some talk of? Is there a ticket for that?\n", "created_at": "2014-10-21T20:16:10Z" }, { "body": "> It seems to me that what's required is a shard-specific monotonic counter, and since all writes go through the primary this can be safely implemented. Is this blocking on the \"sequence ID\" stuff I think I saw some talk of? \n\nYou read our minds :)\n", "created_at": "2014-10-21T20:27:14Z" }, { "body": "[recommendation](https://twitter.com/aphyr/status/524599768526233601) from @aphyr for this problem: viewstamped replication\n", "created_at": "2014-10-21T20:28:30Z" }, { "body": "Or Paxos, or ZAB, or Raft, or ...\n", "created_at": "2014-10-21T20:39:54Z" }, { "body": "Chiming with a related note that I mentioned on the mailing list (@shikhar linked me here) re: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/elasticsearch/M17mgdZnikk/Vk5lVIRjIFAJ. This is failure mode that can happen without a network partition... just crashing nodes (which you can easily get with some long GC pauses) \n\n## \n\nI think the monotonic counters are a good solution to this, but only if they count something that indicates not only state (The next document inserted to the shard should be document 1000), but also size (which implies that I have 999 documents in my copy of the shard). This way, if you end up in a position where a partially-replicated shard is promoted to master (because it has the only copy of the shard remaining in the cluster), you can now offer the user some interesting cluster configuration options: \n\n1) serve the data I have, but accept no writes/updates (until a `full` shard returns to the cluster)\n2) temporarily close the index / 500 error (until a `full` shard returns to the cluster)\n3) promote what I have to master (and re-replicate my copy to other nodes when they re-join the cluster)\n\nWithout _knowing_ that a shard is in this \"partial-data\" state, you couldn't make the choice. I would personally choose #1 most of the time, but I can see use cases for all three options. I would argue that #3 is what is happening presently. 
While this would add overhead to each write/update (you would need to count the number of documents in the shard EACH write), I think that allowing ES to run in this \"more safe\" mode is a good option. Hopefully the suggestion isn't too crazy, as this would only add a check on the local copy of the data, and we probably only need to do it on the master shard. \n", "created_at": "2014-10-24T06:50:14Z" }, { "body": "> 3) promote what I have to master (and re-replicate my copy to other nodes when they re-join the cluster)\n\nThere's some [great literature that addresses this problem](http://web.stanford.edu/class/cs347/reading/zab.pdf).\n", "created_at": "2014-10-24T20:47:32Z" }, { "body": "@evantahler \n\n> This way, if you end up in a position where a partially-replicated shard is promoted to master (because it has the only copy of the shard remaining in the cluster)\n\nThis should never happen. ES prefers to go to red state and block indexing to promoting half copies to primaries. If it did it is a major bug and I would request you open another issue about it (this one is about something else). \n", "created_at": "2014-10-24T21:27:40Z" }, { "body": "linking #10708 \n", "created_at": "2015-05-05T17:57:30Z" }, { "body": "Since this issue is related to in-flight documents. Do you think there is a risk to loose existing document during primary shard relocation (cluster rebalancing after adding a new node for instance )?\n", "created_at": "2015-07-13T09:45:59Z" }, { "body": "@JeanFrancoisContour this issue relates to documents that are wrongfully acked. I.e., ES acknowledge them but they didn't really reach all the replicas. They are lost when the primary is removed in favour of one of the other replica due to a network partition that isolates the primary. It should effect primary relocation. If you have issues there do please report by opening a different ticket.\n", "created_at": "2015-07-16T09:34:06Z" }, { "body": "Ok thanks, so if we can afford to send data twice (same _id), in real time for the first event and a few hour later (bulk) for the second try, we are pretty confident in ES overall ?\n", "created_at": "2015-07-16T17:05:54Z" }, { "body": "For the record, the majority of the work to fix this can be found at #14252\n", "created_at": "2016-04-07T07:55:47Z" } ], "number": 7572, "title": "[Indexing] A network partition can cause in flight documents to be lost" }
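The comments above repeatedly point at a shard-specific monotonic counter (what #14252 pursues as sequence numbers) as the basis for reasoning about which copy is "more recent". The following toy sketch only illustrates that idea under simplified assumptions; none of the class or method names correspond to real Elasticsearch classes, and real sequence numbers also have to handle gaps and out-of-order delivery.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy sketch of the per-shard monotonic counter idea: the primary stamps every operation with
// an increasing number, and each copy remembers the highest number it has applied.
public class SequenceNumberSketch {

    /** Primary-side counter: every indexing operation gets the next number. */
    static final class PrimarySequencer {
        private final AtomicLong next = new AtomicLong(0);

        long nextSeqNo() {
            return next.getAndIncrement();
        }
    }

    /** Copy-side bookkeeping: the highest sequence number this copy has applied. */
    static final class ShardCopy {
        private long maxAppliedSeqNo = -1;

        void apply(long seqNo) {
            maxAppliedSeqNo = Math.max(maxAppliedSeqNo, seqNo);
        }

        long maxAppliedSeqNo() {
            return maxAppliedSeqNo;
        }
    }

    public static void main(String[] args) {
        PrimarySequencer primary = new PrimarySequencer();
        ShardCopy replicaA = new ShardCopy();
        ShardCopy replicaB = new ShardCopy();

        for (int i = 0; i < 10; i++) {
            long seqNo = primary.nextSeqNo();
            replicaA.apply(seqNo);
            if (i < 7) {          // simulate a partition: replicaB misses the last operations
                replicaB.apply(seqNo);
            }
        }

        // When promoting a new primary, the copy with the higher applied sequence number is
        // the "more recent" one -- the recency heuristic the discussion above calls tricky.
        System.out.println("replicaA max seq no: " + replicaA.maxAppliedSeqNo()); // 9
        System.out.println("replicaB max seq no: " + replicaB.maxAppliedSeqNo()); // 6
    }
}
```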
{ "body": "#14252 , #7572 , #15900, #12573, #14671, #15281 and #9126 have all been closed/merged and will be part of 5.0.0.\n", "number": 17586, "review_comments": [], "title": "Update resliency page" }
{ "commits": [ { "message": "Update resliency page\n\n#14252 , #7572 , #15900, #12573, #14671, #15281 and #9126 have all been closed/merged and will be part of 5.0.0." } ], "files": [ { "diff": "@@ -94,16 +94,35 @@ space. The following issues have been identified:\n \n Other safeguards are tracked in the meta-issue {GIT}11511[#11511].\n \n+\n+[float]\n+=== Relocating shards omitted by reporting infrastructure (STATUS: ONGOING)\n+\n+Indices stats and indices segments requests reach out to all nodes that have shards of that index. Shards that have relocated from a node\n+while the stats request arrives will make that part of the request fail and are just ignored in the overall stats result. {GIT}13719[#13719]\n+\n+[float]\n+=== Jepsen Test Failures (STATUS: ONGOING)\n+\n+We have increased our test coverage to include scenarios tested by Jepsen. We make heavy use of randomization to expand on the scenarios that can be tested and to introduce new error conditions. You can follow the work on the master branch of the https://github.com/elastic/elasticsearch/blob/master/core/src/test/java/org/elasticsearch/discovery/DiscoveryWithServiceDisruptionsIT.java[`DiscoveryWithServiceDisruptionsIT` class], where we will add more tests as time progresses.\n+\n+[float]\n+=== Document guarantees and handling of failure (STATUS: ONGOING)\n+\n+This status page is a start, but we can do a better job of explicitly documenting the processes at work in Elasticsearch, and what happens in the case of each type of failure. The plan is to have a test case that validates each behavior under simulated conditions. Every test will document the expected results, the associated test code and an explicit PASS or FAIL status for each simulated case.\n+\n+== Unreleased\n+\n [float]\n-=== Loss of documents during network partition (STATUS: ONGOING)\n+=== Loss of documents during network partition (STATUS: UNRELEASED, v5.0.0)\n \n If a network partition separates a node from the master, there is some window of time before the node detects it. The length of the window is dependent on the type of the partition. This window is extremely small if a socket is broken. More adversarial partitions, for example, silently dropping requests without breaking the socket can take longer (up to 3x30s using current defaults).\n \n If the node hosts a primary shard at the moment of partition, and ends up being isolated from the cluster (which could have resulted in {GIT}2488[split-brain] before), some documents that are being indexed into the primary may be lost if they fail to reach one of the allocated replicas (due to the partition) and that replica is later promoted to primary by the master ({GIT}7572[#7572]).\n To prevent this situation, the primary needs to wait for the master to acknowledge replica shard failures before acknowledging the write to the client. {GIT}14252[#14252]\n \n [float]\n-=== Safe primary relocations (STATUS: ONGOING)\n+=== Safe primary relocations (STATUS: UNRELEASED, v5.0.0)\n \n When primary relocation completes, a cluster state is propagated that deactivates the old primary and marks the new primary as active. As\n cluster state changes are not applied synchronously on all nodes, there can be a time interval where the relocation target has processed the\n@@ -117,23 +136,7 @@ on the relocation target, each of the nodes believes the other to be the active\n chasing the primary being quickly sent back and forth between the nodes, potentially making them both go OOM. 
{GIT}12573[#12573]\n \n [float]\n-=== Relocating shards omitted by reporting infrastructure (STATUS: ONGOING)\n-\n-Indices stats and indices segments requests reach out to all nodes that have shards of that index. Shards that have relocated from a node\n-while the stats request arrives will make that part of the request fail and are just ignored in the overall stats result. {GIT}13719[#13719]\n-\n-[float]\n-=== Jepsen Test Failures (STATUS: ONGOING)\n-\n-We have increased our test coverage to include scenarios tested by Jepsen. We make heavy use of randomization to expand on the scenarios that can be tested and to introduce new error conditions. You can follow the work on the master branch of the https://github.com/elastic/elasticsearch/blob/master/core/src/test/java/org/elasticsearch/discovery/DiscoveryWithServiceDisruptionsIT.java[`DiscoveryWithServiceDisruptionsIT` class], where we will add more tests as time progresses.\n-\n-[float]\n-=== Document guarantees and handling of failure (STATUS: ONGOING)\n-\n-This status page is a start, but we can do a better job of explicitly documenting the processes at work in Elasticsearch, and what happens in the case of each type of failure. The plan is to have a test case that validates each behavior under simulated conditions. Every test will document the expected results, the associated test code and an explicit PASS or FAIL status for each simulated case.\n-\n-[float]\n-=== Do not allow stale shards to automatically be promoted to primary (STATUS: ONGOING, v5.0.0)\n+=== Do not allow stale shards to automatically be promoted to primary (STATUS: UNRELEASED, v5.0.0)\n \n In some scenarios, after the loss of all valid copies, a stale replica shard can be automatically assigned as a primary, preferring old data\n to no data at all ({GIT}14671[#14671]). This can lead to a loss of acknowledged writes if the valid copies are not lost but are rather\n@@ -143,7 +146,7 @@ for one of the good shard copies to reappear. In case where all good copies are\n stale shard copy.\n \n [float]\n-=== Make index creation resilient to index closing and full cluster crashes (STATUS: ONGOING, v5.0.0)\n+=== Make index creation resilient to index closing and full cluster crashes (STATUS: UNRELEASED, v5.0.0)\n \n Recovering an index requires a quorum (with an exception for 2) of shard copies to be available to allocate a primary. This means that\n a primary cannot be assigned if the cluster dies before enough shards have been allocated ({GIT}9126[#9126]). The same happens if an index\n@@ -153,7 +156,6 @@ recover an index in the presence of a single shard copy. Allocation IDs can also\n but none of the shards have been started. If such an index was inadvertently closed before at least one shard could be started, a fresh\n shard will be allocated upon reopening the index.\n \n-== Unreleased\n \n [float]\n === Use two phase commit for Cluster State publishing (STATUS: UNRELEASED, v5.0.0)", "filename": "docs/resiliency/index.asciidoc", "status": "modified" } ] }
{ "body": "This issue reproduces in ES 2.3.\n\n``` http\nPOST /my-index/_bulk\n{\"index\":{\"_type\":\"type1\"}}\n{\"title\":\"I am just a string\"}\n{\"index\":{\"_type\":\"type2\"}}\n{\"title\":{\"field\":\"I am a string in an object\"}}\n\nPOST /my-index/_close\nPOST /my-index/_open\n```\n\nAfter opening the index, you'll get something like this in the logs:\n\n```\n[2016-04-06 10:24:20,677][WARN ][cluster.action.shard ] [Anthropomorpho] [my-index][0] received shard failed for target shard [[my-index][0], node[dC8bCuUMRHWqd0dmyqxgwg], [P], v[88], s[INITIALIZING], a[id=HdFXs5PdQI6iT4G5HFyF3w], unassigned_info[[reason=ALLOCATION_FAILED], at[2016-04-06T14:24:20.505Z], details[failed to update mappings, failure IllegalArgumentException[Field [title] is defined as a field in mapping [type1] but this name is already used for an object in other types]]]], indexUUID [LiIeIKE-RvSNnfaBUi9NWw], message [failed to update mappings], failure [IllegalArgumentException[Field [title] is defined as a field in mapping [type1] but this name is already used for an object in other types]]\njava.lang.IllegalArgumentException: Field [title] is defined as a field in mapping [type1] but this name is already used for an object in other types\n at org.elasticsearch.index.mapper.MapperService.checkObjectsCompatibility(MapperService.java:473)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:336)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:289)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.processMapping(IndicesClusterStateService.java:387)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyMappings(IndicesClusterStateService.java:348)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:164)\n at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:610)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n```\n\nRelates to #15243\n", "comments": [ { "body": "Closed by #17568\n", "created_at": "2016-04-07T08:37:16Z" }, { "body": "I got this bug on production today, with mapping creation based on templates (logstash :heart:).\n\nI had a field which was a string in 99% of my logs, and another \"type\", only getting event once an hour, with this field as an object - I didn't notice, as everything was working smoothly for weeks. Until we decided to install Marvel :neckbeard: \n\nMarvel is fine, but we restarded the cluster to start the agent... our 70+ shards were not able to start anymore! IllegalArgumentException on all shards, about this field not mapped properly.\n\nI had to close all the indexes, no more logs history :sob: and since 2.0 removed the possibility to delete a Type, no possibility to clean up those bad indexes, I'm stuck with gigabyte of data I can't use because shard won't boot. 
Anyway.\n\nGlad this is now fixed, and waiting for ES 2.3.2 :neckbeard: :sparkling_heart: \n", "created_at": "2016-04-14T18:07:13Z" }, { "body": "Is there any way to recovery the data of an specific type inside of such an index?\n", "created_at": "2016-07-26T12:32:53Z" } ], "number": 17567, "title": "Conflicted Mapping causes problems during shard initialization" }
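The issue above is an ordering problem: a leaf field and an object with the same name in different types only conflict in one insertion order, so a mapping that was accepted at index time can fail to re-apply when the shard initializes. The sketch below is a hedged, self-contained illustration of the symmetric uniqueness check the fix that follows introduces; the `FieldObjectUniquenessSketch` class and its names are invented for the example and are not the MapperService implementation.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Track which names are already used as leaf fields and which as objects across all types,
// and reject a new mapping if it reuses a name with the other kind, regardless of order.
public class FieldObjectUniquenessSketch {

    enum Kind { FIELD, OBJECT }

    private final Map<String, Kind> usedNames = new HashMap<>();

    void addMapping(String type, Map<String, Kind> newNames) {
        // validate first so a failed update leaves no partial state behind
        for (Map.Entry<String, Kind> entry : newNames.entrySet()) {
            Kind existing = usedNames.get(entry.getKey());
            if (existing != null && existing != entry.getValue()) {
                throw new IllegalArgumentException("[" + entry.getKey() + "] is defined as a "
                    + entry.getValue().name().toLowerCase() + " in mapping [" + type
                    + "] but this name is already used for a " + existing.name().toLowerCase()
                    + " in other types");
            }
        }
        usedNames.putAll(newNames);
    }

    public static void main(String[] args) {
        FieldObjectUniquenessSketch mappings = new FieldObjectUniquenessSketch();
        mappings.addMapping("type1", Collections.singletonMap("title", Kind.FIELD));
        // With a symmetric check, adding the object after the field fails too, in whichever
        // order the mappings happen to be parsed at recovery time.
        mappings.addMapping("type2", Collections.singletonMap("title", Kind.OBJECT)); // throws
    }
}
```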
{ "body": "Today we fail if you try to add a field and an object from another type already\nhas the same name. However, we do NOT fail if you insert the field first and the\nobject afterwards. This leads to bad bugs since mappings are not necessarily\nparsed in the same order at recovery time, so a mapping update could succeed and\nthen you would fail to reopen the index.\n\nCloses #17567\n", "number": 17568, "review_comments": [ { "body": "I removed it because we already call checkFieldUniqueness from merge(), which also callo checkObjectsCompatibility.\n", "created_at": "2016-04-06T17:38:04Z" } ], "title": "Fail if an object is added after a field with the same name." }
{ "commits": [ { "message": "Fail if an object is added after a field with the same name. #17568\n\nToday we fail if you try to add a field and an object from another type already\nhas the same name. However, we do NOT fail if you insert the field first and the\nobject afterwards. This leads to bad bugs since mappings are not necessarily\nparsed in the same order at recovery time, so a mapping update could succeed and\nthen you would fail to reopen the index." } ], "files": [ { "diff": "@@ -361,6 +361,9 @@ private boolean assertSerialization(DocumentMapper mapper) {\n }\n \n private void checkFieldUniqueness(String type, Collection<ObjectMapper> objectMappers, Collection<FieldMapper> fieldMappers) {\n+ assert Thread.holdsLock(this);\n+\n+ // first check within mapping\n final Set<String> objectFullNames = new HashSet<>();\n for (ObjectMapper objectMapper : objectMappers) {\n final String fullPath = objectMapper.fullPath();\n@@ -378,13 +381,26 @@ private void checkFieldUniqueness(String type, Collection<ObjectMapper> objectMa\n throw new IllegalArgumentException(\"Field [\" + name + \"] is defined twice in [\" + type + \"]\");\n }\n }\n+\n+ // then check other types\n+ for (String fieldName : fieldNames) {\n+ if (fullPathObjectMappers.containsKey(fieldName)) {\n+ throw new IllegalArgumentException(\"[\" + fieldName + \"] is defined as a field in mapping [\" + type\n+ + \"] but this name is already used for an object in other types\");\n+ }\n+ }\n+\n+ for (String objectPath : objectFullNames) {\n+ if (fieldTypes.get(objectPath) != null) {\n+ throw new IllegalArgumentException(\"[\" + objectPath + \"] is defined as an object in mapping [\" + type\n+ + \"] but this name is already used for a field in other types\");\n+ }\n+ }\n }\n \n private void checkObjectsCompatibility(String type, Collection<ObjectMapper> objectMappers, Collection<FieldMapper> fieldMappers, boolean updateAllTypes) {\n assert Thread.holdsLock(this);\n \n- checkFieldUniqueness(type, objectMappers, fieldMappers);\n-\n for (ObjectMapper newObjectMapper : objectMappers) {\n ObjectMapper existingObjectMapper = fullPathObjectMappers.get(newObjectMapper.fullPath());\n if (existingObjectMapper != null) {\n@@ -393,12 +409,6 @@ private void checkObjectsCompatibility(String type, Collection<ObjectMapper> obj\n existingObjectMapper.merge(newObjectMapper, updateAllTypes);\n }\n }\n-\n- for (FieldMapper fieldMapper : fieldMappers) {\n- if (fullPathObjectMappers.containsKey(fieldMapper.name())) {\n- throw new IllegalArgumentException(\"Field [\" + fieldMapper.name() + \"] is defined as a field in mapping [\" + type + \"] but this name is already used for an object in other types\");\n- }\n- }\n }\n \n private void checkNestedFieldsLimit(Map<String, ObjectMapper> fullPathObjectMappers) {", "filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperService.java", "status": "modified" }, { "diff": "@@ -29,6 +29,7 @@\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.mapper.MapperService.MergeReason;\n import org.elasticsearch.index.mapper.core.LongFieldMapper;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n@@ -304,4 +305,37 @@ public void testDefaultApplied() throws IOException {\n assertNotNull(response.getMappings().get(\"test2\").get(\"type\").getSourceAsMap().get(\"_timestamp\"));\n 
assertTrue((Boolean)((LinkedHashMap)response.getMappings().get(\"test2\").get(\"type\").getSourceAsMap().get(\"_timestamp\")).get(\"enabled\"));\n }\n+\n+ public void testRejectFieldDefinedTwice() throws IOException {\n+ String mapping1 = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type1\")\n+ .startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"object\")\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+ String mapping2 = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type2\")\n+ .startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"long\")\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ MapperService mapperService1 = createIndex(\"test1\").mapperService();\n+ mapperService1.merge(\"type1\", new CompressedXContent(mapping1), MergeReason.MAPPING_UPDATE, false);\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n+ () -> mapperService1.merge(\"type2\", new CompressedXContent(mapping2), MergeReason.MAPPING_UPDATE, false));\n+ assertThat(e.getMessage(), equalTo(\"[foo] is defined as a field in mapping [type2\"\n+ + \"] but this name is already used for an object in other types\"));\n+\n+ MapperService mapperService2 = createIndex(\"test2\").mapperService();\n+ mapperService2.merge(\"type2\", new CompressedXContent(mapping2), MergeReason.MAPPING_UPDATE, false);\n+ e = expectThrows(IllegalArgumentException.class,\n+ () -> mapperService2.merge(\"type1\", new CompressedXContent(mapping1), MergeReason.MAPPING_UPDATE, false));\n+ assertThat(e.getMessage(), equalTo(\"[foo] is defined as an object in mapping [type1\"\n+ + \"] but this name is already used for a field in other types\"));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/update/UpdateMappingTests.java", "status": "modified" }, { "diff": "@@ -663,7 +663,7 @@ public void testParsingExceptionIfFieldDoesNotExist() throws Exception {\n ensureYellow();\n int numDocs = 2;\n client().index(\n- indexRequest(\"test\").type(\"type1\").source(\n+ indexRequest(\"test\").type(\"type\").source(\n jsonBuilder().startObject().field(\"test\", \"value\").startObject(\"geo\").field(\"lat\", 1).field(\"lon\", 2).endObject()\n .endObject())).actionGet();\n refresh();\n@@ -674,7 +674,7 @@ public void testParsingExceptionIfFieldDoesNotExist() throws Exception {\n searchRequest().searchType(SearchType.QUERY_THEN_FETCH).source(\n searchSource()\n .size(numDocs)\n- .query(functionScoreQuery(termQuery(\"test\", \"value\"), linearDecayFunction(\"type1.geo\", lonlat, \"1000km\"))\n+ .query(functionScoreQuery(termQuery(\"test\", \"value\"), linearDecayFunction(\"type.geo\", lonlat, \"1000km\"))\n .scoreMode(FiltersFunctionScoreQuery.ScoreMode.MULTIPLY))));\n try {\n response.actionGet();", "filename": "core/src/test/java/org/elasticsearch/search/functionscore/DecayFunctionScoreIT.java", "status": "modified" } ] }
{ "body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue.\n-->\n\n<!--\nIf you are filing a bug report, please remove the below feature\nrequest block and provide responses for all of the below items.\n-->\n\n**Elasticsearch version**: master\n\n**JVM version**: 1.8.0_74\n\n**OS version**: Win 10\n\n**Description of the problem including expected versus actual behavior**:\n\nMoving average throws a NPE when the optional `window` parameter isn't specified.\n\n**Steps to reproduce**:\n\n```\n{\n \"size\": 0,\n \"aggs\": {\n \"projects_started_per_month\": {\n \"date_histogram\": {\n \"field\": \"startedOn\",\n \"interval\": \"month\"\n },\n \"aggs\": {\n \"commits\": {\n \"sum\": {\n \"field\": \"numberOfCommits\"\n }\n },\n \"commits_moving_avg\": {\n \"moving_avg\": {\n \"buckets_path\": \"commits\",\n \"gap_policy\": \"insert_zeros\",\n \"model\": \"linear\"\n }\n }\n }\n }\n }\n}\n```\n\nResponse:\n\n```\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"null_pointer_exception\",\n \"reason\" : null\n } ],\n \"type\" : \"null_pointer_exception\",\n \"reason\" : null\n },\n \"status\" : 500\n}\n```\n\nChanging the above request and specifying a window returns the expected response.\n\n**Provide logs (if relevant)**:\n\n```\njava.lang.NullPointerException\n at org.elasticsearch.search.aggregations.pipeline.movavg.MovAvgParser.parse(MovAvgParser.java:166)\n at org.elasticsearch.search.aggregations.pipeline.movavg.MovAvgParser.parse(MovAvgParser.java:38)\n at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:204)\n at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:185)\n at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:109)\n at org.elasticsearch.search.builder.SearchSourceBuilder.parseXContent(SearchSourceBuilder.java:864)\n at org.elasticsearch.rest.action.search.RestSearchAction.parseSearchRequest(RestSearchAction.java:133)\n at org.elasticsearch.rest.action.search.RestSearchAction.handleRequest(RestSearchAction.java:95)\n at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:51)\n at org.elasticsearch.rest.RestController.executeHandler(RestController.java:214)\n at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:174)\n at org.elasticsearch.http.HttpServer.dispatchRequest(HttpServer.java:101)\n at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:487)\n at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:65)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:85)\n at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at 
org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)\n at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)\n at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)\n at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:83)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)\n at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)\n at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)\n at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)\n at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)\n at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)\n at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)\n at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n```\n\n<!--\nIf you are filing a feature request, please remove the above bug\nreport block and provide responses for all of the below items.\n-->\n", "comments": [ { "body": "@gmarz thanks for raising this. I've opened a pull request to fix it: https://github.com/elastic/elasticsearch/pull/17556\n", "created_at": "2016-04-06T07:42:28Z" } ], "number": 17516, "title": "NullPointerException in moving average agg when window isn't specified" }
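The NPE above comes from building the model from a value that is null whenever the request omits `window`, instead of from a value that already carries the default. The sketch below only illustrates that defaulting pattern, which the one-line parser change in the PR that follows applies; `WindowDefaultSketch`, `MovAvgBuilder`, and the default of 5 are invented for the example and are not the real parser classes or defaults.

```java
// Keep the raw parsed value nullable, but always read the window back from the builder,
// which holds a default until explicitly overridden.
public class WindowDefaultSketch {

    private static final int DEFAULT_WINDOW = 5;

    /** Stand-in for the aggregation builder: stores the default until overridden. */
    static final class MovAvgBuilder {
        private int window = DEFAULT_WINDOW;

        MovAvgBuilder window(int window) {
            this.window = window;
            return this;
        }

        int window() {
            return window;
        }
    }

    static int buildModelWindow(Integer parsedWindow) {
        MovAvgBuilder builder = new MovAvgBuilder();
        if (parsedWindow != null) {
            builder.window(parsedWindow);
        }
        // Buggy version (NPE when "window" is absent):   int w = parsedWindow;
        // Fixed version: read it back from the builder, which always holds a value.
        return builder.window();
    }

    public static void main(String[] args) {
        System.out.println(buildModelWindow(30));   // 30
        System.out.println(buildModelWindow(null)); // 5, no NullPointerException
    }
}
```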
{ "body": "This PR fixes a bug where a NPE was thrown when parsing a moving average pipeline aggregation request which did not specify a window size.\n\nCloses #17516\n", "number": 17556, "review_comments": [], "title": "Fixes NPE when no window is specified in moving average request" }
{ "commits": [ { "message": "Aggregations: Fixes NPE when no window is specified in moving average request\n\nThis PR fixes a bug where a NPE was thrown when parsing a moving average pipeline aggregation request which did not specify a window size.\n\nCloses #17516" } ], "files": [ { "diff": "@@ -163,7 +163,7 @@ public MovAvgPipelineAggregatorBuilder parse(String pipelineAggregatorName, XCon\n \n MovAvgModel movAvgModel;\n try {\n- movAvgModel = modelParser.parse(settings, pipelineAggregatorName, window, context.parseFieldMatcher());\n+ movAvgModel = modelParser.parse(settings, pipelineAggregatorName, factory.window(), context.parseFieldMatcher());\n } catch (ParseException exception) {\n throw new ParsingException(parser.getTokenLocation(), \"Could not parse settings for model [\" + model + \"].\", exception);\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgParser.java", "status": "modified" }, { "diff": "@@ -102,9 +102,9 @@ protected static String[] getCurrentTypes() {\n \n private static NamedWriteableRegistry namedWriteableRegistry;\n \n- private static AggregatorParsers aggParsers;\n- private static ParseFieldMatcher parseFieldMatcher;\n- private static IndicesQueriesRegistry queriesRegistry;\n+ protected static AggregatorParsers aggParsers;\n+ protected static ParseFieldMatcher parseFieldMatcher;\n+ protected static IndicesQueriesRegistry queriesRegistry;\n \n protected abstract AF createTestAggregatorFactory();\n ", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/BasePipelineAggregationTestCase.java", "status": "modified" }, { "diff": "@@ -19,7 +19,11 @@\n \n package org.elasticsearch.search.aggregations.pipeline.moving.avg;\n \n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.search.aggregations.BasePipelineAggregationTestCase;\n+import org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilder;\n import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy;\n import org.elasticsearch.search.aggregations.pipeline.movavg.MovAvgPipelineAggregatorBuilder;\n import org.elasticsearch.search.aggregations.pipeline.movavg.models.EwmaModel;\n@@ -92,4 +96,35 @@ protected MovAvgPipelineAggregatorBuilder createTestAggregatorFactory() {\n return factory;\n }\n \n+ public void testDefaultParsing() throws Exception {\n+ MovAvgPipelineAggregatorBuilder expected = new MovAvgPipelineAggregatorBuilder(\"commits_moving_avg\", \"commits\");\n+ String json = \"{\" +\n+ \" \\\"commits_moving_avg\\\": {\" +\n+ \" \\\"moving_avg\\\": {\" +\n+ \" \\\"buckets_path\\\": \\\"commits\\\"\" +\n+ \" }\" +\n+ \" }\" +\n+ \"}\";\n+ XContentParser parser = XContentFactory.xContent(json).createParser(json);\n+ QueryParseContext parseContext = new QueryParseContext(queriesRegistry);\n+ parseContext.reset(parser);\n+ parseContext.parseFieldMatcher(parseFieldMatcher);\n+ assertSame(XContentParser.Token.START_OBJECT, parser.nextToken());\n+ assertSame(XContentParser.Token.FIELD_NAME, parser.nextToken());\n+ assertEquals(expected.name(), parser.currentName());\n+ assertSame(XContentParser.Token.START_OBJECT, parser.nextToken());\n+ assertSame(XContentParser.Token.FIELD_NAME, parser.nextToken());\n+ assertEquals(expected.type(), parser.currentName());\n+ assertSame(XContentParser.Token.START_OBJECT, parser.nextToken());\n+ PipelineAggregatorBuilder<?> newAgg = 
aggParsers.pipelineAggregator(expected.getWriteableName()).parse(expected.name(), parser,\n+ parseContext);\n+ assertSame(XContentParser.Token.END_OBJECT, parser.currentToken());\n+ assertSame(XContentParser.Token.END_OBJECT, parser.nextToken());\n+ assertSame(XContentParser.Token.END_OBJECT, parser.nextToken());\n+ assertNull(parser.nextToken());\n+ assertNotNull(newAgg);\n+ assertNotSame(newAgg, expected);\n+ assertEquals(expected, newAgg);\n+ assertEquals(expected.hashCode(), newAgg.hashCode());\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/pipeline/moving/avg/MovAvgTests.java", "status": "modified" } ] }
{ "body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue.\n-->\n\n<!--\nIf you are filing a bug report, please remove the below feature\nrequest block and provide responses for all of the below items.\n-->\n\n**Elasticsearch version**: master\n\n**JVM version**: 1.8.0_74\n\n**OS version**: Win 10\n\n**Description of the problem including expected versus actual behavior**:\n\nNot really a typical use case, but an issue nonetheless caught by our integration tests. Specifying no filter in a filter agg results in a NPE.\n\n**Steps to reproduce**:\n\n```\n{\n \"aggs\": {\n \"empty_filter\": {\n \"filter\": {}\n }\n }\n}\n```\n\nResponse:\n\n```\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"null_pointer_exception\",\n \"reason\" : null\n } ],\n \"type\" : \"search_phase_execution_exception\",\n \"reason\" : \"all shards failed\",\n \"phase\" : \"query\",\n \"grouped\" : true,\n \"failed_shards\" : [ {\n \"shard\" : 0,\n \"index\" : \"project\",\n \"node\" : \"9QsiljMeTtKbZ2FijGofvw\",\n \"reason\" : {\n \"type\" : \"null_pointer_exception\",\n \"reason\" : null\n }\n } ],\n \"caused_by\" : {\n \"type\" : \"null_pointer_exception\",\n \"reason\" : null\n }\n },\n \"status\" : 500\n}\n```\n\n**Provide logs (if relevant)**:\n\n```\nFailed to execute [org.elasticsearch.action.search.SearchRequest@3218e268]\nRemoteTransportException[[readonly-node-55e651][127.0.0.1:9300][indices:data/read/search[phase/query]]]; nested: NullPointerException;\nCaused by: java.lang.NullPointerException\n at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:684)\n at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:734)\n at org.elasticsearch.search.internal.ContextIndexSearcher.createNormalizedWeight(ContextIndexSearcher.java:107)\n at org.elasticsearch.search.aggregations.bucket.filter.FilterAggregatorFactory.<init>(FilterAggregatorFactory.java:46)\n at org.elasticsearch.search.aggregations.bucket.filter.FilterAggregatorBuilder.doBuild(FilterAggregatorBuilder.java:60)\n at org.elasticsearch.search.aggregations.AggregatorBuilder.build(AggregatorBuilder.java:120)\n at org.elasticsearch.search.aggregations.AggregatorFactories$Builder.build(AggregatorFactories.java:171)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:709)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:577)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:526)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:277)\n at org.elasticsearch.search.action.SearchTransportService$SearchQueryTransportHandler.messageReceived(SearchTransportService.java:369)\n at org.elasticsearch.search.action.SearchTransportService$SearchQueryTransportHandler.messageReceived(SearchTransportService.java:366)\n at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:65)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:468)\n at 
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n```\n\n<!--\nIf you are filing a feature request, please remove the above bug\nreport block and provide responses for all of the below items.\n-->\n", "comments": [ { "body": "@gmarz Thanks for flagging this up. I have opened #17542 to fix this.\n", "created_at": "2016-04-05T15:18:13Z" } ], "number": 17518, "title": "Filter aggregation with no filter results in a NullPointerException" }
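The failure above happens because an empty filter body (`{}`) ends up as a query the searcher cannot rewrite. The sketch below is a hedged, standalone illustration of the substitution the fix that follows makes (treat the empty query as match-all at construction time); the tiny `Query`/`MatchAllQuery`/`EmptyQuery` types are placeholders, not the Elasticsearch QueryBuilder hierarchy.

```java
// Defensive constructor pattern: never keep an "empty" query around; replace it with a query
// that matches every document so downstream code always has something executable.
public class EmptyFilterSketch {

    interface Query {
        boolean matches(String doc);
    }

    static final class MatchAllQuery implements Query {
        public boolean matches(String doc) {
            return true;
        }
    }

    /** Represents "{}" in the request: nothing to match against. */
    static final class EmptyQuery implements Query {
        public boolean matches(String doc) {
            throw new UnsupportedOperationException("an empty query cannot be executed");
        }
    }

    static Query normalize(Query filter) {
        if (filter == null) {
            throw new IllegalArgumentException("[filter] must not be null");
        }
        return (filter instanceof EmptyQuery) ? new MatchAllQuery() : filter;
    }

    public static void main(String[] args) {
        Query filter = normalize(new EmptyQuery());
        // Every document falls into the single bucket, mirroring the doc-count assertions
        // in the tests added by the fix.
        System.out.println(filter.matches("doc-1")); // true
    }
}
```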
{ "body": "This fix ensures the filter and filters aggregation will not throw a NPE when `{}` is passed in as a filter. Instead `{}` is interpreted as a MatchAllDocsQuery.\n\nCloses #17518\n", "number": 17542, "review_comments": [], "title": "Fixes Filter and FiltersAggregation to work with empty query" }
{ "commits": [ { "message": "Aggregations: Fixes Filter and FiltersAggregation to work with empty query\n\nThis fix ensures the filter and filters aggregation will not throw a NPE when `{}` is passed in as a filter. Instead `{}` is interpreted as a MatchAllDocsQuery.\n\nCloses #17518" } ], "files": [ { "diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.query.EmptyQueryBuilder;\n+import org.elasticsearch.index.query.MatchAllQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.search.aggregations.AggregatorBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n@@ -51,7 +52,11 @@ public FilterAggregatorBuilder(String name, QueryBuilder<?> filter) {\n if (filter == null) {\n throw new IllegalArgumentException(\"[filter] must not be null: [\" + name + \"]\");\n }\n- this.filter = filter;\n+ if (filter instanceof EmptyQueryBuilder) {\n+ this.filter = new MatchAllQueryBuilder();\n+ } else {\n+ this.filter = filter;\n+ }\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregatorBuilder.java", "status": "modified" }, { "diff": "@@ -20,7 +20,6 @@\n \n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.index.query.MatchAllQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.search.aggregations.Aggregator;\n@@ -45,9 +44,7 @@ public FilterAggregatorBuilder parse(String aggregationName, XContentParser pars\n throw new ParsingException(null, \"filter cannot be null in filter aggregation [{}]\", aggregationName);\n }\n \n- FilterAggregatorBuilder factory = new FilterAggregatorBuilder(aggregationName,\n- filter == null ? 
new MatchAllQueryBuilder() : filter);\n- return factory;\n+ return new FilterAggregatorBuilder(aggregationName, filter);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterParser.java", "status": "modified" }, { "diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.query.EmptyQueryBuilder;\n+import org.elasticsearch.index.query.MatchAllQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n@@ -70,7 +71,11 @@ public KeyedFilter(String key, QueryBuilder<?> filter) {\n throw new IllegalArgumentException(\"[filter] must not be null\");\n }\n this.key = key;\n- this.filter = filter;\n+ if (filter instanceof EmptyQueryBuilder) {\n+ this.filter = new MatchAllQueryBuilder();\n+ } else {\n+ this.filter = filter;\n+ }\n }\n \n public String key() {", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregator.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.query.BoolQueryBuilder;\n+import org.elasticsearch.index.query.EmptyQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n@@ -108,7 +109,18 @@ public void testSimple() throws Exception {\n // See NullPointer issue when filters are empty:\n // https://github.com/elastic/elasticsearch/issues/8438\n public void testEmptyFilterDeclarations() throws Exception {\n- QueryBuilder emptyFilter = new BoolQueryBuilder();\n+ QueryBuilder<?> emptyFilter = new BoolQueryBuilder();\n+ SearchResponse response = client().prepareSearch(\"idx\").addAggregation(filter(\"tag1\", emptyFilter)).execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ Filter filter = response.getAggregations().get(\"tag1\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getDocCount(), equalTo((long) numDocs));\n+ }\n+\n+ public void testEmptyFilter() throws Exception {\n+ QueryBuilder<?> emptyFilter = new EmptyQueryBuilder();\n SearchResponse response = client().prepareSearch(\"idx\").addAggregation(filter(\"tag1\", emptyFilter)).execute().actionGet();\n \n assertSearchResponse(response);", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/FilterIT.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.query.BoolQueryBuilder;\n+import org.elasticsearch.index.query.EmptyQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.search.aggregations.bucket.filters.Filters;\n import org.elasticsearch.search.aggregations.bucket.filters.FiltersAggregator.KeyedFilter;\n@@ -201,6 +202,32 @@ public void testWithSubAggregation() throws Exception {\n assertThat((double) propertiesCounts[1], equalTo((double) sum / numTag2Docs));\n }\n \n+ public void testEmptyFilter() throws Exception {\n+ QueryBuilder<?> emptyFilter = new EmptyQueryBuilder();\n+ SearchResponse response = 
client().prepareSearch(\"idx\").addAggregation(filters(\"tag1\", emptyFilter)).execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ Filters filter = response.getAggregations().get(\"tag1\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getBuckets().size(), equalTo(1));\n+ assertThat(filter.getBuckets().get(0).getDocCount(), equalTo((long) numDocs));\n+ }\n+\n+ public void testEmptyKeyedFilter() throws Exception {\n+ QueryBuilder<?> emptyFilter = new EmptyQueryBuilder();\n+ SearchResponse response = client().prepareSearch(\"idx\").addAggregation(filters(\"tag1\", new KeyedFilter(\"foo\", emptyFilter)))\n+ .execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ Filters filter = response.getAggregations().get(\"tag1\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getBuckets().size(), equalTo(1));\n+ assertThat(filter.getBuckets().get(0).getKey(), equalTo(\"foo\"));\n+ assertThat(filter.getBuckets().get(0).getDocCount(), equalTo((long) numDocs));\n+ }\n+\n public void testAsSubAggregation() {\n SearchResponse response = client().prepareSearch(\"idx\")\n .addAggregation(", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/FiltersIT.java", "status": "modified" } ] }
{ "body": "CORS headers and methods config parameters must be read as arrays. This\ncommit fixes the issue. It affects http.cors.allow-methods and\nhttp.cors.allow-headers.\n\nFixes #17483 \n", "comments": [ { "body": "@spinscale Do you have spare cycles to review?\n", "created_at": "2016-04-05T21:10:09Z" }, { "body": "this LGTM\n", "created_at": "2016-04-07T17:03:25Z" }, { "body": "Closed by Commit [763a659](https://github.com/elastic/elasticsearch/commit/763a659830d5011a4e944b5f863bc3c8b7e0382e)\n", "created_at": "2016-04-07T19:56:56Z" } ], "number": 17523, "title": "Fixes reading of CORS pre-flight headers and methods" }
{ "body": "CORS headers and methods config parameters must be read as arrays. This\ncommit fixes the issue. It affects http.cors.allow-methods and\nhttp.cors.allow-headers.\n\nBackports #17523\n", "number": 17525, "review_comments": [], "title": "Fixes reading of CORS pre-flight headers and methods" }
{ "commits": [ { "message": "Fixes reading of CORS pre-flight headers and methods\n\nCORS headers and methods config parameters must be read as arrays. This\ncommit fixes the issue. It affects http.cors.allow-methods and\nhttp.cors.allow-headers.\n\nBackports #17523" } ], "files": [ { "diff": "@@ -59,6 +59,7 @@\n import org.jboss.netty.handler.codec.http.HttpMethod;\n import org.jboss.netty.handler.codec.http.HttpRequestDecoder;\n import org.jboss.netty.handler.timeout.ReadTimeoutException;\n+import sun.security.util.Length;\n \n import java.io.IOException;\n import java.net.InetAddress;\n@@ -105,8 +106,8 @@ public class NettyHttpServerTransport extends AbstractLifecycleComponent<HttpSer\n public static final int DEFAULT_SETTING_PIPELINING_MAX_EVENTS = 10000;\n public static final String DEFAULT_PORT_RANGE = \"9200-9300\";\n \n- private static final String DEFAULT_CORS_METHODS = \"OPTIONS, HEAD, GET, POST, PUT, DELETE\";\n- private static final String DEFAULT_CORS_HEADERS = \"X-Requested-With, Content-Type, Content-Length\";\n+ private static final String[] DEFAULT_CORS_METHODS = { \"OPTIONS\", \"HEAD\", \"GET\", \"POST\", \"PUT\", \"DELETE\" };\n+ private static final String[] DEFAULT_CORS_HEADERS = { \"X-Requested-With\", \"Content-Type\", \"Content-Length\" };\n private static final int DEFAULT_CORS_MAX_AGE = 1728000;\n \n protected final NetworkService networkService;\n@@ -353,14 +354,14 @@ private CorsConfig buildCorsConfig(Settings settings) {\n if (settings.getAsBoolean(SETTING_CORS_ALLOW_CREDENTIALS, false)) {\n builder.allowCredentials();\n }\n- String[] strMethods = settings.getAsArray(settings.get(SETTING_CORS_ALLOW_METHODS, DEFAULT_CORS_METHODS), new String[0]);\n+ String[] strMethods = settings.getAsArray(SETTING_CORS_ALLOW_METHODS, DEFAULT_CORS_METHODS);\n HttpMethod[] methods = new HttpMethod[strMethods.length];\n for (int i = 0; i < methods.length; i++) {\n methods[i] = HttpMethod.valueOf(strMethods[i]);\n }\n return builder.allowedRequestMethods(methods)\n .maxAge(settings.getAsInt(SETTING_CORS_MAX_AGE, DEFAULT_CORS_MAX_AGE))\n- .allowedRequestHeaders(settings.getAsArray(settings.get(SETTING_CORS_ALLOW_HEADERS, DEFAULT_CORS_HEADERS), new String[0]))\n+ .allowedRequestHeaders(settings.getAsArray(SETTING_CORS_ALLOW_HEADERS, DEFAULT_CORS_HEADERS))\n .shortCircuit()\n .build();\n }", "filename": "core/src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java", "status": "modified" }, { "diff": "@@ -31,6 +31,7 @@\n import org.jboss.netty.handler.codec.http.HttpRequest;\n import org.jboss.netty.handler.codec.http.HttpResponse;\n \n+import java.util.HashSet;\n import java.util.Iterator;\n import java.util.Set;\n \n@@ -214,18 +215,11 @@ private static boolean isPreflightRequest(final HttpRequest request) {\n }\n \n private void setAllowMethods(final HttpResponse response) {\n- Set<HttpMethod> methods = config.allowedRequestMethods();\n- Iterator<HttpMethod> iter = methods.iterator();\n- final int size = methods.size();\n- int count = 0;\n- StringBuilder buf = new StringBuilder();\n- while (iter.hasNext()) {\n- buf.append(iter.next().getName().trim());\n- if (++count < size) {\n- buf.append(\", \");\n- }\n+ Set<String> strMethods = new HashSet<>();\n+ for (HttpMethod method : config.allowedRequestMethods()) {\n+ strMethods.add(method.getName().trim());\n }\n- response.headers().set(ACCESS_CONTROL_ALLOW_METHODS, buf.toString());\n+ response.headers().set(ACCESS_CONTROL_ALLOW_METHODS, strMethods);\n }\n \n private void setAllowHeaders(final HttpResponse response) {", 
"filename": "core/src/main/java/org/elasticsearch/http/netty/cors/CorsHandler.java", "status": "modified" }, { "diff": "@@ -0,0 +1,97 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.http.netty;\n+\n+import org.elasticsearch.cache.recycler.MockPageCacheRecycler;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.network.NetworkService;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.MockBigArrays;\n+import org.elasticsearch.http.netty.cors.CorsConfig;\n+import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.jboss.netty.handler.codec.http.HttpMethod;\n+import org.junit.After;\n+import org.junit.Before;\n+import org.junit.Test;\n+\n+import java.util.Arrays;\n+import java.util.HashSet;\n+import java.util.Set;\n+\n+import static org.elasticsearch.http.netty.NettyHttpServerTransport.SETTING_CORS_ALLOW_CREDENTIALS;\n+import static org.elasticsearch.http.netty.NettyHttpServerTransport.SETTING_CORS_ALLOW_HEADERS;\n+import static org.elasticsearch.http.netty.NettyHttpServerTransport.SETTING_CORS_ALLOW_METHODS;\n+import static org.elasticsearch.http.netty.NettyHttpServerTransport.SETTING_CORS_ALLOW_ORIGIN;\n+import static org.elasticsearch.http.netty.NettyHttpServerTransport.SETTING_CORS_ENABLED;\n+import static org.hamcrest.Matchers.equalTo;\n+\n+/**\n+ * Tests for the {@link NettyHttpServerTransport} class.\n+ */\n+public class NettyHttpServerTransportTests extends ESTestCase {\n+ private NetworkService networkService;\n+ private ThreadPool threadPool;\n+ private MockPageCacheRecycler mockPageCacheRecycler;\n+ private MockBigArrays bigArrays;\n+\n+ @Before\n+ public void setup() throws Exception {\n+ networkService = new NetworkService(Settings.EMPTY);\n+ threadPool = new ThreadPool(\"test\");\n+ mockPageCacheRecycler = new MockPageCacheRecycler(Settings.EMPTY, threadPool);\n+ bigArrays = new MockBigArrays(mockPageCacheRecycler, new NoneCircuitBreakerService());\n+ }\n+\n+ @After\n+ public void shutdown() throws Exception {\n+ if (threadPool != null) {\n+ threadPool.shutdownNow();\n+ }\n+ threadPool = null;\n+ networkService = null;\n+ mockPageCacheRecycler = null;\n+ bigArrays = null;\n+ }\n+\n+ @Test\n+ public void testCorsConfig() throws Exception {\n+ final Set<String> methods = new HashSet<>(Arrays.asList(\"get\", \"options\", \"post\"));\n+ final Set<String> headers = new HashSet<>(Arrays.asList(\"Content-Type\", \"Content-Length\"));\n+ final Settings settings = Settings.builder()\n+ .put(SETTING_CORS_ENABLED, true)\n+ .put(SETTING_CORS_ALLOW_ORIGIN, \"*\")\n+ .put(SETTING_CORS_ALLOW_METHODS, 
Strings.collectionToCommaDelimitedString(methods))\n+ .put(SETTING_CORS_ALLOW_HEADERS, Strings.collectionToCommaDelimitedString(headers))\n+ .put(SETTING_CORS_ALLOW_CREDENTIALS, true)\n+ .build();\n+ final NettyHttpServerTransport transport = new NettyHttpServerTransport(settings, networkService, bigArrays);\n+ final CorsConfig corsConfig = transport.getCorsConfig();\n+ assertThat(corsConfig.isAnyOriginSupported(), equalTo(true));\n+ assertThat(corsConfig.allowedRequestHeaders(), equalTo(headers));\n+ final Set<String> allowedRequestMethods = new HashSet<>();\n+ for (HttpMethod method : corsConfig.allowedRequestMethods()) {\n+ allowedRequestMethods.add(method.getName());\n+ }\n+ assertThat(allowedRequestMethods, equalTo(methods));\n+ transport.close();\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/http/netty/NettyHttpServerTransportTests.java", "status": "added" } ] }
{ "body": "CORS headers and methods config parameters must be read as arrays. This\ncommit fixes the issue. It affects http.cors.allow-methods and\nhttp.cors.allow-headers.\n\nFixes #17483 \n", "comments": [ { "body": "@spinscale Do you have spare cycles to review?\n", "created_at": "2016-04-05T21:10:09Z" }, { "body": "this LGTM\n", "created_at": "2016-04-07T17:03:25Z" }, { "body": "Closed by Commit [763a659](https://github.com/elastic/elasticsearch/commit/763a659830d5011a4e944b5f863bc3c8b7e0382e)\n", "created_at": "2016-04-07T19:56:56Z" } ], "number": 17523, "title": "Fixes reading of CORS pre-flight headers and methods" }
{ "body": "CORS headers and methods config parameters must be read as arrays. This\ncommit fixes the issue. It affects http.cors.allow-methods and\nhttp.cors.allow-headers.\n\nBackports #17523\n", "number": 17524, "review_comments": [], "title": "Fixes reading of CORS pre-flight headers and methods" }
{ "commits": [ { "message": "Fixes reading of CORS pre-flight headers and methods\n\nCORS headers and methods config parameters must be read as arrays. This\ncommit fixes the issue. It affects http.cors.allow-methods and\nhttp.cors.allow-headers.\n\nBackports #17523" } ], "files": [ { "diff": "@@ -59,6 +59,7 @@\n import org.jboss.netty.handler.codec.http.HttpMethod;\n import org.jboss.netty.handler.codec.http.HttpRequestDecoder;\n import org.jboss.netty.handler.timeout.ReadTimeoutException;\n+import sun.security.util.Length;\n \n import java.io.IOException;\n import java.net.InetAddress;\n@@ -105,8 +106,8 @@ public class NettyHttpServerTransport extends AbstractLifecycleComponent<HttpSer\n public static final int DEFAULT_SETTING_PIPELINING_MAX_EVENTS = 10000;\n public static final String DEFAULT_PORT_RANGE = \"9200-9300\";\n \n- private static final String DEFAULT_CORS_METHODS = \"OPTIONS, HEAD, GET, POST, PUT, DELETE\";\n- private static final String DEFAULT_CORS_HEADERS = \"X-Requested-With, Content-Type, Content-Length\";\n+ private static final String[] DEFAULT_CORS_METHODS = { \"OPTIONS\", \"HEAD\", \"GET\", \"POST\", \"PUT\", \"DELETE\" };\n+ private static final String[] DEFAULT_CORS_HEADERS = { \"X-Requested-With\", \"Content-Type\", \"Content-Length\" };\n private static final int DEFAULT_CORS_MAX_AGE = 1728000;\n \n protected final NetworkService networkService;\n@@ -353,14 +354,14 @@ private CorsConfig buildCorsConfig(Settings settings) {\n if (settings.getAsBoolean(SETTING_CORS_ALLOW_CREDENTIALS, false)) {\n builder.allowCredentials();\n }\n- String[] strMethods = settings.getAsArray(settings.get(SETTING_CORS_ALLOW_METHODS, DEFAULT_CORS_METHODS), new String[0]);\n+ String[] strMethods = settings.getAsArray(SETTING_CORS_ALLOW_METHODS, DEFAULT_CORS_METHODS);\n HttpMethod[] methods = new HttpMethod[strMethods.length];\n for (int i = 0; i < methods.length; i++) {\n methods[i] = HttpMethod.valueOf(strMethods[i]);\n }\n return builder.allowedRequestMethods(methods)\n .maxAge(settings.getAsInt(SETTING_CORS_MAX_AGE, DEFAULT_CORS_MAX_AGE))\n- .allowedRequestHeaders(settings.getAsArray(settings.get(SETTING_CORS_ALLOW_HEADERS, DEFAULT_CORS_HEADERS), new String[0]))\n+ .allowedRequestHeaders(settings.getAsArray(SETTING_CORS_ALLOW_HEADERS, DEFAULT_CORS_HEADERS))\n .shortCircuit()\n .build();\n }", "filename": "core/src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java", "status": "modified" }, { "diff": "@@ -31,6 +31,7 @@\n import org.jboss.netty.handler.codec.http.HttpRequest;\n import org.jboss.netty.handler.codec.http.HttpResponse;\n \n+import java.util.HashSet;\n import java.util.Iterator;\n import java.util.Set;\n \n@@ -214,18 +215,11 @@ private static boolean isPreflightRequest(final HttpRequest request) {\n }\n \n private void setAllowMethods(final HttpResponse response) {\n- Set<HttpMethod> methods = config.allowedRequestMethods();\n- Iterator<HttpMethod> iter = methods.iterator();\n- final int size = methods.size();\n- int count = 0;\n- StringBuilder buf = new StringBuilder();\n- while (iter.hasNext()) {\n- buf.append(iter.next().getName().trim());\n- if (++count < size) {\n- buf.append(\", \");\n- }\n+ Set<String> strMethods = new HashSet<>();\n+ for (HttpMethod method : config.allowedRequestMethods()) {\n+ strMethods.add(method.getName().trim());\n }\n- response.headers().set(ACCESS_CONTROL_ALLOW_METHODS, buf.toString());\n+ response.headers().set(ACCESS_CONTROL_ALLOW_METHODS, strMethods);\n }\n \n private void setAllowHeaders(final HttpResponse response) {", 
"filename": "core/src/main/java/org/elasticsearch/http/netty/cors/CorsHandler.java", "status": "modified" }, { "diff": "@@ -0,0 +1,97 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.http.netty;\n+\n+import org.elasticsearch.cache.recycler.MockPageCacheRecycler;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.network.NetworkService;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.MockBigArrays;\n+import org.elasticsearch.http.netty.cors.CorsConfig;\n+import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.jboss.netty.handler.codec.http.HttpMethod;\n+import org.junit.After;\n+import org.junit.Before;\n+import org.junit.Test;\n+\n+import java.util.Arrays;\n+import java.util.HashSet;\n+import java.util.Set;\n+\n+import static org.elasticsearch.http.netty.NettyHttpServerTransport.SETTING_CORS_ALLOW_CREDENTIALS;\n+import static org.elasticsearch.http.netty.NettyHttpServerTransport.SETTING_CORS_ALLOW_HEADERS;\n+import static org.elasticsearch.http.netty.NettyHttpServerTransport.SETTING_CORS_ALLOW_METHODS;\n+import static org.elasticsearch.http.netty.NettyHttpServerTransport.SETTING_CORS_ALLOW_ORIGIN;\n+import static org.elasticsearch.http.netty.NettyHttpServerTransport.SETTING_CORS_ENABLED;\n+import static org.hamcrest.Matchers.equalTo;\n+\n+/**\n+ * Tests for the {@link NettyHttpServerTransport} class.\n+ */\n+public class NettyHttpServerTransportTests extends ESTestCase {\n+ private NetworkService networkService;\n+ private ThreadPool threadPool;\n+ private MockPageCacheRecycler mockPageCacheRecycler;\n+ private MockBigArrays bigArrays;\n+\n+ @Before\n+ public void setup() throws Exception {\n+ networkService = new NetworkService(Settings.EMPTY);\n+ threadPool = new ThreadPool(\"test\");\n+ mockPageCacheRecycler = new MockPageCacheRecycler(Settings.EMPTY, threadPool);\n+ bigArrays = new MockBigArrays(mockPageCacheRecycler, new NoneCircuitBreakerService());\n+ }\n+\n+ @After\n+ public void shutdown() throws Exception {\n+ if (threadPool != null) {\n+ threadPool.shutdownNow();\n+ }\n+ threadPool = null;\n+ networkService = null;\n+ mockPageCacheRecycler = null;\n+ bigArrays = null;\n+ }\n+\n+ @Test\n+ public void testCorsConfig() throws Exception {\n+ final Set<String> methods = new HashSet<>(Arrays.asList(\"get\", \"options\", \"post\"));\n+ final Set<String> headers = new HashSet<>(Arrays.asList(\"Content-Type\", \"Content-Length\"));\n+ final Settings settings = Settings.builder()\n+ .put(SETTING_CORS_ENABLED, true)\n+ .put(SETTING_CORS_ALLOW_ORIGIN, \"*\")\n+ .put(SETTING_CORS_ALLOW_METHODS, 
Strings.collectionToCommaDelimitedString(methods))\n+ .put(SETTING_CORS_ALLOW_HEADERS, Strings.collectionToCommaDelimitedString(headers))\n+ .put(SETTING_CORS_ALLOW_CREDENTIALS, true)\n+ .build();\n+ final NettyHttpServerTransport transport = new NettyHttpServerTransport(settings, networkService, bigArrays);\n+ final CorsConfig corsConfig = transport.getCorsConfig();\n+ assertThat(corsConfig.isAnyOriginSupported(), equalTo(true));\n+ assertThat(corsConfig.allowedRequestHeaders(), equalTo(headers));\n+ final Set<String> allowedRequestMethods = new HashSet<>();\n+ for (HttpMethod method : corsConfig.allowedRequestMethods()) {\n+ allowedRequestMethods.add(method.getName());\n+ }\n+ assertThat(allowedRequestMethods, equalTo(methods));\n+ transport.close();\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/http/netty/NettyHttpServerTransportTests.java", "status": "added" } ] }
{ "body": "**Elasticsearch version**: 2.3.0\n**JVM version**: 1.8.0_31\n**OS version**: MAC OS X 10.10.5\n**Description of the problem including expected versus actual behavior**:\nDefinition of an accepted a custom header does not seem to work.\n\n**Steps to reproduce**:\n1. Add the following lines to the configuration file\n\n```\nhttp.cors.enabled: true\nhttp.cors.allow-origin: \"*\"\nhttp.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE\nhttp.cors.allow-headers: \"X-Requested-With, Content-Type, Content-Length, X-User\"\n```\n1. Ran an ajax POST request with X-User header.\n2. Getting response \"Request header field x-user is not allowed by Access-Control-Allow-Headers in preflight response.\"\n", "comments": [ { "body": "I am afraid I will need to reopen this.\n\n**Elasticsearch version**: 2.3.0\n**JVM version**: 1.8.0_31\n**OS version**: MAC OS X 10.10.5\n\n**Use of custom header does not seem to work reliably.**\n\nHere is the CORS section of my configuration (elasticsearch.yml) file:\n\n```\nhttp.cors.enabled: true\nhttp.cors.allow-origin: \"*\"\nhttp.cors.allow-headers: \"X-Requested-With, Content-Type, Content-Length, X-User\"\n```\n\nThe client code is performing the following call:\n\n```\njQuery.ajax({\n url: requrl,\n data: reqdata,\n type: 'POST',\n headers: {\"X-User\": user},\n success: function (result, status, xhr) {\n resolve(result);\n },\n error: function (xhr, status, error) {\n reject(error);\n }\n});\n\n```\n\nNow, that request always goes through with Firefox 45.0.1.\nHowever, it does not work for:\n- Safari Version 9.1 (10601.5.17.4)\n- Chrome Version 49.0.2623.110 (64-bit) \n\nWhen inspecting with Chrome I get:\n\n```\nXMLHttpRequest cannot load http://localhost:9200/myindex/mytype/_search. \nRequest header field x-user is not allowed by Access-Control-Allow-Headers in preflight response.\n```\n\nThese are the headers details I see in Chrome:\n\n```\nGeneral\n-----------\nRequest URL:http://localhost:9200/myindex/mytype/_search\nRequest Method:OPTIONS\nStatus Code:200 OK\nRemote Address:[::1]:9200\n\nResponse Headers\n--------------------------\nAccess-Control-Allow-Methods:\nAccess-Control-Allow-Origin:*\nAccess-Control-Max-Age:1728000\ncontent-length:0\ndate:Mon, 04 Apr 2016 13:49:00 GMT\n\nRequest Headers\n-----------------------\nAccept:*/*\nAccept-Encoding:gzip, deflate, sdch\nAccept-Language:en-US,en;q=0.8,it;q=0.6\nAccess-Control-Request-Headers:accept, content-type, x-user\nAccess-Control-Request-Method:POST\nCache-Control:no-cache\nConnection:keep-alive\nHost:localhost:9200\nOrigin:http://localhost:3333\nPragma:no-cache\nReferer:http://localhost:3333/\nUser-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.110 Safari/537.36\n```\n\n**The very same configuration, index and client code works reliably in 2.2.0**.\nAfter seeing in the release notes the entry: “More robust handling of CORS HTTP Access Control” I am wondering if that has anything to do with it.\n", "created_at": "2016-04-04T13:51:32Z" }, { "body": "I have experienced same issue which now breaks Grafana UI.\n", "created_at": "2016-04-04T17:40:17Z" }, { "body": "- The problem appears to be in `NettyHttpServerTransport.buildCorsConfig()` where first parameter to the `getAsArray()` function should be the setting name and not the value of the setting. 
\n- It affects `cors-allow-headers` as well.\n- It's also specific to `2.3 branch` as the whole settings system seems to have been reworked on `master`.\n\nThe following code\n\n``` java\n String[] strMethods = settings.getAsArray(settings.get(SETTING_CORS_ALLOW_METHODS, DEFAULT_CORS_METHODS), new String[0]);\n HttpMethod[] methods = new HttpMethod[strMethods.length];\n for (int i = 0; i < methods.length; i++) {\n methods[i] = HttpMethod.valueOf(strMethods[i]);\n }\n return builder.allowedRequestMethods(methods)\n .maxAge(settings.getAsInt(SETTING_CORS_MAX_AGE, DEFAULT_CORS_MAX_AGE))\n .allowedRequestHeaders(settings.getAsArray(settings.get(SETTING_CORS_ALLOW_HEADERS, DEFAULT_CORS_HEADERS), new String[0]))\n .shortCircuit()\n .build();\n```\n\nshould say\n\n``` java\n String[] strMethods = settings.getAsArray(SETTING_CORS_ALLOW_METHODS, DEFAULT_CORS_METHODS);\n HttpMethod[] methods = new HttpMethod[strMethods.length];\n for (int i = 0; i < methods.length; i++) {\n methods[i] = HttpMethod.valueOf(strMethods[i]);\n }\n return builder.allowedRequestMethods(methods)\n .maxAge(settings.getAsInt(SETTING_CORS_MAX_AGE, DEFAULT_CORS_MAX_AGE))\n .allowedRequestHeaders(settings.getAsArray(SETTING_CORS_ALLOW_HEADERS, DEFAULT_CORS_HEADERS))\n .shortCircuit()\n .build();\n\n```\n", "created_at": "2016-04-04T19:02:28Z" }, { "body": "Thank you for reporting the issue. We have PRs open to fix them for different versions.\n\n#17523 #17524 #17525 \n", "created_at": "2016-04-05T13:32:37Z" } ], "number": 17483, "title": "Problems with http.cors.allow-methods" }
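The diagnosis in the comment above is the whole bug: `getAsArray()` was handed the *value* returned by `settings.get(SETTING_CORS_ALLOW_METHODS, DEFAULT_CORS_METHODS)` instead of the setting *key*, so the lookup never matched anything and the empty-array fallback won. Below is a minimal standalone sketch of that key-versus-value mistake; the map-backed `CorsSettingsSketch` class and its hard-coded defaults are illustrative only, not Elasticsearch's actual `Settings` class.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for a flat key/value settings store; this is NOT
// Elasticsearch's org.elasticsearch.common.settings.Settings class.
public class CorsSettingsSketch {
    private static final String ALLOW_METHODS_KEY = "http.cors.allow-methods";

    private final Map<String, String> flat = new HashMap<>();

    String get(String key, String defaultValue) {
        return flat.getOrDefault(key, defaultValue);
    }

    // Splits a comma-delimited value, mimicking how array settings can be
    // written as a single string in elasticsearch.yml.
    String[] getAsArray(String key, String[] defaultValue) {
        String value = flat.get(key);
        return value == null ? defaultValue : value.split("\\s*,\\s*");
    }

    public static void main(String[] args) {
        CorsSettingsSketch settings = new CorsSettingsSketch();
        settings.flat.put(ALLOW_METHODS_KEY, "OPTIONS, HEAD, GET, POST");

        // Buggy pattern: the *value* ("OPTIONS, HEAD, GET, POST") is used as the
        // lookup key; no such key exists, so the empty default is returned.
        String[] buggy = settings.getAsArray(
                settings.get(ALLOW_METHODS_KEY, "OPTIONS, HEAD, GET, POST"), new String[0]);

        // Fixed pattern: look the setting up by its key, with array defaults.
        String[] fixed = settings.getAsArray(
                ALLOW_METHODS_KEY, new String[] { "OPTIONS", "HEAD", "GET", "POST" });

        System.out.println("buggy: " + Arrays.toString(buggy)); // buggy: []
        System.out.println("fixed: " + Arrays.toString(fixed)); // fixed: [OPTIONS, HEAD, GET, POST]
    }
}
```

The buggy pattern is why a configured `http.cors.allow-methods`/`http.cors.allow-headers` in 2.3.0 produced an empty `Access-Control-Allow-Methods` header in the preflight response shown earlier in this thread.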
{ "body": "CORS headers and methods config parameters must be read as arrays. This\ncommit fixes the issue. It affects http.cors.allow-methods and\nhttp.cors.allow-headers.\n\nFixes #17483 \n", "number": 17523, "review_comments": [], "title": "Fixes reading of CORS pre-flight headers and methods" }
{ "commits": [ { "message": "Fixes reading of CORS pre-flight headers and methods\n\nCORS headers and methods config parameters must be read as arrays. This\ncommit fixes the issue. It affects http.cors.allow-methods and\nhttp.cors.allow-headers.\n\nFixes #17483" } ], "files": [ { "diff": "@@ -392,14 +392,14 @@ private CorsConfig buildCorsConfig(Settings settings) {\n if (SETTING_CORS_ALLOW_CREDENTIALS.get(settings)) {\n builder.allowCredentials();\n }\n- String[] strMethods = settings.getAsArray(SETTING_CORS_ALLOW_METHODS.get(settings), new String[0]);\n+ String[] strMethods = settings.getAsArray(SETTING_CORS_ALLOW_METHODS.getKey());\n HttpMethod[] methods = Arrays.asList(strMethods)\n .stream()\n .map(HttpMethod::valueOf)\n .toArray(size -> new HttpMethod[size]);\n return builder.allowedRequestMethods(methods)\n .maxAge(SETTING_CORS_MAX_AGE.get(settings))\n- .allowedRequestHeaders(settings.getAsArray(SETTING_CORS_ALLOW_HEADERS.get(settings), new String[0]))\n+ .allowedRequestHeaders(settings.getAsArray(SETTING_CORS_ALLOW_HEADERS.getKey()))\n .shortCircuit()\n .build();\n }", "filename": "core/src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java", "status": "modified" }, { "diff": "@@ -31,7 +31,6 @@\n import org.jboss.netty.handler.codec.http.HttpRequest;\n import org.jboss.netty.handler.codec.http.HttpResponse;\n \n-import java.util.List;\n import java.util.stream.Collectors;\n \n import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.ACCESS_CONTROL_ALLOW_CREDENTIALS;\n@@ -214,10 +213,9 @@ private static boolean isPreflightRequest(final HttpRequest request) {\n }\n \n private void setAllowMethods(final HttpResponse response) {\n- response.headers().set(ACCESS_CONTROL_ALLOW_METHODS,\n- String.join(\", \", config.allowedRequestMethods().stream()\n- .map(HttpMethod::getName)\n- .collect(Collectors.toList())).trim());\n+ response.headers().set(ACCESS_CONTROL_ALLOW_METHODS, config.allowedRequestMethods().stream()\n+ .map(m -> m.getName().trim())\n+ .collect(Collectors.toList()));\n }\n \n private void setAllowHeaders(final HttpResponse response) {", "filename": "core/src/main/java/org/elasticsearch/http/netty/cors/CorsHandler.java", "status": "modified" }, { "diff": "@@ -0,0 +1,92 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.http.netty;\n+\n+import org.elasticsearch.cache.recycler.MockPageCacheRecycler;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.network.NetworkService;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.MockBigArrays;\n+import org.elasticsearch.http.netty.cors.CorsConfig;\n+import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.jboss.netty.handler.codec.http.HttpMethod;\n+import org.junit.After;\n+import org.junit.Before;\n+\n+import java.util.Arrays;\n+import java.util.HashSet;\n+import java.util.Set;\n+import java.util.stream.Collectors;\n+\n+import static org.elasticsearch.http.HttpTransportSettings.SETTING_CORS_ALLOW_CREDENTIALS;\n+import static org.elasticsearch.http.HttpTransportSettings.SETTING_CORS_ALLOW_HEADERS;\n+import static org.elasticsearch.http.HttpTransportSettings.SETTING_CORS_ALLOW_METHODS;\n+import static org.elasticsearch.http.HttpTransportSettings.SETTING_CORS_ALLOW_ORIGIN;\n+import static org.elasticsearch.http.HttpTransportSettings.SETTING_CORS_ENABLED;\n+import static org.hamcrest.Matchers.equalTo;\n+\n+/**\n+ * Tests for the {@link NettyHttpServerTransport} class.\n+ */\n+public class NettyHttpServerTransportTests extends ESTestCase {\n+ private NetworkService networkService;\n+ private ThreadPool threadPool;\n+ private MockPageCacheRecycler mockPageCacheRecycler;\n+ private MockBigArrays bigArrays;\n+\n+ @Before\n+ public void setup() throws Exception {\n+ networkService = new NetworkService(Settings.EMPTY);\n+ threadPool = new ThreadPool(\"test\");\n+ mockPageCacheRecycler = new MockPageCacheRecycler(Settings.EMPTY, threadPool);\n+ bigArrays = new MockBigArrays(mockPageCacheRecycler, new NoneCircuitBreakerService());\n+ }\n+\n+ @After\n+ public void shutdown() throws Exception {\n+ if (threadPool != null) {\n+ threadPool.shutdownNow();\n+ }\n+ threadPool = null;\n+ networkService = null;\n+ mockPageCacheRecycler = null;\n+ bigArrays = null;\n+ }\n+\n+ public void testCorsConfig() {\n+ final Set<String> methods = new HashSet<>(Arrays.asList(\"get\", \"options\", \"post\"));\n+ final Set<String> headers = new HashSet<>(Arrays.asList(\"Content-Type\", \"Content-Length\"));\n+ final Settings settings = Settings.builder()\n+ .put(SETTING_CORS_ENABLED.getKey(), true)\n+ .put(SETTING_CORS_ALLOW_ORIGIN.getKey(), \"*\")\n+ .put(SETTING_CORS_ALLOW_METHODS.getKey(), Strings.collectionToCommaDelimitedString(methods))\n+ .put(SETTING_CORS_ALLOW_HEADERS.getKey(), Strings.collectionToCommaDelimitedString(headers))\n+ .put(SETTING_CORS_ALLOW_CREDENTIALS.getKey(), true)\n+ .build();\n+ final NettyHttpServerTransport transport = new NettyHttpServerTransport(settings, networkService, bigArrays, threadPool);\n+ final CorsConfig corsConfig = transport.getCorsConfig();\n+ assertThat(corsConfig.isAnyOriginSupported(), equalTo(true));\n+ assertThat(corsConfig.allowedRequestHeaders(), equalTo(headers));\n+ assertThat(corsConfig.allowedRequestMethods().stream().map(HttpMethod::getName).collect(Collectors.toSet()), equalTo(methods));\n+ transport.close();\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/http/netty/NettyHttpServerTransportTests.java", "status": "added" } ] }
{ "body": "If the `DiskThresholdDecider` is enabled, it will attempt to see how much disk\nspace is used _after_ a shard has been allocated to a node (to know whether it\nis over the high watermark).\n\nHowever, in the case that a shard is using a custom data_path setting, the\nshard's size _may_ not affect the amount of disk on the node's configured\n`path.data`.\n\nThis can lead to weird things like shadow replicas not being relocated within a\ncluster because all nodes think they don't have enough space for them, even\nthough they do because it's a different filesystem entirely.\n", "comments": [], "number": 17460, "title": "DiskThresholdDecider adds shard size even if shard is on different filesystem" }
{ "body": "Otherwise, when trying to calculate the amount of disk usage _after_ the\nshard has been allocated, it has incorrectly subtracted the shadow\nreplica size.\n\nResolves #17460\n", "number": 17509, "review_comments": [], "title": "When considering the size of shadow replica shards, set size to 0" }
{ "commits": [ { "message": "When considering the size of shadow replica indices, set size to 0\n\nOtherwise, when trying to calculate the amount of disk usage *after* the\nshard has been allocated, it has incorrectly subtracted the shadow\nreplica size.\n\nResolves #17460" } ], "files": [ { "diff": "@@ -30,7 +30,10 @@\n import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n import org.elasticsearch.action.admin.indices.stats.ShardStats;\n import org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction;\n+import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.allocation.decider.DiskThresholdDecider;\n@@ -330,7 +333,7 @@ public void onResponse(IndicesStatsResponse indicesStatsResponse) {\n ShardStats[] stats = indicesStatsResponse.getShards();\n ImmutableOpenMap.Builder<String, Long> newShardSizes = ImmutableOpenMap.builder();\n ImmutableOpenMap.Builder<ShardRouting, String> newShardRoutingToDataPath = ImmutableOpenMap.builder();\n- buildShardLevelInfo(logger, stats, newShardSizes, newShardRoutingToDataPath);\n+ buildShardLevelInfo(logger, stats, newShardSizes, newShardRoutingToDataPath, clusterService.state());\n shardSizes = newShardSizes.build();\n shardRoutingToDataPath = newShardRoutingToDataPath.build();\n }\n@@ -379,14 +382,24 @@ public void onFailure(Throwable e) {\n }\n \n static void buildShardLevelInfo(ESLogger logger, ShardStats[] stats, ImmutableOpenMap.Builder<String, Long> newShardSizes,\n- ImmutableOpenMap.Builder<ShardRouting, String> newShardRoutingToDataPath) {\n+ ImmutableOpenMap.Builder<ShardRouting, String> newShardRoutingToDataPath, ClusterState state) {\n+ MetaData meta = state.getMetaData();\n for (ShardStats s : stats) {\n+ IndexMetaData indexMeta = meta.index(s.getShardRouting().index());\n+ Settings indexSettings = indexMeta == null ? 
null : indexMeta.getSettings();\n newShardRoutingToDataPath.put(s.getShardRouting(), s.getDataPath());\n long size = s.getStats().getStore().sizeInBytes();\n String sid = ClusterInfo.shardIdentifierFromRouting(s.getShardRouting());\n if (logger.isTraceEnabled()) {\n logger.trace(\"shard: {} size: {}\", sid, size);\n }\n+ if (indexSettings != null && IndexMetaData.isIndexUsingShadowReplicas(indexSettings)) {\n+ // Shards on a shared filesystem should be considered of size 0\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(\"shard: {} is using shadow replicas and will be treated as size 0\", sid);\n+ }\n+ size = 0;\n+ }\n newShardSizes.put(sid, size);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/InternalClusterInfoService.java", "status": "modified" }, { "diff": "@@ -23,11 +23,16 @@\n import org.elasticsearch.action.admin.cluster.node.stats.NodeStats;\n import org.elasticsearch.action.admin.indices.stats.CommonStats;\n import org.elasticsearch.action.admin.indices.stats.ShardStats;\n+import org.elasticsearch.cluster.ClusterName;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.ShardRoutingHelper;\n import org.elasticsearch.cluster.routing.UnassignedInfo;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n+import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.DummyTransportAddress;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.shard.ShardPath;\n@@ -113,7 +118,8 @@ public void testFillShardLevelInfo() {\n };\n ImmutableOpenMap.Builder<String, Long> shardSizes = ImmutableOpenMap.builder();\n ImmutableOpenMap.Builder<ShardRouting, String> routingToPath = ImmutableOpenMap.builder();\n- InternalClusterInfoService.buildShardLevelInfo(logger, stats, shardSizes, routingToPath);\n+ ClusterState state = ClusterState.builder(new ClusterName(\"blarg\")).version(0).build();\n+ InternalClusterInfoService.buildShardLevelInfo(logger, stats, shardSizes, routingToPath, state);\n assertEquals(2, shardSizes.size());\n assertTrue(shardSizes.containsKey(ClusterInfo.shardIdentifierFromRouting(test_0)));\n assertTrue(shardSizes.containsKey(ClusterInfo.shardIdentifierFromRouting(test_1)));\n@@ -127,6 +133,53 @@ public void testFillShardLevelInfo() {\n assertEquals(test1Path.getParent().getParent().getParent().toAbsolutePath().toString(), routingToPath.get(test_1));\n }\n \n+ public void testFillShardsWithShadowIndices() {\n+ final Index index = new Index(\"non-shadow\", \"0xcafe0000\");\n+ ShardRouting s0 = ShardRouting.newUnassigned(index, 0, null, false, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, \"foo\"));\n+ ShardRoutingHelper.initialize(s0, \"node1\");\n+ ShardRoutingHelper.moveToStarted(s0);\n+ Path i0Path = createTempDir().resolve(\"indices\").resolve(index.getUUID()).resolve(\"0\");\n+ CommonStats commonStats0 = new CommonStats();\n+ commonStats0.store = new StoreStats(100, 1);\n+ final Index index2 = new Index(\"shadow\", \"0xcafe0001\");\n+ ShardRouting s1 = ShardRouting.newUnassigned(index2, 0, null, false, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, \"foo\"));\n+ ShardRoutingHelper.initialize(s1, \"node2\");\n+ ShardRoutingHelper.moveToStarted(s1);\n+ Path i1Path = 
createTempDir().resolve(\"indices\").resolve(index2.getUUID()).resolve(\"0\");\n+ CommonStats commonStats1 = new CommonStats();\n+ commonStats1.store = new StoreStats(1000, 1);\n+ ShardStats[] stats = new ShardStats[] {\n+ new ShardStats(s0, new ShardPath(false, i0Path, i0Path, s0.shardId()), commonStats0 , null),\n+ new ShardStats(s1, new ShardPath(false, i1Path, i1Path, s1.shardId()), commonStats1 , null)\n+ };\n+ ImmutableOpenMap.Builder<String, Long> shardSizes = ImmutableOpenMap.builder();\n+ ImmutableOpenMap.Builder<ShardRouting, String> routingToPath = ImmutableOpenMap.builder();\n+ ClusterState state = ClusterState.builder(new ClusterName(\"blarg\"))\n+ .version(0)\n+ .metaData(MetaData.builder()\n+ .put(IndexMetaData.builder(\"non-shadow\")\n+ .settings(Settings.builder()\n+ .put(IndexMetaData.SETTING_INDEX_UUID, \"0xcafe0000\")\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT))\n+ .numberOfShards(1)\n+ .numberOfReplicas(0))\n+ .put(IndexMetaData.builder(\"shadow\")\n+ .settings(Settings.builder()\n+ .put(IndexMetaData.SETTING_INDEX_UUID, \"0xcafe0001\")\n+ .put(IndexMetaData.SETTING_SHADOW_REPLICAS, true)\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT))\n+ .numberOfShards(1)\n+ .numberOfReplicas(0)))\n+ .build();\n+ logger.info(\"--> calling buildShardLevelInfo with state: {}\", state);\n+ InternalClusterInfoService.buildShardLevelInfo(logger, stats, shardSizes, routingToPath, state);\n+ assertEquals(2, shardSizes.size());\n+ assertTrue(shardSizes.containsKey(ClusterInfo.shardIdentifierFromRouting(s0)));\n+ assertTrue(shardSizes.containsKey(ClusterInfo.shardIdentifierFromRouting(s1)));\n+ assertEquals(100L, shardSizes.get(ClusterInfo.shardIdentifierFromRouting(s0)).longValue());\n+ assertEquals(0L, shardSizes.get(ClusterInfo.shardIdentifierFromRouting(s1)).longValue());\n+ }\n+\n public void testFillDiskUsage() {\n ImmutableOpenMap.Builder<String, DiskUsage> newLeastAvaiableUsages = ImmutableOpenMap.builder();\n ImmutableOpenMap.Builder<String, DiskUsage> newMostAvaiableUsages = ImmutableOpenMap.builder();", "filename": "core/src/test/java/org/elasticsearch/cluster/DiskUsageTests.java", "status": "modified" } ] }
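The change above reports a size of 0 for shards of shadow-replica (shared-filesystem) indices, so the disk threshold decider does not count space that does not live on the node's own `path.data`. A deliberately simplified, standalone sketch of that idea follows; the shard-id parsing and the `shadowIndices` set are made up for illustration, whereas the real code checks `IndexMetaData.isIndexUsingShadowReplicas` against the cluster state.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Conceptual sketch only: shards of shared-filesystem (shadow replica) indices
// are reported with size 0 so that "disk used after allocation" projections
// do not count space that lives on a different filesystem.
public class ShadowShardSizeSketch {

    static Map<String, Long> buildShardSizes(Map<String, Long> reportedSizes,
                                             Set<String> shadowIndices) {
        Map<String, Long> sizes = new HashMap<>();
        for (Map.Entry<String, Long> e : reportedSizes.entrySet()) {
            String shardId = e.getKey();                               // e.g. "[shadow][0]"
            String indexName = shardId.substring(1, shardId.indexOf(']'));
            // Shards on a shared filesystem should be considered of size 0.
            long size = shadowIndices.contains(indexName) ? 0L : e.getValue();
            sizes.put(shardId, size);
        }
        return sizes;
    }

    public static void main(String[] args) {
        Map<String, Long> reported = new HashMap<>();
        reported.put("[non-shadow][0]", 100L);
        reported.put("[shadow][0]", 1000L);

        Map<String, Long> sizes = buildShardSizes(reported, Set.of("shadow"));
        System.out.println(sizes.get("[non-shadow][0]")); // 100
        System.out.println(sizes.get("[shadow][0]"));     // 0
    }
}
```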
{ "body": "**Description of the problem including expected versus actual behavior**: bin/elasticsearch-plugin doesn't quote the Java path in the same way that bin/elasticsearch does, which can cause problems when installing plugins and your JAVA_HOME has a space in it. \n\nElasticsearch-plugin: https://github.com/elastic/elasticsearch/blob/master/distribution/src/main/resources/bin/elasticsearch-plugin#L70\n\nvs\n\nhttps://github.com/elastic/elasticsearch/blob/master/distribution/src/main/resources/bin/elasticsearch#L98\n\nI admit this isn't likely to cause problems in the real world, because it should only be problematic when there are spaces in the JAVA_HOME path (bad idea!), but if it happens, you end up with `bin/elasticsearch` working, while `bin/elasticsearch-plugin` does not. \n", "comments": [ { "body": "The quotes in the `elasticsearch` script are not part of the value of `JAVA` after the assignment. Thus, the [line](https://github.com/elastic/elasticsearch/blob/7d4ed5b19ebc25d608d70babab0fb08f0c9ea122/distribution/src/main/resources/bin/elasticsearch-plugin#L70) you point to in the `elasticsearch-plugin` script is not the problem.\n\nThe problem here is the [use of `eval` when invoking `java`](https://github.com/elastic/elasticsearch/blob/7d4ed5b19ebc25d608d70babab0fb08f0c9ea122/distribution/src/main/resources/bin/elasticsearch-plugin#L113) using `JAVA`. While `$JAVA` is quoted there, evaluation of the parameters causes those quotes to be removed (this is normal shell behavior)! Note that the `elasticsearch` script [uses `exec` instead](https://github.com/elastic/elasticsearch/blob/master/distribution/src/main/resources/bin/elasticsearch#129), hence the difference.\n\nA simple solution is to use single quotes or escaped quotes around `\"$JAVA\"` (as other arguments here already do). Note that changing to use `exec` is a much bigger change because of the quoting of arguments that is done earlier in the script.\n\nI opened #17496.\n", "created_at": "2016-04-04T03:35:18Z" }, { "body": "Closed by #17496.\n", "created_at": "2016-04-08T11:25:07Z" } ], "number": 17495, "title": "elasticsearch-plugin doesn't quote the java path" }
{ "body": "This commit quotes the variable that contains the path to the java\nbinary. Without these quotes, when the arguments to eval are evaluated\nthe existing quotes will be removed leading to unquoted use of the path\nto the java binary. If this path contains spaces, evaluation will fail.\n\nCloses #17495\n", "number": 17496, "review_comments": [], "title": "Quote path to java binary" }
{ "commits": [ { "message": "Quote path to java binary\n\nThis commit quotes the variable that contains the path to the java\nbinary. Without these quotes, when the arguments to eval are evaluated\nthe existing quotes will be removed leading to unquoted use of the path\nto the java binary. If this path contains spaces, evaluation will fail." }, { "message": "Add test of plugin script if JAVA_HOME has a space" } ], "files": [ { "diff": "@@ -110,4 +110,4 @@ fi\n HOSTNAME=`hostname | cut -d. -f1`\n export HOSTNAME\n \n-eval \"$JAVA\" -client -Delasticsearch -Des.path.home=\"\\\"$ES_HOME\\\"\" $properties -cp \"\\\"$ES_HOME/lib/*\\\"\" org.elasticsearch.plugins.PluginCli $args\n+eval \"\\\"$JAVA\\\"\" -client -Delasticsearch -Des.path.home=\"\\\"$ES_HOME\\\"\" $properties -cp \"\\\"$ES_HOME/lib/*\\\"\" org.elasticsearch.plugins.PluginCli $args", "filename": "distribution/src/main/resources/bin/elasticsearch-plugin", "status": "modified" }, { "diff": "@@ -455,3 +455,24 @@ fi\n fi\n remove_jvm_example\n }\n+\n+@test \"[$GROUP] test java home with space\" {\n+ # preserve JAVA_HOME\n+ local java_home=$JAVA_HOME\n+\n+ # create a JAVA_HOME with a space\n+ local java=$(which java)\n+ local temp=`mktemp -d --suffix=\"java home\"`\n+ mkdir -p \"$temp/bin\"\n+ ln -s \"$java\" \"$temp/bin/java\"\n+ export JAVA_HOME=\"$temp\"\n+\n+ # this will fail if the elasticsearch-plugin script does not\n+ # properly handle JAVA_HOME with spaces\n+ \"$ESHOME/bin/elasticsearch-plugin\" list\n+\n+ rm -rf \"$temp\"\n+\n+ # restore JAVA_HOME\n+ export JAVA_HOME=$java_home\n+}", "filename": "qa/vagrant/src/test/resources/packaging/scripts/module_and_plugin_test_cases.bash", "status": "modified" } ] }
{ "body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue.\n-->\n\n<!--\nIf you are filing a bug report, please remove the below feature\nrequest block and provide responses for all of the below items.\n-->\n\n**Elasticsearch version**: 5.0.0-alpha1 78ab6c5b7ff82da7f7d3c059b4a43d80bad188fb\n\n**JVM version**: `OpenJDK Runtime Environment (build 1.8.0_72-internal-b15)`\n\n**OS version**: Ubuntu 14.04\n\n**Description of the problem including expected versus actual behavior**:\n\nElasticsearch expects max open file descriptors to be set to 65536, but init scripts set it to 65535.\n\n**Steps to reproduce**:\n1. Launch elasticsearch using `/etc/init.d/elasticsearch`\n\n**Provide logs (if relevant)**:\n\n```\n[2016-03-30 23:12:05,787][ERROR][bootstrap ] Exception\njava.lang.RuntimeException: max file descriptors [65535] for elasticsearch process likely too low, increase to at least [65536]\n at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:79)\n at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:60)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:188)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:264)\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:111)\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:106)\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:88)\n at org.elasticsearch.cli.Command.main(Command.java:53)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:74)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)\n```\n", "comments": [ { "body": "I had the setting in /etc/security/limits.conf with alpha3 release and starting ES still throws the error \nelasticsearch - nofile 65536\n\nand init file contained following lines:\n\n# Run Elasticsearch as this user ID and group ID\n\nES_USER=elasticsearch\nES_GROUP=elasticsearch\n\n# Maximum number of open files\n\nMAX_OPEN_FILES=65536\n\nError logs:\n\n[2016-06-02 20:05:44,637][ERROR][bootstrap ] [es-tst-m01] Exception\njava.lang.RuntimeException: bootstrap checks failed\nmax file descriptors [65535] for elasticsearch process likely too low, increase to at least [65536]\n at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:125)\n at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:85)\n", "created_at": "2016-06-02T20:11:52Z" }, { "body": "@ajaybhatnagar How are you starting Elasticsearch? What are the limits for the root user? Note that the limits for the root user must be at least as high as the elasticsearch user.\n", "created_at": "2016-06-02T20:19:29Z" }, { "body": "Root and es user settings for nofile:\nroot - nofile 100000\nelasticsearch - nofile 65536\n\nulimit -n output for ES user:\n65536\n\nStarting ES with service elasticsearch start\n", "created_at": "2016-06-02T20:44:18Z" }, { "body": "@ajaybhatnagar I'm going to make some assumptions here, please correct if any of them are wrong.\n\n> Starting ES with service elasticsearch start\n\nI'm assuming that you deliberately left `sudo` off here. From this, I'm assuming that you are thus logged in as the root user. 
I'm further assuming that you made the above changes to `/etc/security/limits.conf` and did not logout and log back in.\n\nIf these assumptions are correct, the solution to the problem is for you to just log out and log back in.\n", "created_at": "2016-06-02T22:18:11Z" }, { "body": "Rebooted the node and still the same error. \n\nInit script has the line below, yet startup is finding max file descriptors below the configured value. Hardcoded value picked from somewhere else or overwritten ?\n\n# Maximum number of open files\n\nMAX_OPEN_FILES=65536\nCheck on the node:\nroot@es-tst-m01:/es1/logs/elasticsearch# ulimit -n\n100000\nroot@es-tst-m01:/es1/logs/elasticsearch# su - elasticsearch\nelasticsearch@es-tst-m01:~$ ulimit -n\n65536\n[2016-06-03 13:35:51,324][ERROR][bootstrap ] [es-tst-m01] Exception\njava.lang.RuntimeException: bootstrap checks failed\nmax file descriptors [65535] for elasticsearch process likely too low, increase to at least [65536]\n", "created_at": "2016-06-03T13:42:36Z" }, { "body": "You have a configuration error somewhere, it's just a matter of finding where. Can you check `cat /proc/sys/fs/file-max`? If it's too low, execute `sysctl -w fs.file-max=65536` or some other higher value to raise it. Consider putting this value in `/etc/sysctl.conf` so that it persists across system reboots. You will have to logout and log back in if you changed this value.\n", "created_at": "2016-06-03T13:56:36Z" }, { "body": "I ran into the same problem just now. And had to change the nofiles in /usr/lib/systemd/system/elasticsearch.service. After changing this to the same value as fs.file-max the message disappeared.\n", "created_at": "2016-06-03T14:23:25Z" }, { "body": "Caused by setting in /etc/deafult/elasticsearch. Closed.\nthx\n", "created_at": "2016-06-03T15:56:59Z" }, { "body": "hi \ni have the same error\nbut i don't have /etc/deafult/elasticsearch\ncan u help me \nthx\n", "created_at": "2016-06-06T01:53:40Z" }, { "body": "@X-Mars This issue is closed, and was specific to how the defaults that Elasticsearch shipped with were inconsistent with a warning log message that it would produce. Please open a new post on the [Elastic Discourse forum](https://discuss.elastic.co) with details about your setup and debugging steps that you have already gone through.\n", "created_at": "2016-06-06T01:57:00Z" }, { "body": "I want to run elasticsearch in docker, but it shows the error when i am running with the command 'docker-compose up'. 
what should i do with this?\r\n\r\nMy docker-compose.yml:\r\n\r\n<pre>\r\nversion: '2.0'\r\nservices:\r\n elasticsearch:\r\n image: docker.elastic.co/elasticsearch/elasticsearch:6.1.0\r\n container_name: elasticsearch\r\n environment:\r\n - cluster.name=docker-cluster\r\n - bootstrap.memory_lock=true\r\n - \"ES_JAVA_OPTS=-Xms512m -Xmx512m\"\r\n ulimits:\r\n memlock:\r\n soft: -1\r\n hard: -1\r\n volumes:\r\n - esdata1:/usr/share/elasticsearch/data\r\n ports:\r\n - 9200:9200\r\n networks:\r\n - esnet\r\n elasticsearch2:\r\n image: docker.elastic.co/elasticsearch/elasticsearch:6.1.0\r\n container_name: elasticsearch2\r\n environment:\r\n - cluster.name=docker-cluster\r\n - bootstrap.memory_lock=true\r\n - \"ES_JAVA_OPTS=-Xms512m -Xmx512m\"\r\n - \"discovery.zen.ping.unicast.hosts=elasticsearch\"\r\n ulimits:\r\n memlock:\r\n soft: -1\r\n hard: -1\r\n volumes:\r\n - esdata2:/usr/share/elasticsearch/data\r\n networks:\r\n - esnet\r\n\r\nvolumes:\r\n esdata1:\r\n driver: local\r\n esdata2:\r\n driver: local\r\n\r\nnetworks:\r\n esnet:\r\n</pre>\r\n", "created_at": "2017-12-18T10:47:23Z" }, { "body": "@and1990 Please look at the content in the [comment](https://github.com/elastic/elasticsearch/issues/17430#issuecomment-223853511) immediately above yours.", "created_at": "2017-12-18T10:50:22Z" }, { "body": "@jasontedor Thanks. I have solved the problem. Those guys who use elasticsearch in docker have a look at this document https://blog.docker.com/2015/04/docker-release-1-6, the Ulimits section. ", "created_at": "2017-12-19T05:40:45Z" }, { "body": "and reboot system", "created_at": "2018-04-09T00:49:46Z" } ], "number": 17430, "title": "Elasticsearch init scripts set max open files to 65535, but expects 65536" }
{ "body": "Relates to #17430\n", "number": 17431, "review_comments": [], "title": "Set MAX_OPEN_FILES to 65536" }
{ "commits": [ { "message": "Set MAX_OPEN_FILES to 65536\n\nRelates to #17430" } ], "files": [ { "diff": "@@ -60,7 +60,7 @@ ES_HOME=/usr/share/$NAME\n #ES_JAVA_OPTS=\n \n # Maximum number of open files\n-MAX_OPEN_FILES=65535\n+MAX_OPEN_FILES=65536\n \n # Maximum amount of locked memory\n #MAX_LOCKED_MEMORY=", "filename": "distribution/deb/src/main/packaging/init.d/elasticsearch", "status": "modified" }, { "diff": "@@ -35,7 +35,7 @@ fi\n ES_USER=\"elasticsearch\"\n ES_GROUP=\"elasticsearch\"\n ES_HOME=\"/usr/share/elasticsearch\"\n-MAX_OPEN_FILES=65535\n+MAX_OPEN_FILES=65536\n MAX_MAP_COUNT=262144\n LOG_DIR=\"/var/log/elasticsearch\"\n DATA_DIR=\"/var/lib/elasticsearch\"", "filename": "distribution/rpm/src/main/packaging/init.d/elasticsearch", "status": "modified" }, { "diff": "@@ -60,7 +60,7 @@ ES_STARTUP_SLEEP_TIME=5\n # Specifies the maximum file descriptor number that can be opened by this process\n # When using Systemd, this setting is ignored and the LimitNOFILE defined in\n # /usr/lib/systemd/system/elasticsearch.service takes precedence\n-#MAX_OPEN_FILES=65535\n+#MAX_OPEN_FILES=65536\n \n # The maximum number of bytes of memory that may be locked into RAM\n # Set to \"unlimited\" if you use the 'bootstrap.mlockall: true' option", "filename": "distribution/src/main/packaging/env/elasticsearch", "status": "modified" }, { "diff": "@@ -29,7 +29,7 @@ StandardOutput=journal\n StandardError=inherit\n \n # Specifies the maximum file descriptor number that can be opened by this process\n-LimitNOFILE=65535\n+LimitNOFILE=65536\n \n # Specifies the maximum number of bytes of memory that may be locked into RAM\n # Set to \"infinity\" if you use the 'bootstrap.mlockall: true' option", "filename": "distribution/src/main/packaging/systemd/elasticsearch.service", "status": "modified" }, { "diff": "@@ -16,7 +16,7 @@ Each package features a configuration file, which allows you to set the followin\n `ES_HEAP_SIZE`:: The heap size to start with\n `ES_HEAP_NEWSIZE`:: The size of the new generation heap\n `ES_DIRECT_SIZE`:: The maximum size of the direct memory\n-`MAX_OPEN_FILES`:: Maximum number of open files, defaults to `65535`\n+`MAX_OPEN_FILES`:: Maximum number of open files, defaults to `65536`\n `MAX_LOCKED_MEMORY`:: Maximum locked memory size. Set to \"unlimited\" if you use the bootstrap.mlockall option in elasticsearch.yml. You must also set ES_HEAP_SIZE.\n `MAX_MAP_COUNT`:: Maximum number of memory map areas a process may have. If you use `mmapfs` as index store type, make sure this is set to a high value. For more information, check the https://github.com/torvalds/linux/blob/master/Documentation/sysctl/vm.txt[linux kernel documentation] about `max_map_count`. This is set via `sysctl` before starting elasticsearch. Defaults to `65535`\n `LOG_DIR`:: Log directory, defaults to `/var/log/elasticsearch`", "filename": "docs/reference/setup/as-a-service.asciidoc", "status": "modified" } ] }
{ "body": "I tested this both from Kibana and using the HTTP API of elasticsearch to check that is a problem of elasticsearch instead of Kibana:\r\n\r\nThis request:\r\n\r\n```\r\ncurl -XPOST \"127.0.0.1:9200/my_index_*/_search?pretty\" -d '{\r\n \"aggs\" : {\r\n \"grades_stats\" : { \"extended_stats\" : { \"field\" : \"my_field\" } }\r\n }\r\n}'\r\n```\r\n\r\nReturns this\r\n\r\n```\r\n\"aggregations\" : {\r\n \"grades_stats\" : {\r\n \"count\" : 24526,\r\n \"min\" : 0.0,\r\n \"max\" : 5545.0,\r\n \"avg\" : 108.78504444263231,\r\n \"sum\" : 2668062.0,\r\n \"sum_of_squares\" : 1.461725356E9,\r\n \"variance\" : 47764.825603616635,\r\n \"std_deviation\" : 218.5516543145273,\r\n \"std_deviation_bounds\" : {\r\n \"upper\" : 108.78504444263231,\r\n \"lower\" : 108.78504444263231\r\n }\r\n }\r\n }\r\n```\r\n\r\nWith a wrong **std_deviation_bounds**. Only 24526 documents from index my_index_docs_with_my_field have the field my_field so it should does not matter that I just use my_index_\\* instead of my_index_docs_with_my_field. In fact, the next request, specifying the index my_index_docs_with_my_field returns the same count of documents, 24526 and the same variance and std_deviation but with different std_deviation_bounds. These std_deviation_bounds make sense because are equal to avg+/-2*std_deviation while the first request seems wrong.\r\n\r\nThe second request:\r\n\r\n```\r\ncurl -XPOST \"127.0.0.1:9200/my_index_docs_with_my_field/_search?pretty\" -d '{\r\n \"aggs\" : {\r\n \"grades_stats\" : { \"extended_stats\" : { \"field\" : \"my_field\" } }\r\n }\r\n}'\r\n```\r\n\r\nreturns\r\n\r\n```\r\n\"aggregations\" : {\r\n \"grades_stats\" : {\r\n \"count\" : 24526,\r\n \"min\" : 0.0,\r\n \"max\" : 5545.0,\r\n \"avg\" : 108.78504444263231,\r\n \"sum\" : 2668062.0,\r\n \"sum_of_squares\" : 1.461725356E9,\r\n \"variance\" : 47764.825603616635,\r\n \"std_deviation\" : 218.5516543145273,\r\n \"std_deviation_bounds\" : {\r\n \"upper\" : 545.8883530716869,\r\n \"lower\" : -328.3182641864223\r\n }\r\n }\r\n }\r\n```\r\n\r\nIs this behaviour normal? If so, why the first std_deviation_bounds is not equal to avg+/-2*std_deviation?\r\n", "comments": [ { "body": "This is a bug indeed. When some requested indices do not have mappings for the requested field, a parameter that is used to compute this bounds (sigma) is not propagated correctly.\n", "created_at": "2016-03-29T17:08:28Z" } ], "number": 17362, "title": "Different extended stats with apparently same query" }
{ "body": "Because sigma is also used at reduce time, it should be passed to empty aggs.\nOtherwise it causes bugs when an empty aggregation is used to perform reduction\nis it would assume a sigma of zero.\n\nCloses #17362\n", "number": 17388, "review_comments": [ { "body": "Should we really throw an error here at runtime? I thought Errors should be considered as irrecoverable? Should this instead be an IllegalStateException, or maybe an assert (if we only care about checking in tests)?\n", "created_at": "2016-04-06T09:05:00Z" }, { "body": "I will make it an IllegalStateException.\n", "created_at": "2016-04-06T09:07:00Z" } ], "title": "ExtendedStatsAggregator should also pass sigma to emtpy aggs." }
{ "commits": [ { "message": "ExtendedStatsAggregator should also pass sigma to emtpy aggs. #17388\n\nBecause sigma is also used at reduce time, it should be passed to empty aggs.\nOtherwise it causes bugs when an empty aggregation is used to perform reduction\nis it would assume a sigma of zero.\n\nCloses #17362" } ], "files": [ { "diff": "@@ -193,7 +193,7 @@ public InternalAggregation buildAggregation(long bucket) {\n \n @Override\n public InternalAggregation buildEmptyAggregation() {\n- return new InternalExtendedStats(name, 0, 0d, Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY, 0d, 0d, formatter, pipelineAggregators(),\n+ return new InternalExtendedStats(name, 0, 0d, Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY, 0d, sigma, formatter, pipelineAggregators(),\n metaData());\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregator.java", "status": "modified" }, { "diff": "@@ -148,6 +148,9 @@ public InternalExtendedStats doReduce(List<InternalAggregation> aggregations, Re\n double sumOfSqrs = 0;\n for (InternalAggregation aggregation : aggregations) {\n InternalExtendedStats stats = (InternalExtendedStats) aggregation;\n+ if (stats.sigma != sigma) {\n+ throw new IllegalStateException(\"Cannot reduce other stats aggregations that have a different sigma\");\n+ }\n sumOfSqrs += stats.getSumOfSquares();\n }\n final InternalStats stats = super.doReduce(aggregations, reduceContext);", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/InternalExtendedStats.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n package org.elasticsearch.messy.tests;\n \n import org.elasticsearch.action.search.SearchResponse;\n-import org.elasticsearch.action.search.ShardSearchFailure;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService.ScriptType;\n@@ -129,6 +128,24 @@ public void testUnmapped() throws Exception {\n assertThat(Double.isNaN(stats.getStdDeviationBound(ExtendedStats.Bounds.LOWER)), is(true));\n }\n \n+ public void testPartiallyUnmapped() {\n+ double sigma = randomDouble() * 5;\n+ ExtendedStats s1 = client().prepareSearch(\"idx\")\n+ .addAggregation(extendedStats(\"stats\").field(\"value\").sigma(sigma)).get()\n+ .getAggregations().get(\"stats\");\n+ ExtendedStats s2 = client().prepareSearch(\"idx\", \"idx_unmapped\")\n+ .addAggregation(extendedStats(\"stats\").field(\"value\").sigma(sigma)).get()\n+ .getAggregations().get(\"stats\");\n+ assertEquals(s1.getAvg(), s2.getAvg(), 1e-10);\n+ assertEquals(s1.getCount(), s2.getCount());\n+ assertEquals(s1.getMin(), s2.getMin(), 0d);\n+ assertEquals(s1.getMax(), s2.getMax(), 0d);\n+ assertEquals(s1.getStdDeviation(), s2.getStdDeviation(), 1e-10);\n+ assertEquals(s1.getSumOfSquares(), s2.getSumOfSquares(), 1e-10);\n+ assertEquals(s1.getStdDeviationBound(Bounds.LOWER), s2.getStdDeviationBound(Bounds.LOWER), 1e-10);\n+ assertEquals(s1.getStdDeviationBound(Bounds.UPPER), s2.getStdDeviationBound(Bounds.UPPER), 1e-10);\n+ }\n+\n @Override\n public void testSingleValuedField() throws Exception {\n double sigma = randomDouble() * randomIntBetween(1, 10);\n@@ -584,17 +601,6 @@ public void testOrderByEmptyAggregation() throws Exception {\n }\n }\n \n- private void assertShardExecutionState(SearchResponse response, int expectedFailures) throws Exception {\n- ShardSearchFailure[] failures = response.getShardFailures();\n- if (failures.length != expectedFailures) {\n- 
for (ShardSearchFailure failure : failures) {\n- logger.error(\"Shard Failure: {}\", failure.getCause(), failure);\n- }\n- fail(\"Unexpected shard failures!\");\n- }\n- assertThat(\"Not all shards are initialized\", response.getSuccessfulShards(), equalTo(response.getTotalShards()));\n- }\n-\n private void checkUpperLowerBounds(ExtendedStats stats, double sigma) {\n assertThat(stats.getStdDeviationBound(ExtendedStats.Bounds.UPPER), equalTo(stats.getAvg() + (stats.getStdDeviation() * sigma)));\n assertThat(stats.getStdDeviationBound(ExtendedStats.Bounds.LOWER), equalTo(stats.getAvg() - (stats.getStdDeviation() * sigma)));", "filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/ExtendedStatsTests.java", "status": "modified" }, { "diff": "@@ -31,6 +31,8 @@\n import org.elasticsearch.search.aggregations.bucket.terms.Terms.Order;\n import org.elasticsearch.search.aggregations.metrics.AbstractNumericTestCase;\n import org.elasticsearch.search.aggregations.metrics.stats.Stats;\n+import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats;\n+import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats.Bounds;\n \n import java.util.Collection;\n import java.util.Collections;\n@@ -40,6 +42,7 @@\n \n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.extendedStats;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.global;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n@@ -106,6 +109,19 @@ public void testUnmapped() throws Exception {\n assertThat(stats.getCount(), equalTo(0L));\n }\n \n+ public void testPartiallyUnmapped() {\n+ Stats s1 = client().prepareSearch(\"idx\")\n+ .addAggregation(stats(\"stats\").field(\"value\")).get()\n+ .getAggregations().get(\"stats\");\n+ ExtendedStats s2 = client().prepareSearch(\"idx\", \"idx_unmapped\")\n+ .addAggregation(stats(\"stats\").field(\"value\")).get()\n+ .getAggregations().get(\"stats\");\n+ assertEquals(s1.getAvg(), s2.getAvg(), 1e-10);\n+ assertEquals(s1.getCount(), s2.getCount());\n+ assertEquals(s1.getMin(), s2.getMin(), 0d);\n+ assertEquals(s1.getMax(), s2.getMax(), 0d);\n+ }\n+\n @Override\n public void testSingleValuedField() throws Exception {\n SearchResponse searchResponse = client().prepareSearch(\"idx\")", "filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/StatsTests.java", "status": "modified" } ] }
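As the fix and its tests above show, the bounds are computed as `avg ± sigma * std_deviation`, so an empty per-index aggregation that silently contributes `sigma = 0` at reduce time collapses both bounds onto the average. The standalone arithmetic sketch below (not Elasticsearch's `InternalExtendedStats`) reproduces the two responses from the original report in #17362.

```java
// Standalone illustration of how std_deviation_bounds depend on sigma;
// this is not Elasticsearch's InternalExtendedStats implementation.
public class SigmaBoundsSketch {

    static double[] bounds(double avg, double stdDeviation, double sigma) {
        // upper = avg + sigma * std_deviation, lower = avg - sigma * std_deviation
        return new double[] { avg + sigma * stdDeviation, avg - sigma * stdDeviation };
    }

    public static void main(String[] args) {
        double avg = 108.78504444263231;
        double stdDeviation = 218.5516543145273;

        double[] correct = bounds(avg, stdDeviation, 2.0); // requested sigma (default 2)
        double[] broken = bounds(avg, stdDeviation, 0.0);  // sigma lost via an empty agg

        System.out.printf("sigma=2: upper=%.6f lower=%.6f%n", correct[0], correct[1]);
        // upper=545.888353 lower=-328.318264 (matches the single-index response)
        System.out.printf("sigma=0: upper=%.6f lower=%.6f%n", broken[0], broken[1]);
        // both bounds equal avg, as seen in the multi-index response
    }
}
```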
{ "body": "**Observations**\n- ES version 2.2.1\n- If number of documents indexed is 2 , there is no error or issue\n- With following recreation , I am seeing array_index_out_of_bounds_exception exception \n\n```\ncurl -XDELETE 'http://localhost:9200/vm/'\necho\ncurl -XPOST 'http://localhost:9200/vm/vm' -d '{ \"name\" : \"vm\" , \"rank\" : 2 , \"date\" : 2000 }'\necho\ncurl -XPOST 'http://localhost:9200/vm/vm' -d '{ \"name\" : \"vm1\" , \"rank\" : 2 , \"date\" : 2000 }'\necho\ncurl -XPOST 'http://localhost:9200/vm/vm' -d '{ \"name\" : \"vm2\" , \"rank\" : 2 , \"date\" : 2000 }'\necho\ncurl -XPOST 'http://localhost:9200/vm/vm' -d '{ \"name\" : \"vm3\" , \"rank\" : 2 , \"date\" : 2000 }'\necho\ncurl -XPOST 'http://localhost:9200/vm/vm' -d '{ \"name\" : \"vm4\" , \"rank\" : 2 , \"date\" : 2000 }'\necho\ncurl -XPOST 'http://localhost:9200/vm/vm' -d '{ \"name\" : \"vm5\" , \"rank\" : 2 , \"date\" : 2000 }'\necho\ncurl -XPOST 'http://localhost:9200/vm/vm' -d '{ \"name\" : \"vm6\" , \"rank\" : 2 , \"date\" : 2000 }'\necho\ncurl -XPOST 'http://localhost:9200/vm/vm' -d '{ \"name\" : \"vm7\" , \"rank\" : 2 , \"date\" : 2000 }'\necho\ncurl -XPOST 'http://localhost:9200/vm/vm' -d '{ \"name\" : \"vm8\" , \"rank\" : 2 , \"date\" : 2000 }'\necho\ncurl -XPOST 'http://localhost:9200/vm/vm' -d '{ \"name\" : \"vm9\" , \"rank\" : 2 , \"date\" : 2000 }'\necho\ncurl -XPOST 'http://localhost:9200/vm/vm' -d '{ \"name\" : \"vm10\" , \"rank\" : 2 , \"date\" : 2000 }'\necho\ncurl -XPOST 'http://localhost:9200/vm/vm' -d '{ \"name\" : \"vm11\" , \"rank\" : 2 , \"date\" : 2000 }'\necho\ncurl -XPOST 'http://localhost:9200/vm/_refresh'\necho\ncurl -XPOST 'http://localhost:9200/vm/vm/_search?pretty' -d '{\n \"size\": 0,\n \"aggs\": {\n \"keywords\": {\n \"terms\": {\n \"collect_mode\": \"breadth_first\",\n \"field\": \"name\",\n \"size\": 1,\n \"order\": {\n \"dateB>rankAvg\": \"asc\"\n }\n },\n \"aggs\": {\n \"dateB\": {\n \"filter\": {\n \"term\": {\n \"date\": 2001\n }\n },\n \"aggs\": {\n \"rankAvg\": {\n \"min\": {\n \"field\": \"rank\",\n \"missing\": 1000\n }\n }\n }\n }\n }\n }\n }\n}'\n```\n\nResponse\n\n```\n{\n \"took\" : 13,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 3,\n \"failed\" : 2,\n \"failures\" : [ {\n \"shard\" : 2,\n \"index\" : \"vm\",\n \"node\" : \"xpPiHlvJQjSgYu-L10WJuA\",\n \"reason\" : {\n \"type\" : \"array_index_out_of_bounds_exception\",\n \"reason\" : \"1\"\n }\n } ]\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 0.0,\n \"hits\" : [ ]\n },\n \"aggregations\" : {\n \"keywords\" : {\n \"doc_count_error_upper_bound\" : 0,\n \"sum_other_doc_count\" : 1,\n \"buckets\" : [ {\n \"key\" : \"vm10\",\n \"doc_count\" : 1,\n \"dateB\" : {\n \"doc_count\" : 0,\n \"rankAvg\" : {\n \"value\" : null\n }\n }\n } ]\n }\n }\n}\n```\n\nLogs emitted\n\n```\n[2016-03-21 21:53:55,238][DEBUG][action.search.type ] [Gertrude Yorkes] [vm][4], node[pkRBZKPTStiOaYYr4mH_-w], [P], v[2], s[STARTED], a[id=Tf6t7fTWRz-FSvU30ln-mA]: Failed to execute [org.elasticsearch.action.search.SearchRequest@a062c99] lastShard [true]\nRemoteTransportException[[Gertrude Yorkes][127.0.0.1:9300][indices:data/read/search[phase/query]]]; nested: ArrayIndexOutOfBoundsException[1];\nCaused by: java.lang.ArrayIndexOutOfBoundsException: 1\n at org.elasticsearch.common.util.BigArrays$DoubleArrayWrapper.get(BigArrays.java:260)\n at org.elasticsearch.search.aggregations.metrics.min.MinAggregator.metric(MinAggregator.java:100)\n at 
org.elasticsearch.search.aggregations.bucket.terms.InternalOrder$Aggregation$3.compare(InternalOrder.java:213)\n at org.elasticsearch.search.aggregations.bucket.terms.InternalOrder$Aggregation$3.compare(InternalOrder.java:210)\n at org.elasticsearch.search.aggregations.bucket.terms.InternalOrder$CompoundOrder$CompoundOrderComparator.compare(InternalOrder.java:280)\n at org.elasticsearch.search.aggregations.bucket.terms.InternalOrder$CompoundOrder$CompoundOrderComparator.compare(InternalOrder.java:266)\n at org.elasticsearch.search.aggregations.bucket.terms.support.BucketPriorityQueue.lessThan(BucketPriorityQueue.java:37)\n at org.elasticsearch.search.aggregations.bucket.terms.support.BucketPriorityQueue.lessThan(BucketPriorityQueue.java:26)\n at org.apache.lucene.util.PriorityQueue.upHeap(PriorityQueue.java:258)\n at org.apache.lucene.util.PriorityQueue.add(PriorityQueue.java:135)\n at org.apache.lucene.util.PriorityQueue.insertWithOverflow(PriorityQueue.java:151)\n at org.elasticsearch.search.aggregations.bucket.terms.GlobalOrdinalsStringTermsAggregator.buildAggregation(GlobalOrdinalsStringTermsAggregator.java:176)\n at org.elasticsearch.search.aggregations.AggregationPhase.execute(AggregationPhase.java:167)\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:119)\n at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:364)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:376)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n\n```\n", "comments": [ { "body": "@colings86 could you take a look at this please\n", "created_at": "2016-03-25T13:43:49Z" }, { "body": "Thanks guys. \n", "created_at": "2016-04-07T02:44:04Z" } ], "number": 17225, "title": "Getting array_index_out_of_bounds_exception while using order_by" }
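The stack trace above points at `MinAggregator.metric` reading its BigArrays-backed values with a bucket ordinal it never collected. The snippet below is a tiny stand-in (a plain `double[]` and hypothetical method names instead of the real aggregator) that reproduces the same failure mode: the terms ordering comparator asks for ordinal 1 while the nested min aggregator only ever grew its array to one slot, because its parent filter matched no documents in that term bucket.

```java
// Tiny stand-in for the pre-fix shape of MinAggregator#metric: no bound check on the
// owning bucket ordinal, so ordering by an uncollected bucket blows up.
public class UncollectedOrdinalSketch {

    static double metricWithoutGuard(double[] mins, long owningBucketOrd) {
        // ArrayIndexOutOfBoundsException whenever owningBucketOrd >= mins.length
        return mins[(int) owningBucketOrd];
    }

    public static void main(String[] args) {
        // The filter sub-aggregation only matched documents for term bucket 0,
        // so the min aggregator only ever allocated one slot.
        double[] mins = { 2.0 };

        // The terms aggregation, however, compares all of its buckets, including ordinal 1.
        try {
            metricWithoutGuard(mins, 1);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("same failure as the report: " + e);
        }
    }
}
```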
{ "body": "If a terms aggregation was ordered by a metric nested in a single bucket aggregator which did not collect any documents (e.g. a filters aggregation which did not match in that term bucket) an ArrayOutOfBoundsException would be thrown when the ordering code tried to retrieve the value for the metric. This fix fixes all numeric metric aggregators so they return their default value when a bucket ordinal is requested which was not collected.\n\nCloses #17225\n", "number": 17379, "review_comments": [], "title": "Prevents exception being raised when ordering by an aggregation which wasn't collected" }
{ "commits": [ { "message": "Prevents exception being raised when ordering by an aggregation which wasn't collected\n\nIf a terms aggregation was ordered by a metric nested in a single bucket aggregator which did not collect any documents (e.g. a filters aggregation which did not match in that term bucket) an ArrayOutOfBoundsException would be thrown when the ordering code tried to retrieve the value for the metric. This fix fixes all numeric metric aggregators so they return their default value when a bucket ordinal is requested which was not collected.\n\nCloses #17225" } ], "files": [ { "diff": "@@ -94,7 +94,10 @@ public void collect(int doc, long bucket) throws IOException {\n \n @Override\n public double metric(long owningBucketOrd) {\n- return valuesSource == null ? Double.NaN : sums.get(owningBucketOrd) / counts.get(owningBucketOrd);\n+ if (valuesSource == null || owningBucketOrd >= sums.size()) {\n+ return Double.NaN;\n+ }\n+ return sums.get(owningBucketOrd) / counts.get(owningBucketOrd);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregator.java", "status": "modified" }, { "diff": "@@ -96,7 +96,10 @@ public void collect(int doc, long bucket) throws IOException {\n \n @Override\n public double metric(long owningBucketOrd) {\n- return valuesSource == null ? Double.NEGATIVE_INFINITY : maxes.get(owningBucketOrd);\n+ if (valuesSource == null || owningBucketOrd >= maxes.size()) {\n+ return Double.NEGATIVE_INFINITY;\n+ }\n+ return maxes.get(owningBucketOrd);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregator.java", "status": "modified" }, { "diff": "@@ -95,7 +95,10 @@ public void collect(int doc, long bucket) throws IOException {\n \n @Override\n public double metric(long owningBucketOrd) {\n- return valuesSource == null ? Double.POSITIVE_INFINITY : mins.get(owningBucketOrd);\n+ if (valuesSource == null || owningBucketOrd >= mins.size()) {\n+ return Double.POSITIVE_INFINITY;\n+ }\n+ return mins.get(owningBucketOrd);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregator.java", "status": "modified" }, { "diff": "@@ -128,12 +128,23 @@ public boolean hasMetric(String name) {\n \n @Override\n public double metric(String name, long owningBucketOrd) {\n+ if (valuesSource == null || owningBucketOrd >= counts.size()) {\n+ switch(InternalStats.Metrics.resolve(name)) {\n+ case count: return 0;\n+ case sum: return 0;\n+ case min: return Double.POSITIVE_INFINITY;\n+ case max: return Double.NEGATIVE_INFINITY;\n+ case avg: return Double.NaN;\n+ default:\n+ throw new IllegalArgumentException(\"Unknown value [\" + name + \"] in common stats aggregation\");\n+ }\n+ }\n switch(InternalStats.Metrics.resolve(name)) {\n- case count: return valuesSource == null ? 0 : counts.get(owningBucketOrd);\n- case sum: return valuesSource == null ? 0 : sums.get(owningBucketOrd);\n- case min: return valuesSource == null ? Double.POSITIVE_INFINITY : mins.get(owningBucketOrd);\n- case max: return valuesSource == null ? Double.NEGATIVE_INFINITY : maxes.get(owningBucketOrd);\n- case avg: return valuesSource == null ? 
Double.NaN : sums.get(owningBucketOrd) / counts.get(owningBucketOrd);\n+ case count: return counts.get(owningBucketOrd);\n+ case sum: return sums.get(owningBucketOrd);\n+ case min: return mins.get(owningBucketOrd);\n+ case max: return maxes.get(owningBucketOrd);\n+ case avg: return sums.get(owningBucketOrd) / counts.get(owningBucketOrd);\n default:\n throw new IllegalArgumentException(\"Unknown value [\" + name + \"] in common stats aggregation\");\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggregator.java", "status": "modified" }, { "diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.search.aggregations.LeafBucketCollector;\n import org.elasticsearch.search.aggregations.LeafBucketCollectorBase;\n import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator;\n+import org.elasticsearch.search.aggregations.metrics.stats.InternalStats;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.aggregations.support.AggregationContext;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n@@ -140,20 +141,34 @@ public boolean hasMetric(String name) {\n \n @Override\n public double metric(String name, long owningBucketOrd) {\n+ if (valuesSource == null || owningBucketOrd >= counts.size()) {\n+ switch(InternalExtendedStats.Metrics.resolve(name)) {\n+ case count: return 0;\n+ case sum: return 0;\n+ case min: return Double.POSITIVE_INFINITY;\n+ case max: return Double.NEGATIVE_INFINITY;\n+ case avg: return Double.NaN;\n+ case sum_of_squares: return 0;\n+ case variance: return Double.NaN;\n+ case std_deviation: return Double.NaN;\n+ case std_upper: return Double.NaN;\n+ case std_lower: return Double.NaN;\n+ default:\n+ throw new IllegalArgumentException(\"Unknown value [\" + name + \"] in common stats aggregation\");\n+ }\n+ }\n switch(InternalExtendedStats.Metrics.resolve(name)) {\n- case count: return valuesSource == null ? 0 : counts.get(owningBucketOrd);\n- case sum: return valuesSource == null ? 0 : sums.get(owningBucketOrd);\n- case min: return valuesSource == null ? Double.POSITIVE_INFINITY : mins.get(owningBucketOrd);\n- case max: return valuesSource == null ? Double.NEGATIVE_INFINITY : maxes.get(owningBucketOrd);\n- case avg: return valuesSource == null ? Double.NaN : sums.get(owningBucketOrd) / counts.get(owningBucketOrd);\n- case sum_of_squares: return valuesSource == null ? 0 : sumOfSqrs.get(owningBucketOrd);\n- case variance: return valuesSource == null ? Double.NaN : variance(owningBucketOrd);\n- case std_deviation: return valuesSource == null ? 
Double.NaN : Math.sqrt(variance(owningBucketOrd));\n+ case count: return counts.get(owningBucketOrd);\n+ case sum: return sums.get(owningBucketOrd);\n+ case min: return mins.get(owningBucketOrd);\n+ case max: return maxes.get(owningBucketOrd);\n+ case avg: return sums.get(owningBucketOrd) / counts.get(owningBucketOrd);\n+ case sum_of_squares: return sumOfSqrs.get(owningBucketOrd);\n+ case variance: return variance(owningBucketOrd);\n+ case std_deviation: return Math.sqrt(variance(owningBucketOrd));\n case std_upper:\n- if (valuesSource == null) { return Double.NaN; }\n return (sums.get(owningBucketOrd) / counts.get(owningBucketOrd)) + (Math.sqrt(variance(owningBucketOrd)) * this.sigma);\n case std_lower:\n- if (valuesSource == null) { return Double.NaN; }\n return (sums.get(owningBucketOrd) / counts.get(owningBucketOrd)) - (Math.sqrt(variance(owningBucketOrd)) * this.sigma);\n default:\n throw new IllegalArgumentException(\"Unknown value [\" + name + \"] in common stats aggregation\");", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregator.java", "status": "modified" }, { "diff": "@@ -87,7 +87,10 @@ public void collect(int doc, long bucket) throws IOException {\n \n @Override\n public double metric(long owningBucketOrd) {\n- return valuesSource == null ? 0 : sums.get(owningBucketOrd);\n+ if (valuesSource == null || owningBucketOrd >= sums.size()) {\n+ return 0.0;\n+ }\n+ return sums.get(owningBucketOrd);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregator.java", "status": "modified" }, { "diff": "@@ -31,8 +31,11 @@\n import org.elasticsearch.script.ScriptModule;\n import org.elasticsearch.script.ScriptService.ScriptType;\n import org.elasticsearch.script.SearchScript;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.global.Global;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms.Order;\n import org.elasticsearch.search.aggregations.metrics.avg.Avg;\n import org.elasticsearch.search.lookup.LeafSearchLookup;\n import org.elasticsearch.search.lookup.SearchLookup;\n@@ -47,9 +50,12 @@\n import java.util.Map;\n \n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.avg;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.global;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.is;\n@@ -317,6 +323,36 @@ public void testScriptMultiValuedWithParams() throws Exception {\n assertThat(avg.getValue(), equalTo((double) (3+4+4+5+5+6+6+7+7+8+8+9+9+10+10+11+11+12+12+13) / 20));\n }\n \n+ @Override\n+ public void testOrderByEmptyAggregation() throws Exception {\n+ SearchResponse searchResponse = client().prepareSearch(\"idx\").setQuery(matchAllQuery())\n+ 
.addAggregation(terms(\"terms\").field(\"value\").order(Order.compound(Order.aggregation(\"filter>avg\", true)))\n+ .subAggregation(filter(\"filter\", termQuery(\"value\", 100)).subAggregation(avg(\"avg\").field(\"value\"))))\n+ .get();\n+\n+ assertHitCount(searchResponse, 10);\n+\n+ Terms terms = searchResponse.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ List<Terms.Bucket> buckets = terms.getBuckets();\n+ assertThat(buckets, notNullValue());\n+ assertThat(buckets.size(), equalTo(10));\n+\n+ for (int i = 0; i < 10; i++) {\n+ Terms.Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber(), equalTo((long) i + 1));\n+ assertThat(bucket.getDocCount(), equalTo(1L));\n+ Filter filter = bucket.getAggregations().get(\"filter\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getDocCount(), equalTo(0L));\n+ Avg avg = filter.getAggregations().get(\"avg\");\n+ assertThat(avg, notNullValue());\n+ assertThat(avg.value(), equalTo(Double.NaN));\n+\n+ }\n+ }\n+\n /**\n * Mock plugin for the {@link ExtractFieldScriptEngine}\n */", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/metrics/AvgIT.java", "status": "modified" }, { "diff": "@@ -31,8 +31,11 @@\n import org.elasticsearch.script.ScriptModule;\n import org.elasticsearch.script.ScriptService.ScriptType;\n import org.elasticsearch.script.SearchScript;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.global.Global;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms.Order;\n import org.elasticsearch.search.aggregations.metrics.sum.Sum;\n import org.elasticsearch.search.lookup.LeafSearchLookup;\n import org.elasticsearch.search.lookup.SearchLookup;\n@@ -47,9 +50,12 @@\n import java.util.Map;\n \n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.global;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.sum;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.notNullValue;\n@@ -312,6 +318,36 @@ public void testMultiValuedFieldWithValueScriptWithParams() throws Exception {\n assertThat(sum.getValue(), equalTo((double) 2 + 3 + 3 + 4 + 4 + 5 + 5 + 6 + 6 + 7 + 7 + 8 + 8 + 9 + 9 + 10 + 10 + 11 + 11 + 12));\n }\n \n+ @Override\n+ public void testOrderByEmptyAggregation() throws Exception {\n+ SearchResponse searchResponse = client().prepareSearch(\"idx\").setQuery(matchAllQuery())\n+ .addAggregation(terms(\"terms\").field(\"value\").order(Order.compound(Order.aggregation(\"filter>sum\", true)))\n+ .subAggregation(filter(\"filter\", termQuery(\"value\", 100)).subAggregation(sum(\"sum\").field(\"value\"))))\n+ .get();\n+\n+ assertHitCount(searchResponse, 10);\n+\n+ Terms terms = searchResponse.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ 
List<Terms.Bucket> buckets = terms.getBuckets();\n+ assertThat(buckets, notNullValue());\n+ assertThat(buckets.size(), equalTo(10));\n+\n+ for (int i = 0; i < 10; i++) {\n+ Terms.Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber(), equalTo((long) i + 1));\n+ assertThat(bucket.getDocCount(), equalTo(1L));\n+ Filter filter = bucket.getAggregations().get(\"filter\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getDocCount(), equalTo(0L));\n+ Sum sum = filter.getAggregations().get(\"sum\");\n+ assertThat(sum, notNullValue());\n+ assertThat(sum.value(), equalTo(0.0));\n+\n+ }\n+ }\n+\n /**\n * Mock plugin for the {@link ExtractFieldScriptEngine}\n */", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/metrics/SumIT.java", "status": "modified" }, { "diff": "@@ -24,20 +24,26 @@\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService.ScriptType;\n import org.elasticsearch.script.groovy.GroovyPlugin;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.global.Global;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import org.elasticsearch.search.aggregations.bucket.missing.Missing;\n import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms.Order;\n import org.elasticsearch.search.aggregations.metrics.AbstractNumericTestCase;\n import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats;\n+import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats.Bounds;\n \n import java.util.Collection;\n import java.util.Collections;\n import java.util.HashMap;\n+import java.util.List;\n import java.util.Map;\n \n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.extendedStats;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.global;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.missing;\n@@ -538,6 +544,45 @@ public void testEmptySubAggregation() {\n }\n }\n \n+ @Override\n+ public void testOrderByEmptyAggregation() throws Exception {\n+ SearchResponse searchResponse = client().prepareSearch(\"idx\").setQuery(matchAllQuery())\n+ .addAggregation(terms(\"terms\").field(\"value\").order(Order.compound(Order.aggregation(\"filter>extendedStats.avg\", true)))\n+ .subAggregation(\n+ filter(\"filter\", termQuery(\"value\", 100)).subAggregation(extendedStats(\"extendedStats\").field(\"value\"))))\n+ .get();\n+\n+ assertHitCount(searchResponse, 10);\n+\n+ Terms terms = searchResponse.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ List<Terms.Bucket> buckets = terms.getBuckets();\n+ assertThat(buckets, notNullValue());\n+ assertThat(buckets.size(), equalTo(10));\n+\n+ for (int i = 0; i < 10; i++) {\n+ Terms.Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber(), equalTo((long) i + 1));\n+ assertThat(bucket.getDocCount(), equalTo(1L));\n+ Filter filter = bucket.getAggregations().get(\"filter\");\n+ assertThat(filter, 
notNullValue());\n+ assertThat(filter.getDocCount(), equalTo(0L));\n+ ExtendedStats extendedStats = filter.getAggregations().get(\"extendedStats\");\n+ assertThat(extendedStats, notNullValue());\n+ assertThat(extendedStats.getMin(), equalTo(Double.POSITIVE_INFINITY));\n+ assertThat(extendedStats.getMax(), equalTo(Double.NEGATIVE_INFINITY));\n+ assertThat(extendedStats.getAvg(), equalTo(Double.NaN));\n+ assertThat(extendedStats.getSum(), equalTo(0.0));\n+ assertThat(extendedStats.getCount(), equalTo(0L));\n+ assertThat(extendedStats.getStdDeviation(), equalTo(Double.NaN));\n+ assertThat(extendedStats.getSumOfSquares(), equalTo(0.0));\n+ assertThat(extendedStats.getVariance(), equalTo(Double.NaN));\n+ assertThat(extendedStats.getStdDeviationBound(Bounds.LOWER), equalTo(Double.NaN));\n+ assertThat(extendedStats.getStdDeviationBound(Bounds.UPPER), equalTo(Double.NaN));\n+\n+ }\n+ }\n \n private void assertShardExecutionState(SearchResponse response, int expectedFailures) throws Exception {\n ShardSearchFailure[] failures = response.getShardFailures();", "filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/ExtendedStatsTests.java", "status": "modified" }, { "diff": "@@ -24,9 +24,11 @@\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService.ScriptType;\n import org.elasticsearch.script.groovy.GroovyPlugin;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.global.Global;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram.Order;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n import org.elasticsearch.search.aggregations.metrics.AbstractNumericTestCase;\n import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile;\n import org.elasticsearch.search.aggregations.metrics.percentiles.PercentileRanks;\n@@ -41,9 +43,12 @@\n \n import static org.elasticsearch.common.util.CollectionUtils.iterableAsArrayList;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.global;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.percentileRanks;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n@@ -463,4 +468,35 @@ public void testOrderBySubAggregation() {\n }\n }\n \n+ @Override\n+ public void testOrderByEmptyAggregation() throws Exception {\n+ SearchResponse searchResponse = client().prepareSearch(\"idx\").setQuery(matchAllQuery())\n+ .addAggregation(terms(\"terms\").field(\"value\").order(Terms.Order.compound(Terms.Order.aggregation(\"filter>ranks.99\", true)))\n+ .subAggregation(filter(\"filter\", termQuery(\"value\", 100))\n+ .subAggregation(percentileRanks(\"ranks\").method(PercentilesMethod.HDR).values(99).field(\"value\"))))\n+ .get();\n+\n+ assertHitCount(searchResponse, 10);\n+\n+ Terms terms = searchResponse.getAggregations().get(\"terms\");\n+ assertThat(terms, 
notNullValue());\n+ List<Terms.Bucket> buckets = terms.getBuckets();\n+ assertThat(buckets, notNullValue());\n+ assertThat(buckets.size(), equalTo(10));\n+\n+ for (int i = 0; i < 10; i++) {\n+ Terms.Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber(), equalTo((long) i + 1));\n+ assertThat(bucket.getDocCount(), equalTo(1L));\n+ Filter filter = bucket.getAggregations().get(\"filter\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getDocCount(), equalTo(0L));\n+ PercentileRanks ranks = filter.getAggregations().get(\"ranks\");\n+ assertThat(ranks, notNullValue());\n+ assertThat(ranks.percent(99), equalTo(Double.NaN));\n+\n+ }\n+ }\n+\n }", "filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/HDRPercentileRanksTests.java", "status": "modified" }, { "diff": "@@ -25,9 +25,11 @@\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService.ScriptType;\n import org.elasticsearch.script.groovy.GroovyPlugin;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.global.Global;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram.Order;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n import org.elasticsearch.search.aggregations.metrics.AbstractNumericTestCase;\n import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile;\n import org.elasticsearch.search.aggregations.metrics.percentiles.Percentiles;\n@@ -41,9 +43,12 @@\n import java.util.Map;\n \n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.global;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.percentiles;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.closeTo;\n import static org.hamcrest.Matchers.equalTo;\n@@ -443,4 +448,37 @@ public void testOrderBySubAggregation() {\n }\n }\n \n+ @Override\n+ public void testOrderByEmptyAggregation() throws Exception {\n+ SearchResponse searchResponse = client().prepareSearch(\"idx\")\n+ .setQuery(matchAllQuery())\n+ .addAggregation(\n+ terms(\"terms\").field(\"value\").order(Terms.Order.compound(Terms.Order.aggregation(\"filter>percentiles.99\", true)))\n+ .subAggregation(filter(\"filter\", termQuery(\"value\", 100))\n+ .subAggregation(percentiles(\"percentiles\").method(PercentilesMethod.HDR).field(\"value\"))))\n+ .get();\n+\n+ assertHitCount(searchResponse, 10);\n+\n+ Terms terms = searchResponse.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ List<Terms.Bucket> buckets = terms.getBuckets();\n+ assertThat(buckets, notNullValue());\n+ assertThat(buckets.size(), equalTo(10));\n+\n+ for (int i = 0; i < 10; i++) {\n+ Terms.Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber(), equalTo((long) i + 1));\n+ assertThat(bucket.getDocCount(), equalTo(1L));\n+ Filter filter = 
bucket.getAggregations().get(\"filter\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getDocCount(), equalTo(0L));\n+ Percentiles percentiles = filter.getAggregations().get(\"percentiles\");\n+ assertThat(percentiles, notNullValue());\n+ assertThat(percentiles.percentile(99), equalTo(Double.NaN));\n+\n+ }\n+ }\n+\n }\n\\ No newline at end of file", "filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/HDRPercentilesTests.java", "status": "modified" }, { "diff": "@@ -23,20 +23,26 @@\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService.ScriptType;\n import org.elasticsearch.script.groovy.GroovyPlugin;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.global.Global;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms.Order;\n import org.elasticsearch.search.aggregations.metrics.AbstractNumericTestCase;\n import org.elasticsearch.search.aggregations.metrics.max.Max;\n-\n import java.util.Collection;\n import java.util.Collections;\n import java.util.HashMap;\n+import java.util.List;\n import java.util.Map;\n \n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.global;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.max;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.notNullValue;\n@@ -293,4 +299,34 @@ public void testScriptMultiValuedWithParams() throws Exception {\n assertThat(max.getName(), equalTo(\"max\"));\n assertThat(max.getValue(), equalTo(11.0));\n }\n+\n+ @Override\n+ public void testOrderByEmptyAggregation() throws Exception {\n+ SearchResponse searchResponse = client().prepareSearch(\"idx\").setQuery(matchAllQuery())\n+ .addAggregation(terms(\"terms\").field(\"value\").order(Order.compound(Order.aggregation(\"filter>max\", true)))\n+ .subAggregation(filter(\"filter\", termQuery(\"value\", 100)).subAggregation(max(\"max\").field(\"value\"))))\n+ .get();\n+\n+ assertHitCount(searchResponse, 10);\n+\n+ Terms terms = searchResponse.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ List<Terms.Bucket> buckets = terms.getBuckets();\n+ assertThat(buckets, notNullValue());\n+ assertThat(buckets.size(), equalTo(10));\n+\n+ for (int i = 0; i < 10; i++) {\n+ Terms.Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber(), equalTo((long) i + 1));\n+ assertThat(bucket.getDocCount(), equalTo(1L));\n+ Filter filter = bucket.getAggregations().get(\"filter\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getDocCount(), equalTo(0L));\n+ Max max = filter.getAggregations().get(\"max\");\n+ assertThat(max, notNullValue());\n+ assertThat(max.value(), equalTo(Double.NEGATIVE_INFINITY));\n+\n+ }\n+ }\n }\n\\ No newline at end of file", "filename": 
"modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/MaxTests.java", "status": "modified" }, { "diff": "@@ -23,20 +23,27 @@\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService.ScriptType;\n import org.elasticsearch.script.groovy.GroovyPlugin;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.global.Global;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms.Order;\n import org.elasticsearch.search.aggregations.metrics.AbstractNumericTestCase;\n import org.elasticsearch.search.aggregations.metrics.min.Min;\n \n import java.util.Collection;\n import java.util.Collections;\n import java.util.HashMap;\n+import java.util.List;\n import java.util.Map;\n \n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.global;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.min;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.notNullValue;\n@@ -305,4 +312,34 @@ public void testScriptMultiValuedWithParams() throws Exception {\n assertThat(min.getName(), equalTo(\"min\"));\n assertThat(min.getValue(), equalTo(1.0));\n }\n+\n+ @Override\n+ public void testOrderByEmptyAggregation() throws Exception {\n+ SearchResponse searchResponse = client().prepareSearch(\"idx\").setQuery(matchAllQuery())\n+ .addAggregation(terms(\"terms\").field(\"value\").order(Order.compound(Order.aggregation(\"filter>min\", true)))\n+ .subAggregation(filter(\"filter\", termQuery(\"value\", 100)).subAggregation(min(\"min\").field(\"value\"))))\n+ .get();\n+\n+ assertHitCount(searchResponse, 10);\n+\n+ Terms terms = searchResponse.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ List<Terms.Bucket> buckets = terms.getBuckets();\n+ assertThat(buckets, notNullValue());\n+ assertThat(buckets.size(), equalTo(10));\n+\n+ for (int i = 0; i < 10; i++) {\n+ Terms.Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber(), equalTo((long) i + 1));\n+ assertThat(bucket.getDocCount(), equalTo(1L));\n+ Filter filter = bucket.getAggregations().get(\"filter\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getDocCount(), equalTo(0L));\n+ Min min = filter.getAggregations().get(\"min\");\n+ assertThat(min, notNullValue());\n+ assertThat(min.value(), equalTo(Double.POSITIVE_INFINITY));\n+\n+ }\n+ }\n }\n\\ No newline at end of file", "filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/MinTests.java", "status": "modified" }, { "diff": "@@ -24,20 +24,27 @@\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService.ScriptType;\n import org.elasticsearch.script.groovy.GroovyPlugin;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import 
org.elasticsearch.search.aggregations.bucket.global.Global;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms.Order;\n import org.elasticsearch.search.aggregations.metrics.AbstractNumericTestCase;\n import org.elasticsearch.search.aggregations.metrics.stats.Stats;\n \n import java.util.Collection;\n import java.util.Collections;\n import java.util.HashMap;\n+import java.util.List;\n import java.util.Map;\n \n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.global;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.stats;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.is;\n@@ -399,6 +406,39 @@ public void testScriptMultiValuedWithParams() throws Exception {\n assertThat(stats.getCount(), equalTo(20L));\n }\n \n+ @Override\n+ public void testOrderByEmptyAggregation() throws Exception {\n+ SearchResponse searchResponse = client().prepareSearch(\"idx\").setQuery(matchAllQuery())\n+ .addAggregation(terms(\"terms\").field(\"value\").order(Order.compound(Order.aggregation(\"filter>stats.avg\", true)))\n+ .subAggregation(filter(\"filter\", termQuery(\"value\", 100)).subAggregation(stats(\"stats\").field(\"value\"))))\n+ .get();\n+\n+ assertHitCount(searchResponse, 10);\n+\n+ Terms terms = searchResponse.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ List<Terms.Bucket> buckets = terms.getBuckets();\n+ assertThat(buckets, notNullValue());\n+ assertThat(buckets.size(), equalTo(10));\n+\n+ for (int i = 0; i < 10; i++) {\n+ Terms.Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber(), equalTo((long) i + 1));\n+ assertThat(bucket.getDocCount(), equalTo(1L));\n+ Filter filter = bucket.getAggregations().get(\"filter\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getDocCount(), equalTo(0L));\n+ Stats stats = filter.getAggregations().get(\"stats\");\n+ assertThat(stats, notNullValue());\n+ assertThat(stats.getMin(), equalTo(Double.POSITIVE_INFINITY));\n+ assertThat(stats.getMax(), equalTo(Double.NEGATIVE_INFINITY));\n+ assertThat(stats.getAvg(), equalTo(Double.NaN));\n+ assertThat(stats.getSum(), equalTo(0.0));\n+ assertThat(stats.getCount(), equalTo(0L));\n+\n+ }\n+ }\n \n private void assertShardExecutionState(SearchResponse response, int expectedFailures) throws Exception {\n ShardSearchFailure[] failures = response.getShardFailures();", "filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/StatsTests.java", "status": "modified" }, { "diff": "@@ -25,13 +25,16 @@\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService.ScriptType;\n import org.elasticsearch.script.groovy.GroovyPlugin;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.global.Global;\n import 
org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram.Order;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n import org.elasticsearch.search.aggregations.metrics.AbstractNumericTestCase;\n import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile;\n import org.elasticsearch.search.aggregations.metrics.percentiles.PercentileRanks;\n import org.elasticsearch.search.aggregations.metrics.percentiles.PercentileRanksAggregatorBuilder;\n+import org.elasticsearch.search.aggregations.metrics.percentiles.PercentilesMethod;\n \n import java.util.Arrays;\n import java.util.Collection;\n@@ -41,9 +44,12 @@\n import java.util.Map;\n \n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.global;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.percentileRanks;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n@@ -425,4 +431,35 @@ public void testOrderBySubAggregation() {\n }\n }\n \n+ @Override\n+ public void testOrderByEmptyAggregation() throws Exception {\n+ SearchResponse searchResponse = client().prepareSearch(\"idx\").setQuery(matchAllQuery())\n+ .addAggregation(terms(\"terms\").field(\"value\").order(Terms.Order.compound(Terms.Order.aggregation(\"filter>ranks.99\", true)))\n+ .subAggregation(filter(\"filter\", termQuery(\"value\", 100))\n+ .subAggregation(percentileRanks(\"ranks\").method(PercentilesMethod.TDIGEST).values(99).field(\"value\"))))\n+ .get();\n+\n+ assertHitCount(searchResponse, 10);\n+\n+ Terms terms = searchResponse.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ List<Terms.Bucket> buckets = terms.getBuckets();\n+ assertThat(buckets, notNullValue());\n+ assertThat(buckets.size(), equalTo(10));\n+\n+ for (int i = 0; i < 10; i++) {\n+ Terms.Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber(), equalTo((long) i + 1));\n+ assertThat(bucket.getDocCount(), equalTo(1L));\n+ Filter filter = bucket.getAggregations().get(\"filter\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getDocCount(), equalTo(0L));\n+ PercentileRanks ranks = filter.getAggregations().get(\"ranks\");\n+ assertThat(ranks, notNullValue());\n+ assertThat(ranks.percent(99), equalTo(Double.NaN));\n+\n+ }\n+ }\n+\n }\n\\ No newline at end of file", "filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/TDigestPercentileRanksTests.java", "status": "modified" }, { "diff": "@@ -25,13 +25,17 @@\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService.ScriptType;\n import org.elasticsearch.script.groovy.GroovyPlugin;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.global.Global;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import 
org.elasticsearch.search.aggregations.bucket.histogram.Histogram.Order;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n import org.elasticsearch.search.aggregations.metrics.AbstractNumericTestCase;\n import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile;\n import org.elasticsearch.search.aggregations.metrics.percentiles.Percentiles;\n import org.elasticsearch.search.aggregations.metrics.percentiles.PercentilesAggregatorBuilder;\n+import org.elasticsearch.search.aggregations.metrics.percentiles.PercentilesMethod;\n+\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.Collections;\n@@ -40,9 +44,12 @@\n import java.util.Map;\n \n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.global;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.percentiles;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n@@ -407,4 +414,36 @@ public void testOrderBySubAggregation() {\n previous = p99;\n }\n }\n+\n+ @Override\n+ public void testOrderByEmptyAggregation() throws Exception {\n+ SearchResponse searchResponse = client().prepareSearch(\"idx\").setQuery(matchAllQuery())\n+ .addAggregation(\n+ terms(\"terms\").field(\"value\").order(Terms.Order.compound(Terms.Order.aggregation(\"filter>percentiles.99\", true)))\n+ .subAggregation(filter(\"filter\", termQuery(\"value\", 100))\n+ .subAggregation(percentiles(\"percentiles\").method(PercentilesMethod.TDIGEST).field(\"value\"))))\n+ .get();\n+\n+ assertHitCount(searchResponse, 10);\n+\n+ Terms terms = searchResponse.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ List<Terms.Bucket> buckets = terms.getBuckets();\n+ assertThat(buckets, notNullValue());\n+ assertThat(buckets.size(), equalTo(10));\n+\n+ for (int i = 0; i < 10; i++) {\n+ Terms.Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber(), equalTo((long) i + 1));\n+ assertThat(bucket.getDocCount(), equalTo(1L));\n+ Filter filter = bucket.getAggregations().get(\"filter\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getDocCount(), equalTo(0L));\n+ Percentiles percentiles = filter.getAggregations().get(\"percentiles\");\n+ assertThat(percentiles, notNullValue());\n+ assertThat(percentiles.percentile(99), equalTo(Double.NaN));\n+\n+ }\n+ }\n }\n\\ No newline at end of file", "filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/TDigestPercentilesTests.java", "status": "modified" }, { "diff": "@@ -97,4 +97,6 @@ public void setupSuiteScopeCluster() throws Exception {\n public abstract void testScriptMultiValued() throws Exception;\n \n public abstract void testScriptMultiValuedWithParams() throws Exception;\n+\n+ public abstract void testOrderByEmptyAggregation() throws Exception;\n }\n\\ No newline at end of file", "filename": "test/framework/src/main/java/org/elasticsearch/search/aggregations/metrics/AbstractNumericTestCase.java", "status": "modified" } 
] }
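For quick reference, the empty-bucket defaults asserted by the new `testOrderByEmptyAggregation` tests above can be collected in one place. The constant names below are ad hoc; only the values come from the test assertions in the diffs.

```java
// Ad hoc summary of what each numeric metric reports when its bucket collected no documents.
public final class EmptyMetricDefaults {
    public static final long   COUNT          = 0L;
    public static final double SUM            = 0.0;
    public static final double SUM_OF_SQUARES = 0.0;
    public static final double MIN            = Double.POSITIVE_INFINITY;
    public static final double MAX            = Double.NEGATIVE_INFINITY;
    public static final double AVG            = Double.NaN;
    public static final double VARIANCE       = Double.NaN;
    public static final double STD_DEVIATION  = Double.NaN; // and both std-deviation bounds
    public static final double PERCENTILE     = Double.NaN; // percentiles and percentile ranks

    private EmptyMetricDefaults() {}
}
```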
{ "body": "**Elasticsearch version**: 2.3, 2.x, and master branches (probably, only checked 2.3 but I'm pretty sure)\n\n**JVM version**: \n\n```\nmanyair:elasticsearch-2.3.0 manybubbles$ java -version\njava version \"1.8.0_51\"\nJava(TM) SE Runtime Environment (build 1.8.0_51-b16)\nJava HotSpot(TM) 64-Bit Server VM (build 25.51-b03, mixed mode)\n```\n\n**OS version**:\n\n```\nmanyair:elasticsearch-2.3.0 manybubbles$ uname -mrs\nDarwin 14.5.0 x86_64\n```\n\n10.10.5 (14F1605)\n\n**Description of the problem including expected versus actual behavior**:\n\n``` bash\n# Build an index with some docs\ncurl -XDELETE localhost:9200/test\nfor i in $(seq 1 1000); do\n curl -XPOST localhost:9200/test/test -d'{\"tags\": [\"bannanas\"]}'\n echo\ndone\ncurl -XPOST localhost:9200/test/_refresh\n\n# This should be fast\ncurl -XPOST 'localhost:9200/test/_update_by_query?pretty&refresh' -d'{\n \"query\": {\n \"bool\": {\n \"must\": [ {\"match\": {\"tags\": \"bannanas\"}} ],\n \"must_not\": [ {\"match\": {\"tags\": \"chocolate\"}} ]\n }\n },\n \"script\": {\n \"inline\": \"ctx._source.tags += \\\"chocolate\\\"\"\n }\n}'\n\n# But repeat this and it tries to refresh all indexes.\ncurl -XPOST 'localhost:9200/test/_update_by_query?pretty&refresh' -d'{\n \"query\": {\n \"bool\": {\n \"must\": [ {\"match\": {\"tags\": \"bannanas\"}} ],\n \"must_not\": [ {\"match\": {\"tags\": \"chocolate\"}} ]\n }\n },\n \"script\": {\n \"inline\": \"ctx._source.tags += \\\"chocolate\\\"\"\n }\n}'\n```\n\nThat last command should instead refresh no indices. It is kind of hard to tell if it has refreshed all indices or none other than the it takes longer. On my laptop with the test data I have loaded it takes 60 seconds to refresh all indices (that seems like a long time but that is another issue). If you leave the `refresh` off of the url then update-by-query completes in 2 milliseconds.\n", "comments": [], "number": 17296, "title": "Reindex and update-by-query \"refresh\" url parameter can cause a refresh for all indices" }
{ "body": "If the user asks for a refresh but their reindex or update-by-query\noperation touched no indexes we should just skip the resfresh call\nentirely. Without this commit we refresh _all_ indexes which is totally\nwrong.\n\nCloses #17296\n", "number": 17298, "review_comments": [ { "body": "Just curious, but when would `destinationIndices` be empty? Wouldn't you always re-index into an index?\n", "created_at": "2016-03-23T20:19:11Z" }, { "body": "If you give it a query that filters out all the docs. Or you use a script to convert all the changes to noops.\n\n`destinationIndices` is updated based on the generated IndexRequests so it can handle sneaky things like scripts changing the destination index. I suspect it is just a bonus that we can skip the refresh on noops.\n", "created_at": "2016-03-23T20:29:12Z" }, { "body": "That makes sense, thanks for the explanation!\n", "created_at": "2016-03-23T20:42:16Z" }, { "body": "Thanks for asking! I do stupid things from time to time and it is really useful to be asked these kinds of questions.\n", "created_at": "2016-03-23T20:52:31Z" } ], "title": "Reindex shouldn't attempt to refresh on noops" }
{ "commits": [ { "message": "[reindex] Don't attempt to refresh on noop\n\nIf the user asks for a refresh but their reindex or update-by-query\noperation touched no indexes we should just skip the resfresh call\nentirely. Without this commit we refresh *all* indexes which is totally\nwrong.\n\nCloses #17296" } ], "files": [ { "diff": "@@ -354,7 +354,7 @@ private void recordFailure(Failure failure, List<Failure> failures) {\n * Start terminating a request that finished non-catastrophically.\n */\n void startNormalTermination(List<Failure> indexingFailures, List<ShardSearchFailure> searchFailures, boolean timedOut) {\n- if (task.isCancelled() || false == mainRequest.isRefresh()) {\n+ if (task.isCancelled() || false == mainRequest.isRefresh() || destinationIndices.isEmpty()) {\n finishHim(null, indexingFailures, searchFailures, timedOut);\n return;\n }", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/AbstractAsyncBulkByScrollAction.java", "status": "modified" }, { "diff": "@@ -458,23 +458,29 @@ public void testDefaultRetryTimes() {\n }\n \n public void testRefreshIsFalseByDefault() throws Exception {\n- refreshTestCase(null, false);\n+ refreshTestCase(null, true, false);\n }\n \n- public void testRefreshFalseDoesntMakeVisible() throws Exception {\n- refreshTestCase(false, false);\n+ public void testRefreshFalseDoesntExecuteRefresh() throws Exception {\n+ refreshTestCase(false, true, false);\n }\n \n- public void testRefreshTrueMakesVisible() throws Exception {\n- refreshTestCase(true, true);\n+ public void testRefreshTrueExecutesRefresh() throws Exception {\n+ refreshTestCase(true, true, true);\n }\n \n- private void refreshTestCase(Boolean refresh, boolean shouldRefresh) {\n+ public void testRefreshTrueSkipsRefreshIfNoDestinationIndexes() throws Exception {\n+ refreshTestCase(true, false, false);\n+ }\n+\n+ private void refreshTestCase(Boolean refresh, boolean addDestinationIndexes, boolean shouldRefresh) {\n if (refresh != null) {\n mainRequest.setRefresh(refresh);\n }\n DummyAbstractAsyncBulkByScrollAction action = new DummyAbstractAsyncBulkByScrollAction();\n- action.addDestinationIndices(singleton(\"foo\"));\n+ if (addDestinationIndexes) {\n+ action.addDestinationIndices(singleton(\"foo\"));\n+ }\n action.startNormalTermination(emptyList(), emptyList(), false);\n if (shouldRefresh) {\n assertArrayEquals(new String[] {\"foo\"}, client.lastRefreshRequest.get().indices());", "filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/AsyncBulkByScrollActionTests.java", "status": "modified" } ] }
{ "body": "We have a simple upgrade test that is failing frequently right now in master. The test is rather simple. Startup Elasticsearch 2.0.0, index some documents, stop Elasticsearch, upgrade to master, start Elasticsearch back up, and then check that those indexed documents are there. In these failures, Elasticsearch successfully starts back up, relocates the indices to the new folder structure, goes yellow, yet gives mysterious shard not found exceptions after get requests are issued against those documents. Here are the logs from a failed run:\n\n```\n[2016-03-22 16:23:40,311][INFO ][node ] [Manbot] version[2.0.0], pid[32374], build[de54438/2015-10-22T08:09:48Z]\n[2016-03-22 16:23:40,311][INFO ][node ] [Manbot] initializing ...\n[2016-03-22 16:23:40,366][INFO ][plugins ] [Manbot] loaded [], sites []\n[2016-03-22 16:23:40,441][INFO ][env ] [Manbot] using [1] data paths, mounts [[/ (/dev/mapper/fedora-root)]], net usable_space [15.7gb], net total_space [17.4gb], spins? [possibly], types [xfs]\n[2016-03-22 16:23:42,137][INFO ][node ] [Manbot] initialized\n[2016-03-22 16:23:42,137][INFO ][node ] [Manbot] starting ...\n[2016-03-22 16:23:42,178][INFO ][transport ] [Manbot] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}\n[2016-03-22 16:23:42,194][INFO ][discovery ] [Manbot] elasticsearch/_QbU4s4ISjWDgtZHjdbqJg\n[2016-03-22 16:23:45,321][INFO ][cluster.service ] [Manbot] new_master {Manbot}{_QbU4s4ISjWDgtZHjdbqJg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)\n[2016-03-22 16:23:45,387][INFO ][http ] [Manbot] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}\n[2016-03-22 16:23:45,387][INFO ][node ] [Manbot] started\n[2016-03-22 16:23:45,393][INFO ][gateway ] [Manbot] recovered [0] indices into cluster_state\n[2016-03-22 16:23:46,369][INFO ][cluster.metadata ] [Manbot] [library] creating index, cause [auto(index api)], templates [], shards [5]/[1], mappings [book]\n[2016-03-22 16:23:46,693][INFO ][cluster.metadata ] [Manbot] [library] update_mapping [book]\n[2016-03-22 16:23:46,791][INFO ][cluster.metadata ] [Manbot] [library2] creating index, cause [auto(index api)], templates [], shards [5]/[1], mappings [book]\n[2016-03-22 16:23:46,926][INFO ][cluster.metadata ] [Manbot] [library2] update_mapping [book]\n[2016-03-22 16:23:47,107][INFO ][node ] [Manbot] stopping ...\n[2016-03-22 16:23:47,276][INFO ][node ] [Manbot] stopped\n[2016-03-22 16:23:47,276][INFO ][node ] [Manbot] closing ...\n[2016-03-22 16:23:47,279][INFO ][node ] [Manbot] closed\n[2016-03-22 16:23:48,660][WARN ][bootstrap ] max file descriptors [65535] for elasticsearch process likely too low, increase to at least [65536]\n[2016-03-22 16:23:48,677][INFO ][node ] [Mary \"Skeeter\" MacPherran] version[5.0.0-SNAPSHOT], pid[900], build[8004c51/2016-03-22T15:58:45.132Z]\n[2016-03-22 16:23:48,677][INFO ][node ] [Mary \"Skeeter\" MacPherran] initializing ...\n[2016-03-22 16:23:49,149][INFO ][plugins ] [Mary \"Skeeter\" MacPherran] modules [lang-mustache, lang-painless, ingest-grok, reindex, lang-expression, lang-groovy], plugins []\n[2016-03-22 16:23:49,173][INFO ][env ] [Mary \"Skeeter\" MacPherran] using [1] data paths, mounts [[/ (/dev/mapper/fedora-root)]], net usable_space [15.7gb], net total_space [17.4gb], spins? 
[possibly], types [xfs]\n[2016-03-22 16:23:49,173][INFO ][env ] [Mary \"Skeeter\" MacPherran] heap size [1015.6mb], compressed ordinary object pointers [true]\n[2016-03-22 16:23:51,515][INFO ][common.util ] [library/y0Jj3USfQpOkx0zpC88ObQ] upgrading [/var/lib/elasticsearch/elasticsearch/nodes/0/indices/library] to new naming convention\n[2016-03-22 16:23:51,516][INFO ][common.util ] [library/y0Jj3USfQpOkx0zpC88ObQ] moved from [/var/lib/elasticsearch/elasticsearch/nodes/0/indices/library] to [/var/lib/elasticsearch/elasticsearch/nodes/0/indices/y0Jj3USfQpOkx0zpC88ObQ]\n[2016-03-22 16:23:51,519][INFO ][common.util ] [library2/6Vuo4MIiQvGenbpxkLfi2Q] upgrading [/var/lib/elasticsearch/elasticsearch/nodes/0/indices/library2] to new naming convention\n[2016-03-22 16:23:51,519][INFO ][common.util ] [library2/6Vuo4MIiQvGenbpxkLfi2Q] moved from [/var/lib/elasticsearch/elasticsearch/nodes/0/indices/library2] to [/var/lib/elasticsearch/elasticsearch/nodes/0/indices/6Vuo4MIiQvGenbpxkLfi2Q]\n[2016-03-22 16:23:51,605][INFO ][node ] [Mary \"Skeeter\" MacPherran] initialized\n[2016-03-22 16:23:51,609][INFO ][node ] [Mary \"Skeeter\" MacPherran] starting ...\n[2016-03-22 16:23:51,683][INFO ][transport ] [Mary \"Skeeter\" MacPherran] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}\n[2016-03-22 16:23:54,818][INFO ][cluster.service ] [Mary \"Skeeter\" MacPherran] new_master {Mary \"Skeeter\" MacPherran}{D0lJ0rzOTmGhTEmHl1p8sQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)\n[2016-03-22 16:23:54,900][INFO ][http ] [Mary \"Skeeter\" MacPherran] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}\n[2016-03-22 16:23:54,901][INFO ][node ] [Mary \"Skeeter\" MacPherran] started\n[2016-03-22 16:23:55,060][INFO ][gateway ] [Mary \"Skeeter\" MacPherran] recovered [2] indices into cluster_state\n[2016-03-22 16:23:55,372][WARN ][rest.suppressed ] /library/book/1 Params: {pretty=, index=library, id=1, type=book}\nNoShardAvailableActionException[No shard available for [get [library][book][1]: routing [null]]]\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.perform(TransportSingleShardAction.java:205)\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.start(TransportSingleShardAction.java:184)\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:93)\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:57)\n at org.elasticsearch.action.support.TransportAction.doExecute(TransportAction.java:150)\n at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:174)\n at org.elasticsearch.action.ingest.IngestActionFilter.apply(IngestActionFilter.java:80)\n at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:172)\n at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:145)\n at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:87)\n at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:64)\n at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:402)\n at org.elasticsearch.client.support.AbstractClient.get(AbstractClient.java:494)\n at org.elasticsearch.rest.action.get.RestGetAction.handleRequest(RestGetAction.java:79)\n at 
org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:51)\n at org.elasticsearch.rest.RestController.executeHandler(RestController.java:214)\n at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:174)\n at org.elasticsearch.http.HttpServer.dispatchRequest(HttpServer.java:101)\n at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:487)\n at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:65)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:85)\n at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)\n at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)\n at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)\n at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:83)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)\n at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)\n at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)\n at 
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)\n at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)\n at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)\n at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)\n at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2016-03-22 16:23:55,561][INFO ][node ] [Mary \"Skeeter\" MacPherran] stopping ...\n[2016-03-22 16:23:55,595][WARN ][cluster.action.shard ] [Mary \"Skeeter\" MacPherran] [library][2] unexpected failure while sending request [internal:cluster/shard/started] to [{Mary \"Skeeter\" MacPherran}{D0lJ0rzOTmGhTEmHl1p8sQ}{127.0.0.1}{127.0.0.1:9300}] for shard [target shard [[[library/y0Jj3USfQpOkx0zpC88ObQ]][2], node[D0lJ0rzOTmGhTEmHl1p8sQ], [P], s[INITIALIZING], a[id=Mjw1dR_XTDO2NZ4Oqm9nGA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-22T16:23:54.974Z]]], source shard [[[library/y0Jj3USfQpOkx0zpC88ObQ]][2], node[D0lJ0rzOTmGhTEmHl1p8sQ], [P], s[INITIALIZING], a[id=Mjw1dR_XTDO2NZ4Oqm9nGA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-22T16:23:54.974Z]]], message [after recovery from store]]\nSendRequestTransportException[[Mary \"Skeeter\" MacPherran][127.0.0.1:9300][internal:cluster/shard/started]]; nested: TransportException[TransportService is closed stopped can't send request];\n at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:331)\n at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:290)\n at org.elasticsearch.cluster.action.shard.ShardStateAction.sendShardAction(ShardStateAction.java:101)\n at org.elasticsearch.cluster.action.shard.ShardStateAction.shardStarted(ShardStateAction.java:333)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.lambda$applyInitializingShard$2(IndicesClusterStateService.java:627)\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:408)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: TransportException[TransportService is closed stopped can't send request]\n at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:311)\n ... 8 more\n[2016-03-22 16:23:55,620][INFO ][node ] [Mary \"Skeeter\" MacPherran] stopped\n[2016-03-22 16:23:55,620][INFO ][node ] [Mary \"Skeeter\" MacPherran] closing ...\n[2016-03-22 16:23:55,631][INFO ][node ] [Mary \"Skeeter\" MacPherran] closed\n```\n", "comments": [ { "body": "These logs still seemingly indicate an odd issue here, but it's not reliably reproducible so I'm closing this until it surfaces again.\n", "created_at": "2016-03-25T02:01:41Z" } ], "number": 17294, "title": "Vagrant upgrade test failing after relocating indexes to new folder structure" }
{ "body": "This commit makes the Vagrant upgrade test wait for yellow indices\nbefore attempting to get documents from the upgraded Elasticsearch node.\n\nRelates #17294 \n", "number": 17297, "review_comments": [], "title": "Wait for yellow indices when running upgrade test" }
{ "commits": [ { "message": "Wait for yellow indices when running upgrade test\n\nThis commit makes the Vagrant upgrade test wait for yellow indices\nbefore attempting to get documents from the upgraded Elasticsearch node." } ], "files": [ { "diff": "@@ -83,7 +83,8 @@ setup() {\n }\n \n @test \"[UPGRADE] start version under test\" {\n- start_elasticsearch_service yellow\n+ start_elasticsearch_service yellow library\n+ wait_for_elasticsearch_status yellow library2\n }\n \n @test \"[UPGRADE] check elasticsearch version is version under test\" {", "filename": "qa/vagrant/src/test/resources/packaging/scripts/80_upgrade.bats", "status": "modified" }, { "diff": "@@ -270,10 +270,11 @@ clean_before_test() {\n # $1 - expected status - defaults to green\n start_elasticsearch_service() {\n local desiredStatus=${1:-green}\n+ local index=$2\n \n run_elasticsearch_service 0\n \n- wait_for_elasticsearch_status $desiredStatus\n+ wait_for_elasticsearch_status $desiredStatus $index\n \n if [ -r \"/tmp/elasticsearch/elasticsearch.pid\" ]; then\n pid=$(cat /tmp/elasticsearch/elasticsearch.pid)\n@@ -382,6 +383,7 @@ stop_elasticsearch_service() {\n # $1 - expected status - defaults to green\n wait_for_elasticsearch_status() {\n local desiredStatus=${1:-green}\n+ local index=$2\n \n echo \"Making sure elasticsearch is up...\"\n wget -O - --retry-connrefused --waitretry=1 --timeout=60 --tries 60 http://localhost:9200 || {\n@@ -395,8 +397,13 @@ wait_for_elasticsearch_status() {\n false\n }\n \n- echo \"Tring to connect to elasticsearch and wait for expected status...\"\n- curl -sS \"http://localhost:9200/_cluster/health?wait_for_status=$desiredStatus&timeout=60s&pretty\"\n+ if [ -z \"index\" ]; then\n+ echo \"Tring to connect to elasticsearch and wait for expected status $desiredStatus...\"\n+ curl -sS \"http://localhost:9200/_cluster/health?wait_for_status=$desiredStatus&timeout=60s&pretty\"\n+ else\n+ echo \"Trying to connect to elasticsearch and wait for expected status $desiredStatus for index $index\"\n+ curl -sS \"http://localhost:9200/_cluster/$index/health?wait_for_status=$desiredStatus&timeout=60s&pretty\"\n+ fi\n if [ $? -eq 0 ]; then\n echo \"Connected\"\n else", "filename": "qa/vagrant/src/test/resources/packaging/scripts/packaging_test_utils.bash", "status": "modified" } ] }
{ "body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue.\n-->\n\n<!--\nIf you are filing a bug report, please remove the below feature\nrequest block and provide responses for all of the below items.\n-->\n\n**Elasticsearch version**: 5.0.0 (master)\n\n**JVM version**: 1.8\n\n**OS version**: OSX 10.11.3\n\n**Description of the problem including expected versus actual behavior**:\n\nWhile updating the Kibana functional tests to deal with the deprecation of the `string` mapping type, I tried to create a dynamic template matching `text` fields with a multi-field mapping as suggested in https://github.com/elastic/elasticsearch/issues/12394\n\nMy dynamic template looks like this:\n\n```\n\"dynamic_templates\": [{\n \"text_fields\": {\n \"mapping\": {\n \"type\": \"text\",\n \"fields\": {\n \"raw\": {\n \"type\": \"keyword\"\n }\n }\n },\n \"match_mapping_type\": \"text\",\n \"match\": \"*\"\n }\n }]\n```\n\nWhen posting documents with text fields to this index, the [fieldName].raw field does not get created.\n\n**Steps to reproduce**:\n1. Create an index with the above dynamic template.\n2. Index a document that has a text field.\n3. Query the field mappings `localhost:9200/<index_name>/_mapping/*/field/*?include_defaults=true`\n4. Notice that the [fieldName].raw field is missing\n\nI tried the same dynamic template with `\"match_mapping_type\": \"*\"` instead of `text`, and the raw field appeared as expected. I also tried the same multi field mapping on a regular field instead of a template, and that also worked. So this seems to be a problem with the combination of `text` fields and dynamic templates in particular. \n", "comments": [ { "body": "I think the match mapping type needs to be string? That is the \"type\" in json, which hasnt changed. \n", "created_at": "2016-03-04T03:38:31Z" }, { "body": "Which raises the issue, we should validate the value of that setting for each template. We should spin a separate issue for that (probably generally about dynamic template validation as I dont think we do any right now).\n", "created_at": "2016-03-04T03:40:35Z" }, { "body": "+1 @rjernst \n", "created_at": "2016-03-04T09:42:54Z" }, { "body": "Confirmed `string` in the match mapping type works. That's pretty confusing for the user though. The `match_mapping_type` docs read: \n\n> match_mapping_type matches on the datatype detected by dynamic field mapping, in other words, the datatype that Elasticsearch thinks the field should have.\n\nThe dynamic field mapping creates `text` fields now, so I'd expect that to be a valid value. 
If we have to stick with string, we should at least update the match_mapping_type docs to explain this.\n", "created_at": "2016-03-04T15:07:57Z" }, { "body": "I agree with @Bargs, plus it's inconsistent in that we _already_ allow `long` and `double` to be the matching type, which technically don't exist in JSON (which uses `number`).\n", "created_at": "2016-03-04T18:28:31Z" }, { "body": "@jpountz when we discussed this, I think we decided on using `text` as the detected type?\n", "created_at": "2016-03-04T19:35:32Z" }, { "body": "We actually did use `text`, but there must still be uses of the other template methods I mentioned here: https://github.com/elastic/elasticsearch/pull/16877#discussion_r54749987\n", "created_at": "2016-03-04T20:09:33Z" }, { "body": "@clintongormley Templates have a concept of a match type, which is used to match templates, and a dynamic type, which is the elasticsearch field type to use by default if no type is specified. When we discussed it, we indeed agreed on using text as the default dynamic type but for now we are still using \"string\" as the match type. I'd be fine to switch to text if we think this makes more sense, but I agree with Ryan that even more importantly we should validate templates better so that adding templates with an unknown match type would be rejected since it means those templates have no chance to be ever used.\n", "created_at": "2016-03-05T14:20:59Z" }, { "body": "Closed by #17285\n", "created_at": "2016-10-11T13:51:58Z" } ], "number": 16945, "title": "Defining a multi field in a dynamic template matching the new text type doesn't work" }
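For reference, a sketch of the working form discussed in the comments: dynamic templates match on the JSON-level type, which is still `string`, even though the field that ends up being created is `text`. The index, type, and template names below are made up for illustration.

```bash
# Sketch only (hypothetical index/type names): with match_mapping_type set to
# "string", dynamically mapped string fields get a .raw keyword sub-field.
curl -XPUT -H 'Content-Type: application/json' "http://localhost:9200/my_index" -d '{
  "mappings": {
    "my_type": {
      "dynamic_templates": [
        {
          "text_fields": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "text",
              "fields": {
                "raw": { "type": "keyword" }
              }
            }
          }
        }
      ]
    }
  }
}'
```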
{ "body": "When looking at the logstash template, I noticed that it has definitions for\ndynamic temilates with `match_mapping_type` equal to `byte` for instance.\nHowever elasticsearch never tries to find templates that match the byte type\n(only long or double as far as numbers are concerned). This commit changes\ntemplate parsing in order to ignore bad values of `match_mapping_type` (given\nhow the logstash template is popular, this would break many upgrades\notherwise). Then I hope to fail the parsing on bad values in 6.0.\n\nRelates to #16945\n", "number": 17285, "review_comments": [], "title": "Elasticsearch should reject dynamic templates with unknown `match_mapping_type`." }
{ "commits": [ { "message": "Elasticsearch should reject dynamic templates with unknown `match_mapping_type`. #17285\n\nWhen looking at the logstash template, I noticed that it has definitions for\ndynamic temilates with `match_mapping_type` equal to `byte` for instance.\nHowever elasticsearch never tries to find templates that match the byte type\n(only long or double as far as numbers are concerned). This commit changes\ntemplate parsing in order to ignore bad values of `match_mapping_type` (given\nhow the logstash template is popular, this would break many upgrades\notherwise). Then I hope to fail the parsing on bad values in 6.0." } ], "files": [ { "diff": "@@ -43,6 +43,7 @@\n import org.elasticsearch.index.mapper.internal.TypeFieldMapper;\n import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n import org.elasticsearch.index.mapper.object.ArrayValueMapperParser;\n+import org.elasticsearch.index.mapper.object.DynamicTemplate.XContentFieldType;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n \n import java.io.IOException;\n@@ -471,7 +472,7 @@ private static ObjectMapper parseObject(final ParseContext context, ObjectMapper\n if (dynamic == ObjectMapper.Dynamic.STRICT) {\n throw new StrictDynamicMappingException(mapper.fullPath(), currentFieldName);\n } else if (dynamic == ObjectMapper.Dynamic.TRUE) {\n- Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"object\");\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.OBJECT);\n if (builder == null) {\n builder = new ObjectMapper.Builder(currentFieldName).enabled(true);\n }\n@@ -516,7 +517,7 @@ private static void parseArray(ParseContext context, ObjectMapper parentMapper,\n if (dynamic == ObjectMapper.Dynamic.STRICT) {\n throw new StrictDynamicMappingException(parentMapper.fullPath(), arrayFieldName);\n } else if (dynamic == ObjectMapper.Dynamic.TRUE) {\n- Mapper.Builder builder = context.root().findTemplateBuilder(context, arrayFieldName, \"object\");\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, arrayFieldName, XContentFieldType.OBJECT);\n if (builder == null) {\n parseNonDynamicArray(context, parentMapper, lastFieldName, arrayFieldName);\n } else {\n@@ -596,34 +597,34 @@ private static void parseNullValue(ParseContext context, ObjectMapper parentMapp\n private static Mapper.Builder<?,?> createBuilderFromFieldType(final ParseContext context, MappedFieldType fieldType, String currentFieldName) {\n Mapper.Builder builder = null;\n if (fieldType instanceof StringFieldType) {\n- builder = context.root().findTemplateBuilder(context, currentFieldName, \"string\", \"string\");\n+ builder = context.root().findTemplateBuilder(context, currentFieldName, \"string\", XContentFieldType.STRING);\n } else if (fieldType instanceof TextFieldType) {\n- builder = context.root().findTemplateBuilder(context, currentFieldName, \"text\", \"string\");\n+ builder = context.root().findTemplateBuilder(context, currentFieldName, \"text\", XContentFieldType.STRING);\n if (builder == null) {\n builder = new TextFieldMapper.Builder(currentFieldName)\n .addMultiField(new KeywordFieldMapper.Builder(\"keyword\").ignoreAbove(256));\n }\n } else if (fieldType instanceof KeywordFieldType) {\n- builder = context.root().findTemplateBuilder(context, currentFieldName, \"keyword\", \"string\");\n+ builder = context.root().findTemplateBuilder(context, currentFieldName, \"keyword\", XContentFieldType.STRING);\n } else {\n switch 
(fieldType.typeName()) {\n case DateFieldMapper.CONTENT_TYPE:\n- builder = context.root().findTemplateBuilder(context, currentFieldName, \"date\");\n+ builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.DATE);\n break;\n case \"long\":\n- builder = context.root().findTemplateBuilder(context, currentFieldName, \"long\");\n+ builder = context.root().findTemplateBuilder(context, currentFieldName, \"long\", XContentFieldType.LONG);\n break;\n case \"double\":\n- builder = context.root().findTemplateBuilder(context, currentFieldName, \"double\");\n+ builder = context.root().findTemplateBuilder(context, currentFieldName, \"double\", XContentFieldType.DOUBLE);\n break;\n case \"integer\":\n- builder = context.root().findTemplateBuilder(context, currentFieldName, \"integer\");\n+ builder = context.root().findTemplateBuilder(context, currentFieldName, \"integer\", XContentFieldType.LONG);\n break;\n case \"float\":\n- builder = context.root().findTemplateBuilder(context, currentFieldName, \"float\");\n+ builder = context.root().findTemplateBuilder(context, currentFieldName, \"float\", XContentFieldType.DOUBLE);\n break;\n case BooleanFieldMapper.CONTENT_TYPE:\n- builder = context.root().findTemplateBuilder(context, currentFieldName, \"boolean\");\n+ builder = context.root().findTemplateBuilder(context, currentFieldName, \"boolean\", XContentFieldType.BOOLEAN);\n break;\n default:\n break;\n@@ -682,7 +683,7 @@ private static Mapper.Builder<?,?> createBuilderFromDynamicValue(final ParseCont\n for (FormatDateTimeFormatter dateTimeFormatter : context.root().dynamicDateTimeFormatters()) {\n try {\n dateTimeFormatter.parser().parseMillis(text);\n- Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"date\");\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.DATE);\n if (builder == null) {\n builder = newDateBuilder(currentFieldName, dateTimeFormatter, Version.indexCreated(context.indexSettings()));\n }\n@@ -697,7 +698,7 @@ private static Mapper.Builder<?,?> createBuilderFromDynamicValue(final ParseCont\n String text = context.parser().text();\n try {\n Long.parseLong(text);\n- Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"long\");\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.LONG);\n if (builder == null) {\n builder = newLongBuilder(currentFieldName, Version.indexCreated(context.indexSettings()));\n }\n@@ -707,7 +708,7 @@ private static Mapper.Builder<?,?> createBuilderFromDynamicValue(final ParseCont\n }\n try {\n Double.parseDouble(text);\n- Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"double\");\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.DOUBLE);\n if (builder == null) {\n builder = newFloatBuilder(currentFieldName, Version.indexCreated(context.indexSettings()));\n }\n@@ -716,7 +717,7 @@ private static Mapper.Builder<?,?> createBuilderFromDynamicValue(final ParseCont\n // not a long number\n }\n }\n- Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"string\");\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.STRING);\n if (builder == null) {\n builder = new TextFieldMapper.Builder(currentFieldName)\n .addMultiField(new 
KeywordFieldMapper.Builder(\"keyword\").ignoreAbove(256));\n@@ -725,13 +726,13 @@ private static Mapper.Builder<?,?> createBuilderFromDynamicValue(final ParseCont\n } else if (token == XContentParser.Token.VALUE_NUMBER) {\n XContentParser.NumberType numberType = context.parser().numberType();\n if (numberType == XContentParser.NumberType.INT || numberType == XContentParser.NumberType.LONG) {\n- Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"long\");\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.LONG);\n if (builder == null) {\n builder = newLongBuilder(currentFieldName, Version.indexCreated(context.indexSettings()));\n }\n return builder;\n } else if (numberType == XContentParser.NumberType.FLOAT || numberType == XContentParser.NumberType.DOUBLE) {\n- Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"double\");\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.DOUBLE);\n if (builder == null) {\n // no templates are defined, we use float by default instead of double\n // since this is much more space-efficient and should be enough most of\n@@ -741,19 +742,19 @@ private static Mapper.Builder<?,?> createBuilderFromDynamicValue(final ParseCont\n return builder;\n }\n } else if (token == XContentParser.Token.VALUE_BOOLEAN) {\n- Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"boolean\");\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.BOOLEAN);\n if (builder == null) {\n builder = new BooleanFieldMapper.Builder(currentFieldName);\n }\n return builder;\n } else if (token == XContentParser.Token.VALUE_EMBEDDED_OBJECT) {\n- Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"binary\");\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.BINARY);\n if (builder == null) {\n builder = new BinaryFieldMapper.Builder(currentFieldName);\n }\n return builder;\n } else {\n- Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, null);\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.STRING);\n if (builder != null) {\n return builder;\n }\n@@ -858,7 +859,7 @@ private static Tuple<Integer, ObjectMapper> getDynamicParentMapper(ParseContext\n case STRICT:\n throw new StrictDynamicMappingException(parent.fullPath(), paths[i]);\n case TRUE:\n- Mapper.Builder builder = context.root().findTemplateBuilder(context, paths[i], \"object\");\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, paths[i], XContentFieldType.OBJECT);\n if (builder == null) {\n builder = new ObjectMapper.Builder(paths[i]).enabled(true);\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java", "status": "modified" }, { "diff": "@@ -20,15 +20,21 @@\n package org.elasticsearch.index.mapper.object;\n \n import org.elasticsearch.Version;\n-import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.index.mapper.ContentPath;\n import 
org.elasticsearch.index.mapper.MapperParsingException;\n+import org.elasticsearch.index.mapper.core.BinaryFieldMapper;\n+import org.elasticsearch.index.mapper.core.BooleanFieldMapper;\n+import org.elasticsearch.index.mapper.core.DateFieldMapper;\n+import org.elasticsearch.index.mapper.core.NumberFieldMapper;\n+import org.elasticsearch.index.mapper.core.TextFieldMapper;\n \n import java.io.IOException;\n import java.util.ArrayList;\n+import java.util.Arrays;\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n@@ -39,6 +45,8 @@\n */\n public class DynamicTemplate implements ToXContent {\n \n+ private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(DynamicTemplate.class));\n+\n public static enum MatchType {\n SIMPLE {\n @Override\n@@ -74,6 +82,93 @@ public static MatchType fromString(String value) {\n public abstract boolean matches(String regex, String value);\n }\n \n+ /** The type of a field as detected while parsing a json document. */\n+ public enum XContentFieldType {\n+ OBJECT {\n+ @Override\n+ public String defaultMappingType() {\n+ return ObjectMapper.CONTENT_TYPE;\n+ }\n+ @Override\n+ public String toString() {\n+ return \"object\";\n+ }\n+ },\n+ STRING {\n+ @Override\n+ public String defaultMappingType() {\n+ return TextFieldMapper.CONTENT_TYPE;\n+ }\n+ @Override\n+ public String toString() {\n+ return \"string\";\n+ }\n+ },\n+ LONG {\n+ @Override\n+ public String defaultMappingType() {\n+ return NumberFieldMapper.NumberType.LONG.typeName();\n+ }\n+ @Override\n+ public String toString() {\n+ return \"long\";\n+ }\n+ },\n+ DOUBLE {\n+ @Override\n+ public String defaultMappingType() {\n+ return NumberFieldMapper.NumberType.FLOAT.typeName();\n+ }\n+ @Override\n+ public String toString() {\n+ return \"double\";\n+ }\n+ },\n+ BOOLEAN {\n+ @Override\n+ public String defaultMappingType() {\n+ return BooleanFieldMapper.CONTENT_TYPE;\n+ }\n+ @Override\n+ public String toString() {\n+ return \"boolean\";\n+ }\n+ },\n+ DATE {\n+ @Override\n+ public String defaultMappingType() {\n+ return DateFieldMapper.CONTENT_TYPE;\n+ }\n+ @Override\n+ public String toString() {\n+ return \"date\";\n+ }\n+ },\n+ BINARY {\n+ @Override\n+ public String defaultMappingType() {\n+ return BinaryFieldMapper.CONTENT_TYPE;\n+ }\n+ @Override\n+ public String toString() {\n+ return \"binary\";\n+ }\n+ };\n+\n+ public static XContentFieldType fromString(String value) {\n+ for (XContentFieldType v : values()) {\n+ if (v.toString().equals(value)) {\n+ return v;\n+ }\n+ }\n+ throw new IllegalArgumentException(\"No xcontent type matched on [\" + value + \"], possible values are \"\n+ + Arrays.toString(values()));\n+ }\n+\n+ /** The default mapping type to use for fields of this {@link XContentFieldType}. 
*/\n+ public abstract String defaultMappingType();\n+ }\n+\n public static DynamicTemplate parse(String name, Map<String, Object> conf,\n Version indexVersionCreated) throws MapperParsingException {\n String match = null;\n@@ -107,7 +202,30 @@ public static DynamicTemplate parse(String name, Map<String, Object> conf,\n }\n }\n \n- return new DynamicTemplate(name, pathMatch, pathUnmatch, match, unmatch, matchMappingType, MatchType.fromString(matchPattern), mapping);\n+ if (match == null && pathMatch == null && matchMappingType == null) {\n+ throw new MapperParsingException(\"template must have match, path_match or match_mapping_type set \" + conf.toString());\n+ }\n+ if (mapping == null) {\n+ throw new MapperParsingException(\"template must have mapping set\");\n+ }\n+\n+ XContentFieldType xcontentFieldType = null;\n+ if (matchMappingType != null && matchMappingType.equals(\"*\") == false) {\n+ try {\n+ xcontentFieldType = XContentFieldType.fromString(matchMappingType);\n+ } catch (IllegalArgumentException e) {\n+ // TODO: do this in 6.0\n+ /*if (indexVersionCreated.onOrAfter(Version.V_6_0_0)) {\n+ throw e;\n+ }*/\n+\n+ DEPRECATION_LOGGER.deprecated(\"Ignoring unrecognized match_mapping_type: [\" + matchMappingType + \"]\");\n+ // this template is on an unknown type so it will never match anything\n+ // null indicates that the template should be ignored\n+ return null;\n+ }\n+ }\n+ return new DynamicTemplate(name, pathMatch, pathUnmatch, match, unmatch, xcontentFieldType, MatchType.fromString(matchPattern), mapping);\n }\n \n private final String name;\n@@ -122,51 +240,41 @@ public static DynamicTemplate parse(String name, Map<String, Object> conf,\n \n private final MatchType matchType;\n \n- private final String matchMappingType;\n+ private final XContentFieldType xcontentFieldType;\n \n private final Map<String, Object> mapping;\n \n- public DynamicTemplate(String name, String pathMatch, String pathUnmatch, String match, String unmatch, String matchMappingType, MatchType matchType, Map<String, Object> mapping) {\n- if (match == null && pathMatch == null && matchMappingType == null) {\n- throw new MapperParsingException(\"template must have match, path_match or match_mapping_type set\");\n- }\n- if (mapping == null) {\n- throw new MapperParsingException(\"template must have mapping set\");\n- }\n+ private DynamicTemplate(String name, String pathMatch, String pathUnmatch, String match, String unmatch,\n+ XContentFieldType xcontentFieldType, MatchType matchType, Map<String, Object> mapping) {\n this.name = name;\n this.pathMatch = pathMatch;\n this.pathUnmatch = pathUnmatch;\n this.match = match;\n this.unmatch = unmatch;\n this.matchType = matchType;\n- this.matchMappingType = matchMappingType;\n+ this.xcontentFieldType = xcontentFieldType;\n this.mapping = mapping;\n }\n \n public String name() {\n return this.name;\n }\n \n- public boolean match(ContentPath path, String name, String dynamicType) {\n- if (pathMatch != null && !matchType.matches(pathMatch, path.pathAsText(name))) {\n+ public boolean match(String path, String name, XContentFieldType xcontentFieldType) {\n+ if (pathMatch != null && !matchType.matches(pathMatch, path)) {\n return false;\n }\n if (match != null && !matchType.matches(match, name)) {\n return false;\n }\n- if (pathUnmatch != null && matchType.matches(pathUnmatch, path.pathAsText(name))) {\n+ if (pathUnmatch != null && matchType.matches(pathUnmatch, path)) {\n return false;\n }\n if (unmatch != null && matchType.matches(unmatch, name)) {\n return false;\n }\n- if 
(matchMappingType != null) {\n- if (dynamicType == null) {\n- return false;\n- }\n- if (!matchType.matches(matchMappingType, dynamicType)) {\n- return false;\n- }\n+ if (this.xcontentFieldType != null && this.xcontentFieldType != xcontentFieldType) {\n+ return false;\n }\n return true;\n }\n@@ -248,8 +356,10 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n if (pathUnmatch != null) {\n builder.field(\"path_unmatch\", pathUnmatch);\n }\n- if (matchMappingType != null) {\n- builder.field(\"match_mapping_type\", matchMappingType);\n+ if (xcontentFieldType != null) {\n+ builder.field(\"match_mapping_type\", xcontentFieldType);\n+ } else if (match == null && pathMatch == null) {\n+ builder.field(\"match_mapping_type\", \"*\");\n }\n if (matchType != MatchType.SIMPLE) {\n builder.field(\"match_pattern\", matchType);", "filename": "core/src/main/java/org/elasticsearch/index/mapper/object/DynamicTemplate.java", "status": "modified" }, { "diff": "@@ -21,7 +21,6 @@\n \n import org.elasticsearch.Version;\n import org.elasticsearch.common.Nullable;\n-import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n import org.elasticsearch.common.settings.Settings;\n@@ -33,6 +32,7 @@\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext;\n import org.elasticsearch.index.mapper.core.DateFieldMapper;\n+import org.elasticsearch.index.mapper.object.DynamicTemplate.XContentFieldType;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -190,7 +190,9 @@ protected boolean processField(ObjectMapper.Builder builder, String fieldName, O\n String templateName = entry.getKey();\n Map<String, Object> templateParams = (Map<String, Object>) entry.getValue();\n DynamicTemplate template = DynamicTemplate.parse(templateName, templateParams, indexVersionCreated);\n- ((Builder) builder).add(template);\n+ if (template != null) {\n+ ((Builder) builder).add(template);\n+ }\n }\n return true;\n } else if (fieldName.equals(\"date_detection\")) {\n@@ -240,21 +242,8 @@ public FormatDateTimeFormatter[] dynamicDateTimeFormatters() {\n return dynamicDateTimeFormatters;\n }\n \n- public Mapper.Builder findTemplateBuilder(ParseContext context, String name, String matchType) {\n- final String dynamicType;\n- switch (matchType) {\n- case \"string\":\n- // string is a corner case since a json string can either map to a\n- // text or keyword field in elasticsearch. For now we use text when\n- // unspecified. 
For other types, the mapping type matches the json\n- // type so we are fine\n- dynamicType = \"text\";\n- break;\n- default:\n- dynamicType = matchType;\n- break;\n- }\n- return findTemplateBuilder(context, name, dynamicType, matchType);\n+ public Mapper.Builder findTemplateBuilder(ParseContext context, String name, XContentFieldType matchType) {\n+ return findTemplateBuilder(context, name, matchType.defaultMappingType(), matchType);\n }\n \n /**\n@@ -264,7 +253,7 @@ public Mapper.Builder findTemplateBuilder(ParseContext context, String name, Str\n * @param matchType the type of the field in the json document or null if unknown\n * @return a mapper builder, or null if there is no template for such a field\n */\n- public Mapper.Builder findTemplateBuilder(ParseContext context, String name, String dynamicType, String matchType) {\n+ public Mapper.Builder findTemplateBuilder(ParseContext context, String name, String dynamicType, XContentFieldType matchType) {\n DynamicTemplate dynamicTemplate = findTemplate(context.path(), name, matchType);\n if (dynamicTemplate == null) {\n return null;\n@@ -278,9 +267,10 @@ public Mapper.Builder findTemplateBuilder(ParseContext context, String name, Str\n return typeParser.parse(name, dynamicTemplate.mappingForName(name, dynamicType), parserContext);\n }\n \n- private DynamicTemplate findTemplate(ContentPath path, String name, String matchType) {\n+ public DynamicTemplate findTemplate(ContentPath path, String name, XContentFieldType matchType) {\n+ final String pathAsString = path.pathAsText(name);\n for (DynamicTemplate dynamicTemplate : dynamicTemplates) {\n- if (dynamicTemplate.match(path, name, matchType)) {\n+ if (dynamicTemplate.match(pathAsString, name, matchType)) {\n return dynamicTemplate;\n }\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/object/RootObjectMapper.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.index.mapper.object.DynamicTemplate;\n+import org.elasticsearch.index.mapper.object.DynamicTemplate.XContentFieldType;\n import org.elasticsearch.test.ESTestCase;\n \n import java.util.Collections;\n@@ -49,6 +50,31 @@ public void testParseUnknownParam() throws Exception {\n assertEquals(\"{\\\"match_mapping_type\\\":\\\"string\\\",\\\"mapping\\\":{\\\"store\\\":true}}\", builder.string());\n }\n \n+ public void testParseUnknownMatchType() {\n+ Map<String, Object> templateDef = new HashMap<>();\n+ templateDef.put(\"match_mapping_type\", \"short\");\n+ templateDef.put(\"mapping\", Collections.singletonMap(\"store\", true));\n+ // if a wrong match type is specified, we ignore the template\n+ assertNull(DynamicTemplate.parse(\"my_template\", templateDef, Version.V_5_0_0_alpha5));\n+ }\n+\n+ public void testMatchAllTemplate() {\n+ Map<String, Object> templateDef = new HashMap<>();\n+ templateDef.put(\"match_mapping_type\", \"*\");\n+ templateDef.put(\"mapping\", Collections.singletonMap(\"store\", true));\n+ DynamicTemplate template = DynamicTemplate.parse(\"my_template\", templateDef, Version.V_5_0_0_alpha5);\n+ assertTrue(template.match(\"a.b\", \"b\", randomFrom(XContentFieldType.values())));\n+ }\n+\n+ public void testMatchTypeTemplate() {\n+ Map<String, Object> templateDef = new HashMap<>();\n+ templateDef.put(\"match_mapping_type\", \"string\");\n+ templateDef.put(\"mapping\", Collections.singletonMap(\"store\", true));\n+ DynamicTemplate template = 
DynamicTemplate.parse(\"my_template\", templateDef, Version.V_5_0_0_alpha5);\n+ assertTrue(template.match(\"a.b\", \"b\", XContentFieldType.STRING));\n+ assertFalse(template.match(\"a.b\", \"b\", XContentFieldType.BOOLEAN));\n+ }\n+\n public void testSerialization() throws Exception {\n // type-based template\n Map<String, Object> templateDef = new HashMap<>();", "filename": "core/src/test/java/org/elasticsearch/index/mapper/DynamicTemplateTests.java", "status": "modified" } ] }
{ "body": "Currently, `FiltersAggregator::buildEmptyAggregation` does not check if `other_bucket_key` was set on the request. This results in a missing `other` bucket in the response when no documents were matched. Is this intended? \n\nHere's a snippet from a date histogram agg with a filters agg.\n\n```\n{\n \"key_as_string\": \"2015-10-01 00:00:00.000\",\n \"key\": 1443657600000,\n \"doc_count\": 0,\n \"ex\": {\n \"buckets\": {\n \"a: {\n \"doc_count\": 0\n },\n \"b\": {\n \"doc_count\": 0\n },\n \"c\": {\n \"doc_count\": 0\n }\n }\n }\n},\n{\n \"key_as_string\": \"2015-11-01 00:00:00.000\",\n \"key\": 1446336000000,\n \"doc_count\": 5,\n \"ex\": {\n \"buckets\": {\n \"a\": {\n \"doc_count\": 0\n },\n \"b\": {\n \"doc_count\": 3\n },\n \"c\": {\n \"doc_count\": 2\n },\n \"other\": {\n \"doc_count\": 0\n }\n }\n }\n}\n\n\n\n```\n", "comments": [ { "body": "This is a bug and should be an easy fix. Thanks for raising this @pjo256 \n", "created_at": "2016-03-22T15:45:11Z" }, { "body": "Sorry @pjo256 I didn't see your PR before raising my own. Thanks for contributing, your PR looks good so I'll merge that one :smile: \n", "created_at": "2016-03-23T09:17:59Z" }, { "body": "@colings86 Thanks :smiley: \n", "created_at": "2016-03-23T15:59:42Z" } ], "number": 16546, "title": "Missing other bucket in FiltersAggregation" }
{ "body": "Previous to this commit empty buckets (with a doc count of zero) would not show the 'other' bucket in the filters aggregation. Now the buildEmptyBucket() method in FiltersAggregator checks to see if the other bucket is enabled when building an empty aggregation and adds it if it is enabled.\n\nCloses #16546\n", "number": 17271, "review_comments": [], "title": "Other bucket now shows if enabled on empty buckets" }
{ "commits": [ { "message": "Aggregations: Other bucket now shows if enabled on empty buckets\n\nPrevious to this commit empty buckets (with a doc count of zero) would not show the 'other' bucket in the filters aggregation. Now the buildEmptyBucket() method in FiltersAggregator checks to see if the other bucket is enabled when building an empty aggregation and adds it if it is enabled.\n\nCloses #16546" } ], "files": [ { "diff": "@@ -194,6 +194,11 @@ public InternalAggregation buildEmptyAggregation() {\n InternalFilters.InternalBucket bucket = new InternalFilters.InternalBucket(keys[i], 0, subAggs, keyed);\n buckets.add(bucket);\n }\n+ // other bucket\n+ if (showOtherBucket) {\n+ InternalFilters.InternalBucket bucket = new InternalFilters.InternalBucket(otherBucketKey, 0, subAggs, keyed);\n+ buckets.add(bucket);\n+ }\n return new InternalFilters(name, buckets, keyed, pipelineAggregators(), metaData());\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregator.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.search.aggregations.bucket.filters.Filters;\n import org.elasticsearch.search.aggregations.bucket.filters.FiltersAggregator.KeyedFilter;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n+import org.elasticsearch.search.aggregations.bucket.histogram.Histogram.Bucket;\n import org.elasticsearch.search.aggregations.metrics.avg.Avg;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.hamcrest.Matchers;\n@@ -312,6 +313,77 @@ public void testOtherBucket() throws Exception {\n assertThat(bucket.getDocCount(), equalTo((long) numOtherDocs));\n }\n \n+ public void testEmptyBucketWithOtherBucket() throws Exception {\n+ SearchResponse response = client().prepareSearch(\"empty_bucket_idx\")\n+ .addAggregation(histogram(\"histo\").interval(1).field(\"value\")\n+ .subAggregation(filters(\"foo\", new KeyedFilter(\"0\", termQuery(\"value\", 0))).otherBucket(true)))\n+ .execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ Histogram histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo, notNullValue());\n+ assertThat(histo.getName(), equalTo(\"histo\"));\n+\n+ List<? 
extends Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets, notNullValue());\n+ assertThat(buckets.size(), equalTo(3));\n+\n+ Bucket histoBucket = buckets.get(0);\n+ assertThat(histoBucket, notNullValue());\n+ assertThat(histoBucket.getKey(), equalTo(0L));\n+ assertThat(histoBucket.getDocCount(), equalTo(1L));\n+\n+ Filters filters = histoBucket.getAggregations().get(\"foo\");\n+ assertThat(filters, notNullValue());\n+ assertThat(filters.getName(), equalTo(\"foo\"));\n+ assertThat(filters.getBuckets().size(), equalTo(2));\n+\n+ Filters.Bucket filtersBucket = filters.getBucketByKey(\"0\");\n+ assertThat(filtersBucket, Matchers.notNullValue());\n+ assertThat(filtersBucket.getDocCount(), equalTo(1L));\n+\n+ filtersBucket = filters.getBucketByKey(\"_other_\");\n+ assertThat(filtersBucket, Matchers.notNullValue());\n+ assertThat(filtersBucket.getDocCount(), equalTo(0L));\n+\n+ histoBucket = buckets.get(1);\n+ assertThat(histoBucket, notNullValue());\n+ assertThat(histoBucket.getKey(), equalTo(1L));\n+ assertThat(histoBucket.getDocCount(), equalTo(0L));\n+\n+ filters = histoBucket.getAggregations().get(\"foo\");\n+ assertThat(filters, notNullValue());\n+ assertThat(filters.getName(), equalTo(\"foo\"));\n+ assertThat(filters.getBuckets().size(), equalTo(2));\n+\n+ filtersBucket = filters.getBucketByKey(\"0\");\n+ assertThat(filtersBucket, Matchers.notNullValue());\n+ assertThat(filtersBucket.getDocCount(), equalTo(0L));\n+\n+ filtersBucket = filters.getBucketByKey(\"_other_\");\n+ assertThat(filtersBucket, Matchers.notNullValue());\n+ assertThat(filtersBucket.getDocCount(), equalTo(0L));\n+\n+ histoBucket = buckets.get(2);\n+ assertThat(histoBucket, notNullValue());\n+ assertThat(histoBucket.getKey(), equalTo(2L));\n+ assertThat(histoBucket.getDocCount(), equalTo(1L));\n+\n+ filters = histoBucket.getAggregations().get(\"foo\");\n+ assertThat(filters, notNullValue());\n+ assertThat(filters.getName(), equalTo(\"foo\"));\n+ assertThat(filters.getBuckets().size(), equalTo(2));\n+\n+ filtersBucket = filters.getBucketByKey(\"0\");\n+ assertThat(filtersBucket, Matchers.notNullValue());\n+ assertThat(filtersBucket.getDocCount(), equalTo(0L));\n+\n+ filtersBucket = filters.getBucketByKey(\"_other_\");\n+ assertThat(filtersBucket, Matchers.notNullValue());\n+ assertThat(filtersBucket.getDocCount(), equalTo(1L));\n+ }\n+\n public void testOtherNamedBucket() throws Exception {\n SearchResponse response = client().prepareSearch(\"idx\")\n .addAggregation(filters(\"tags\", new KeyedFilter(\"tag1\", termQuery(\"tag\", \"tag1\")),", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/FiltersIT.java", "status": "modified" } ] }
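A REST-level sketch of the new integration test above (the `empty_bucket_idx` setup itself is not shown and is an assumption here): after the fix, the empty histogram bucket also reports an `_other_` bucket with a doc count of zero instead of omitting it.

```bash
# REST sketch of the scenario exercised by testEmptyBucketWithOtherBucket;
# assumes empty_bucket_idx contains a numeric "value" field with a gap at 1.
curl -XPOST -H 'Content-Type: application/json' "http://localhost:9200/empty_bucket_idx/_search?pretty" -d '{
  "size": 0,
  "aggs": {
    "histo": {
      "histogram": { "field": "value", "interval": 1 },
      "aggs": {
        "foo": {
          "filters": {
            "other_bucket": true,
            "filters": { "0": { "term": { "value": 0 } } }
          }
        }
      }
    }
  }
}'
```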
{ "body": "While investigating #17319, I had started two nodes built from master, and indexed some data. I thought I killed these nodes but one of them was still running. I started another node and it produced the following in its logs:\n\n```\n[2016-03-30 22:51:39,322][WARN ][gateway ] [Ani-Mator] [[i/CetR0D_FTdmYi5IcTE55EA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\n[2016-03-30 22:51:39,335][WARN ][gateway ] [Ani-Mator] [[i/CetR0D_FTdmYi5IcTE55EA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\n[2016-03-30 22:51:39,375][WARN ][gateway ] [Ani-Mator] [[i/CetR0D_FTdmYi5IcTE55EA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\n[2016-03-30 22:51:39,495][WARN ][gateway ] [Ani-Mator] [[i/CetR0D_FTdmYi5IcTE55EA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\n[2016-03-30 22:51:39,524][WARN ][gateway ] [Ani-Mator] [[i/CetR0D_FTdmYi5IcTE55EA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\n[2016-03-30 22:51:39,543][WARN ][gateway ] [Ani-Mator] [[i/CetR0D_FTdmYi5IcTE55EA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\n[2016-03-30 22:51:39,571][WARN ][gateway ] [Ani-Mator] [[i/CetR0D_FTdmYi5IcTE55EA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\n[2016-03-30 22:51:39,630][WARN ][gateway ] [Ani-Mator] [[i/CetR0D_FTdmYi5IcTE55EA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\n[2016-03-30 22:51:40,049][WARN ][gateway ] [Ani-Mator] [[i/CetR0D_FTdmYi5IcTE55EA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\n[2016-03-30 22:51:40,183][WARN ][gateway ] [Ani-Mator] [[i/CetR0D_FTdmYi5IcTE55EA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\n[2016-03-30 22:51:40,214][WARN ][gateway ] [Ani-Mator] [[i/CetR0D_FTdmYi5IcTE55EA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\n[2016-03-30 22:51:40,251][WARN ][gateway ] [Ani-Mator] [[i/CetR0D_FTdmYi5IcTE55EA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\n[2016-03-30 22:51:40,573][WARN ][gateway ] [Ani-Mator] [[i/CetR0D_FTdmYi5IcTE55EA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\n```\n\nI tried to reproduce this from a fresh cluster, but it did not immediately reproduce. I restored the shared data directory for this cluster from backup and the issue appeared again so there appears to be something special here.\n1. Package the elasticsearch tar from master\n2. Extract the tar into some location `/path/to/elasticsearch`\n3. Extract the attached [data.tar.gz](https://github.com/elastic/elasticsearch/files/196920/data.tar.gz) into `/path/to/elasticsearch/data`\n4. Start Elasticsearch twice from `/path/to/elasticsearch/bin/elasticsearch`\n5. :boom: \n", "comments": [ { "body": "I've verified that this happens locally for me as well. Will investigate.\n", "created_at": "2016-03-31T03:18:11Z" }, { "body": "@jasontedor These dangling indices import warnings are legitimate. 
The issue is that in the data directory for node 1, there are 3 indices, two of which have the same name `i`. One of them is `[i/bDwGVomkT3-CML0uG7o_tg]`, the other is `[i/CetR0D_FTdmYi5IcTE55EA]`. The question is, how was it possible to get two indices of the same name in the cluster state for node 1 to begin with?\n", "created_at": "2016-03-31T16:15:28Z" }, { "body": "> The issue is that in the data directory for node 1, there are 3 indices, two of which have the same name `i`.\n\nThat's not good.\n\n> The question is, how was it possible to get two indices of the same name in the cluster state for node 1 to begin with?\n\nThat is a very good question, this looks bad. :frowning: \n", "created_at": "2016-03-31T16:30:00Z" }, { "body": "I wonder if the index folder name upgrade that @areek worked on could have contributed?\n", "created_at": "2016-03-31T16:41:21Z" }, { "body": "> I wonder if the index folder name upgrade that @areek worked on could have contributed?\n\nI'm wondering that too, but I'm not sure how. To be clear, this was a fresh cluster and so not upgraded.\n", "created_at": "2016-03-31T16:48:00Z" }, { "body": "The following scenario reproduces this issue:\n1. Start a master node `M` and another node `D`.\n2. Create an index named `idx`.\n3. Shutdown node `D`.\n4. Delete index `idx`.\n5. Create again an index named `idx` (it will have a different uuid).\n6. Start back up node `D`.\n\nAt this point, node `D` will show the log messages above, because it has the original `idx` index on its disk and its trying to import it as a dangling index because its not part of the cluster state. It also leads to the same data directory as what @jasontedor linked to in this issue, with `M` having one index in its data directory and `D` having two.\n\nThis issue will be resolved by PR #17265 \n", "created_at": "2016-03-31T19:04:27Z" }, { "body": "Still happening on 5.0.2.", "created_at": "2016-12-07T14:26:27Z" }, { "body": "stilling happening 5.1.1", "created_at": "2017-01-13T23:52:32Z" }, { "body": "Showing up and only saying something still happening is not helpful. If you think you are experiencing a bug, please open a new issue with a complete reproduction. If you are unsure or have a question, ask on the [forum](https://discuss.elastic.co). Please note that what was reported here was on a pre-production snapshot of Elasticsearch 5.0.0 before a feature was added to handle situations like this more gracefully.", "created_at": "2017-01-15T03:47:56Z" }, { "body": "I tried reproducing using the above steps and I could not get the above log messages to appear. ", "created_at": "2017-01-15T14:47:54Z" }, { "body": "It happens often for us that we just restart the cluster and after that we have a lot of those error logs. No idea how to reproduce but it is definitely a bug - we just set cluster.routing.allocation.enable to none, shut down the nodes one by one. Do some linux patching or whatever. Then start the nodes.", "created_at": "2017-08-18T06:58:12Z" }, { "body": "still happening in Elasticsearch 6.0.0-beta2. \r\nI'm not sure why @abeyad closed this issue if it's still occurring on multiple releases?\r\n\r\nSteps to reproduce: \r\n* Set up an ES cluster\r\n* Create some indices and store some data\r\n* Restart the cluster\r\n* 💥 ", "created_at": "2017-09-06T18:56:55Z" }, { "body": "@agolomoodysaada, cause they don't give a shit. 
ES is a bitch to setup and manage.", "created_at": "2017-09-06T19:02:37Z" }, { "body": "@agolomoodysaada I think that you're misunderstanding the issue here (as are others claiming this issue still persists). Dangling indices are a thing that can happen, and we log about them, that's okay. The very specific issue here is the *repeated* logging for a single index, this is fixed as far as we know. The steps that you provided to reproduce are not sufficient to investigate this issue. If you think otherwise, please:\r\n - provide logging that shows this *specific* issue is still occurring\r\n - steps to reproduce that are more detailed then what every user does with Elasticsearch every day", "created_at": "2017-09-06T19:19:25Z" }, { "body": "@jasontedor \r\n\r\nBelow are some logs for ya representing this _very specific issue here_. \r\n\r\n```\r\n[2017-09-06T20:00:17,520][INFO ][o.e.c.s.MasterService ] [Dc_-1x_] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {Dc_-1x_}{Dc_-1x_0TlmiMLy_8mXVeQ}{agPujWkGRXWAJN1c3iT2_g}{10.0.0.4}{10.0.0.4:9300}\r\n[2017-09-06T20:00:17,529][INFO ][o.e.c.s.ClusterApplierService] [Dc_-1x_] new_master {Dc_-1x_}{Dc_-1x_0TlmiMLy_8mXVeQ}{agPujWkGRXWAJN1c3iT2_g}{10.0.0.4}{10.0.0.4:9300}, reason: apply cluster state (from master [master {Dc_-1x_}{Dc_-1x_0TlmiMLy_8mXVeQ}{agPujWkGRXWAJN1c3iT2_g}{10.0.0.4}{10.0.0.4:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])\r\n[2017-09-06T20:00:17,546][INFO ][o.e.h.n.Netty4HttpServerTransport] [Dc_-1x_] publish_address {10.0.0.4:9200}, bound_addresses {0.0.0.0:9200}\r\n[2017-09-06T20:00:17,546][INFO ][o.e.n.Node ] [Dc_-1x_] started\r\n[2017-09-06T20:00:21,570][WARN ][o.e.g.DanglingIndicesState] [Dc_-1x_] [[.kibana/_pL7GG3rRfK2RpigeqJ5tA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\r\n[2017-09-06T20:00:21,600][INFO ][o.e.g.GatewayService ] [Dc_-1x_] recovered [8] indices into cluster_state\r\n[2017-09-06T20:00:21,807][WARN ][o.e.g.DanglingIndicesState] [Dc_-1x_] [[.kibana/_pL7GG3rRfK2RpigeqJ5tA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\r\n[2017-09-06T20:00:22,297][WARN ][o.e.g.DanglingIndicesState] [Dc_-1x_] [[.kibana/_pL7GG3rRfK2RpigeqJ5tA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\r\n[2017-09-06T20:00:22,525][WARN ][o.e.g.DanglingIndicesState] [Dc_-1x_] [[.kibana/_pL7GG3rRfK2RpigeqJ5tA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\r\n[2017-09-06T20:00:22,784][WARN ][o.e.g.DanglingIndicesState] [Dc_-1x_] [[.kibana/_pL7GG3rRfK2RpigeqJ5tA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\r\n[2017-09-06T20:00:22,954][WARN ][o.e.g.DanglingIndicesState] [Dc_-1x_] [[.kibana/_pL7GG3rRfK2RpigeqJ5tA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\r\n[2017-09-06T20:00:23,089][WARN ][o.e.g.DanglingIndicesState] [Dc_-1x_] [[.kibana/_pL7GG3rRfK2RpigeqJ5tA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\r\n[2017-09-06T20:00:23,191][WARN ][o.e.g.DanglingIndicesState] [Dc_-1x_] [[.kibana/_pL7GG3rRfK2RpigeqJ5tA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\r\n[2017-09-06T20:00:23,365][WARN ][o.e.g.DanglingIndicesState] [Dc_-1x_] 
[[.kibana/_pL7GG3rRfK2RpigeqJ5tA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\r\n[2017-09-06T20:00:23,473][WARN ][o.e.g.DanglingIndicesState] [Dc_-1x_] [[.kibana/_pL7GG3rRfK2RpigeqJ5tA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\r\n[2017-09-06T20:00:23,509][WARN ][o.e.g.DanglingIndicesState] [Dc_-1x_] [[.kibana/_pL7GG3rRfK2RpigeqJ5tA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\r\n[2017-09-06T20:00:23,526][WARN ][o.e.g.DanglingIndicesState] [Dc_-1x_] [[.kibana/_pL7GG3rRfK2RpigeqJ5tA]] can not be imported as a dangling index, as index with same name already exists in cluster metadata\r\n```\r\n\r\nIf dangling indices are _a thing that can happen, and we log about them, that's okay_, do you have any recommendations on preventing them from happening? How do we get Elasticsearch to start in this corrupted state?\r\n", "created_at": "2017-09-06T20:06:10Z" }, { "body": "I was able to debug my issues by calling these endpoints\r\n\r\n```\r\nGET _cluster/health\r\nGET _cluster/allocation/explain\r\n```\r\n\r\nThe cluster actually healed itself after waiting for some time. My problem is that I had healthchecks that were triggering auto-restarts of ES containers which prevented shards from being copied across the nodes in the cluster. Restarting the cluster now still triggers these errors but at least 5 mins later everything gets back to green.", "created_at": "2017-09-06T20:52:59Z" }, { "body": "It's not at all an indication of corruption. It often arises when users mess around with their data folders (for example, copying the data directory for an existing index). Basically, it's preventing you from importing an index with the same name twice. Would you please share:\r\n - the output of `/_cat/indices` against your cluster\r\n - the output of `tree /path/to/your/data/folder` for a node that holds shards for the `.kibana` index", "created_at": "2017-09-06T20:55:32Z" }, { "body": "/_cat/indices\r\n```\r\ngreen open logstash-2017.09.01 y4kO2RguRieheSUA2yUTgw 5 1 204886 0 1gb 527.8mb\r\ngreen open .monitoring-logstash-6-2017.09.07 YACFcRiJQLOvRHS5-EB2yg 1 1 407719 0 70.5mb 35.1mb\r\ngreen open logstash-2017.09.06 njw3jiVnTNCT2sVS5BwrJQ 5 1 178400584 0 63.5gb 31.7gb\r\ngreen open .kibana GuTek3p2RjmR8jGaTe6Hrw 1 1 11 2 109.7kb 54.8kb\r\ngreen open logstash-2017.09.04 819wJr7bT8GeP2Ezqmq4qQ 5 1 610296 0 755.4mb 377.7mb\r\ngreen open .monitoring-logstash-6-2017.09.06 vyJoAASjT8apt4J-WA2rSg 1 1 54030 0 9mb 4.4mb\r\ngreen open .monitoring-es-6-2017.09.07 R56iVv4FQT-ZmwGGGR8PyA 1 1 85377 350 127.7mb 63.8mb\r\ngreen open .monitoring-es-6-2017.09.06 BPCBp1G-ScmQIgMPjs6sXg 1 1 17072 112 23mb 11.5mb\r\ngreen open logstash-2017.09.02 F0Ycp-PESFWfV7J5p67c3g 5 1 6079617 0 5.4gb 2.7gb\r\ngreen open logstash-2017.09.03 LHo5QAgwSImYT-_WioKGkA 5 1 221204 0 247.8mb 123.9mb\r\ngreen open logstash-2017.09.07 6wASTQgvRtypYGI0JtSOYw 5 1 46941036 0 22.5gb 11.3gb\r\n```\r\ntree /path/to/your/data/folder\r\n```\r\n| |-- GuTek3p2RjmR8jGaTe6Hrw\r\n| | `-- _state\r\n| | `-- state-98.st\r\n```\r\n", "created_at": "2017-09-07T14:24:25Z" }, { "body": "Is that the entire `tree` output? Where are the other indices? Are there other nodes in the cluster? Have you already cleaned up the situation?", "created_at": "2017-09-07T14:26:39Z" }, { "body": "no I just pasted the part related to .kibana. 
Here is the full output\r\n\r\n<details>\r\n<pre>\r\n.\r\n|-- _state\r\n| |-- global-83.st\r\n| `-- node-78.st\r\n|-- indices\r\n| |-- 6wASTQgvRtypYGI0JtSOYw\r\n| | |-- 0\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _6nc.dii\r\n| | | | |-- _6nc.dim\r\n| | | | |-- _6nc.fdt\r\n| | | | |-- _6nc.fdx\r\n| | | | |-- _6nc.fnm\r\n| | | | |-- _6nc.si\r\n| | | | |-- _6nc_Lucene50_0.doc\r\n| | | | |-- _6nc_Lucene50_0.pos\r\n| | | | |-- _6nc_Lucene50_0.tim\r\n| | | | |-- _6nc_Lucene50_0.tip\r\n| | | | |-- _6nc_Lucene70_0.dvd\r\n| | | | |-- _6nc_Lucene70_0.dvm\r\n| | | | |-- _98f.dii\r\n| | | | |-- _98f.dim\r\n| | | | |-- _98f.fdt\r\n| | | | |-- _98f.fdx\r\n| | | | |-- _98f.fnm\r\n| | | | |-- _98f.si\r\n| | | | |-- _98f_Lucene50_0.doc\r\n| | | | |-- _98f_Lucene50_0.pos\r\n| | | | |-- _98f_Lucene50_0.tim\r\n| | | | |-- _98f_Lucene50_0.tip\r\n| | | | |-- _98f_Lucene70_0.dvd\r\n| | | | |-- _98f_Lucene70_0.dvm\r\n| | | | |-- _ayz.dii\r\n| | | | |-- _ayz.dim\r\n| | | | |-- _ayz.fdt\r\n| | | | |-- _ayz.fdx\r\n| | | | |-- _ayz.fnm\r\n| | | | |-- _ayz.si\r\n| | | | |-- _ayz_Lucene50_0.doc\r\n| | | | |-- _ayz_Lucene50_0.pos\r\n| | | | |-- _ayz_Lucene50_0.tim\r\n| | | | |-- _ayz_Lucene50_0.tip\r\n| | | | |-- _ayz_Lucene70_0.dvd\r\n| | | | |-- _ayz_Lucene70_0.dvm\r\n| | | | |-- _bso.cfe\r\n| | | | |-- _bso.cfs\r\n| | | | |-- _bso.si\r\n| | | | |-- _dfn.cfe\r\n| | | | |-- _dfn.cfs\r\n| | | | |-- _dfn.si\r\n| | | | |-- _dm1.cfe\r\n| | | | |-- _dm1.cfs\r\n| | | | |-- _dm1.si\r\n| | | | |-- _dsq.cfe\r\n| | | | |-- _dsq.cfs\r\n| | | | |-- _dsq.si\r\n| | | | |-- _dtt.cfe\r\n| | | | |-- _dtt.cfs\r\n| | | | |-- _dtt.si\r\n| | | | |-- _duc.dii\r\n| | | | |-- _duc.dim\r\n| | | | |-- _duc.fdt\r\n| | | | |-- _duc.fdx\r\n| | | | |-- _duc.fnm\r\n| | | | |-- _duc.si\r\n| | | | |-- _duc_Lucene50_0.doc\r\n| | | | |-- _duc_Lucene50_0.pos\r\n| | | | |-- _duc_Lucene50_0.tim\r\n| | | | |-- _duc_Lucene50_0.tip\r\n| | | | |-- _duc_Lucene70_0.dvd\r\n| | | | |-- _duc_Lucene70_0.dvm\r\n| | | | |-- _e1l.cfe\r\n| | | | |-- _e1l.cfs\r\n| | | | |-- _e1l.si\r\n| | | | |-- _e8s.cfe\r\n| | | | |-- _e8s.cfs\r\n| | | | |-- _e8s.si\r\n| | | | |-- _eg3.cfe\r\n| | | | |-- _eg3.cfs\r\n| | | | |-- _eg3.si\r\n| | | | |-- _egl.cfe\r\n| | | | |-- _egl.cfs\r\n| | | | |-- _egl.si\r\n| | | | |-- _eiz.cfe\r\n| | | | |-- _eiz.cfs\r\n| | | | |-- _eiz.si\r\n| | | | |-- _ej4.cfe\r\n| | | | |-- _ej4.cfs\r\n| | | | |-- _ej4.si\r\n| | | | |-- _ejf.cfe\r\n| | | | |-- _ejf.cfs\r\n| | | | |-- _ejf.si\r\n| | | | |-- _ejh.cfe\r\n| | | | |-- _ejh.cfs\r\n| | | | |-- _ejh.si\r\n| | | | |-- _ejo.cfe\r\n| | | | |-- _ejo.cfs\r\n| | | | |-- _ejo.si\r\n| | | | |-- _ejx.cfe\r\n| | | | |-- _ejx.cfs\r\n| | | | |-- _ejx.si\r\n| | | | |-- _ejz.cfe\r\n| | | | |-- _ejz.cfs\r\n| | | | |-- _ejz.si\r\n| | | | |-- _ek9.cfe\r\n| | | | |-- _ek9.cfs\r\n| | | | |-- _ek9.si\r\n| | | | |-- _eka.cfe\r\n| | | | |-- _eka.cfs\r\n| | | | |-- _eka.si\r\n| | | | |-- _eki.cfe\r\n| | | | |-- _eki.cfs\r\n| | | | |-- _eki.si\r\n| | | | |-- _ekj.cfe\r\n| | | | |-- _ekj.cfs\r\n| | | | |-- _ekj.si\r\n| | | | |-- _ekk.cfe\r\n| | | | |-- _ekk.cfs\r\n| | | | |-- _ekk.si\r\n| | | | |-- _ekl.cfe\r\n| | | | |-- _ekl.cfs\r\n| | | | |-- _ekl.si\r\n| | | | |-- _ekm.cfe\r\n| | | | |-- _ekm.cfs\r\n| | | | |-- _ekm.si\r\n| | | | |-- _ekn.cfe\r\n| | | | |-- _ekn.cfs\r\n| | | | |-- _ekn.si\r\n| | | | |-- _eko.cfe\r\n| | | | |-- _eko.cfs\r\n| | | | |-- _eko.si\r\n| | | | |-- _epj.cfe\r\n| | | | |-- _epj.cfs\r\n| | | | |-- _epj.si\r\n| | | | |-- _ex9.cfe\r\n| | | | |-- _ex9.cfs\r\n| | | | 
|-- _ex9.si\r\n| | | | |-- _f2u.cfe\r\n| | | | |-- _f2u.cfs\r\n| | | | |-- _f2u.si\r\n| | | | |-- _f8n.cfe\r\n| | | | |-- _f8n.cfs\r\n| | | | |-- _f8n.si\r\n| | | | |-- _f98.cfe\r\n| | | | |-- _f98.cfs\r\n| | | | |-- _f98.si\r\n| | | | |-- _f9s.cfe\r\n| | | | |-- _f9s.cfs\r\n| | | | |-- _f9s.si\r\n| | | | |-- _fab.cfe\r\n| | | | |-- _fab.cfs\r\n| | | | |-- _fab.si\r\n| | | | |-- _fax.cfe\r\n| | | | |-- _fax.cfs\r\n| | | | |-- _fax.si\r\n| | | | |-- _fb7.cfe\r\n| | | | |-- _fb7.cfs\r\n| | | | |-- _fb7.si\r\n| | | | |-- _fbf.cfe\r\n| | | | |-- _fbf.cfs\r\n| | | | |-- _fbf.si\r\n| | | | |-- _fbg.cfe\r\n| | | | |-- _fbg.cfs\r\n| | | | |-- _fbg.si\r\n| | | | |-- _fbh.cfe\r\n| | | | |-- _fbh.cfs\r\n| | | | |-- _fbh.si\r\n| | | | |-- _fbi.cfe\r\n| | | | |-- _fbi.cfs\r\n| | | | |-- _fbi.si\r\n| | | | |-- _fbj.cfe\r\n| | | | |-- _fbj.cfs\r\n| | | | |-- _fbj.si\r\n| | | | |-- _fbk.cfe\r\n| | | | |-- _fbk.cfs\r\n| | | | |-- _fbk.si\r\n| | | | |-- _fbl.cfe\r\n| | | | |-- _fbl.cfs\r\n| | | | |-- _fbl.si\r\n| | | | |-- _fbm.fdt\r\n| | | | |-- _fbm.fdx\r\n| | | | |-- _fbn.fdt\r\n| | | | |-- _fbn.fdx\r\n| | | | |-- segments_e\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-100.ckp\r\n| | | |-- translog-100.tlog\r\n| | | |-- translog-101.ckp\r\n| | | |-- translog-101.tlog\r\n| | | |-- translog-102.tlog\r\n| | | |-- translog-94.ckp\r\n| | | |-- translog-94.tlog\r\n| | | |-- translog-95.ckp\r\n| | | |-- translog-95.tlog\r\n| | | |-- translog-96.ckp\r\n| | | |-- translog-96.tlog\r\n| | | |-- translog-97.ckp\r\n| | | |-- translog-97.tlog\r\n| | | |-- translog-98.ckp\r\n| | | |-- translog-98.tlog\r\n| | | |-- translog-99.ckp\r\n| | | |-- translog-99.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 2\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _a92.dii\r\n| | | | |-- _a92.dim\r\n| | | | |-- _a92.fdt\r\n| | | | |-- _a92.fdx\r\n| | | | |-- _a92.fnm\r\n| | | | |-- _a92.si\r\n| | | | |-- _a92_Lucene50_0.doc\r\n| | | | |-- _a92_Lucene50_0.pos\r\n| | | | |-- _a92_Lucene50_0.tim\r\n| | | | |-- _a92_Lucene50_0.tip\r\n| | | | |-- _a92_Lucene70_0.dvd\r\n| | | | |-- _a92_Lucene70_0.dvm\r\n| | | | |-- _bf2.dii\r\n| | | | |-- _bf2.dim\r\n| | | | |-- _bf2.fdt\r\n| | | | |-- _bf2.fdx\r\n| | | | |-- _bf2.fnm\r\n| | | | |-- _bf2.si\r\n| | | | |-- _bf2_Lucene50_0.doc\r\n| | | | |-- _bf2_Lucene50_0.pos\r\n| | | | |-- _bf2_Lucene50_0.tim\r\n| | | | |-- _bf2_Lucene50_0.tip\r\n| | | | |-- _bf2_Lucene70_0.dvd\r\n| | | | |-- _bf2_Lucene70_0.dvm\r\n| | | | |-- _dsc.dii\r\n| | | | |-- _dsc.dim\r\n| | | | |-- _dsc.fdt\r\n| | | | |-- _dsc.fdx\r\n| | | | |-- _dsc.fnm\r\n| | | | |-- _dsc.si\r\n| | | | |-- _dsc_Lucene50_0.doc\r\n| | | | |-- _dsc_Lucene50_0.pos\r\n| | | | |-- _dsc_Lucene50_0.tim\r\n| | | | |-- _dsc_Lucene50_0.tip\r\n| | | | |-- _dsc_Lucene70_0.dvd\r\n| | | | |-- _dsc_Lucene70_0.dvm\r\n| | | | |-- _f6n.cfe\r\n| | | | |-- _f6n.cfs\r\n| | | | |-- _f6n.si\r\n| | | | |-- _for.cfe\r\n| | | | |-- _for.cfs\r\n| | | | |-- _for.si\r\n| | | | |-- _fvg.cfe\r\n| | | | |-- _fvg.cfs\r\n| | | | |-- _fvg.si\r\n| | | | |-- _gh4.cfe\r\n| | | | |-- _gh4.cfs\r\n| | | | |-- _gh4.si\r\n| | | | |-- _gm2.dii\r\n| | | | |-- _gm2.dim\r\n| | | | |-- _gm2.fdt\r\n| | | | |-- _gm2.fdx\r\n| | | | |-- _gm2.fnm\r\n| | | | |-- _gm2.si\r\n| | | | |-- _gm2_Lucene50_0.doc\r\n| | | | |-- _gm2_Lucene50_0.pos\r\n| | | | |-- _gm2_Lucene50_0.tim\r\n| | | | |-- _gm2_Lucene50_0.tip\r\n| | | | |-- _gm2_Lucene70_0.dvd\r\n| | | | |-- _gm2_Lucene70_0.dvm\r\n| | | | |-- _gt2.cfe\r\n| | | | |-- _gt2.cfs\r\n| | | | |-- 
_gt2.si\r\n| | | | |-- _gy3.cfe\r\n| | | | |-- _gy3.cfs\r\n| | | | |-- _gy3.si\r\n| | | | |-- _h00.cfe\r\n| | | | |-- _h00.cfs\r\n| | | | |-- _h00.si\r\n| | | | |-- _h0x.cfe\r\n| | | | |-- _h0x.cfs\r\n| | | | |-- _h0x.si\r\n| | | | |-- _h29.cfe\r\n| | | | |-- _h29.cfs\r\n| | | | |-- _h29.si\r\n| | | | |-- _h2d.cfe\r\n| | | | |-- _h2d.cfs\r\n| | | | |-- _h2d.si\r\n| | | | |-- _h2h.cfe\r\n| | | | |-- _h2h.cfs\r\n| | | | |-- _h2h.si\r\n| | | | |-- _h2i.cfe\r\n| | | | |-- _h2i.cfs\r\n| | | | |-- _h2i.si\r\n| | | | |-- _h2t.cfe\r\n| | | | |-- _h2t.cfs\r\n| | | | |-- _h2t.si\r\n| | | | |-- _h32.cfe\r\n| | | | |-- _h32.cfs\r\n| | | | |-- _h32.si\r\n| | | | |-- _h3b.cfe\r\n| | | | |-- _h3b.cfs\r\n| | | | |-- _h3b.si\r\n| | | | |-- _h3d.cfe\r\n| | | | |-- _h3d.cfs\r\n| | | | |-- _h3d.si\r\n| | | | |-- _h3n.cfe\r\n| | | | |-- _h3n.cfs\r\n| | | | |-- _h3n.si\r\n| | | | |-- _h3q.cfe\r\n| | | | |-- _h3q.cfs\r\n| | | | |-- _h3q.si\r\n| | | | |-- _h3r.cfe\r\n| | | | |-- _h3r.cfs\r\n| | | | |-- _h3r.si\r\n| | | | |-- _h3x.cfe\r\n| | | | |-- _h3x.cfs\r\n| | | | |-- _h3x.si\r\n| | | | |-- _h3y.cfe\r\n| | | | |-- _h3y.cfs\r\n| | | | |-- _h3y.si\r\n| | | | |-- _h3z.cfe\r\n| | | | |-- _h3z.cfs\r\n| | | | |-- _h3z.si\r\n| | | | |-- _h40.cfe\r\n| | | | |-- _h40.cfs\r\n| | | | |-- _h40.si\r\n| | | | |-- _h96.cfe\r\n| | | | |-- _h96.cfs\r\n| | | | |-- _h96.si\r\n| | | | |-- _hbd.cfe\r\n| | | | |-- _hbd.cfs\r\n| | | | |-- _hbd.si\r\n| | | | |-- _hic.cfe\r\n| | | | |-- _hic.cfs\r\n| | | | |-- _hic.si\r\n| | | | |-- _hly.cfe\r\n| | | | |-- _hly.cfs\r\n| | | | |-- _hly.si\r\n| | | | |-- _hq3.cfe\r\n| | | | |-- _hq3.cfs\r\n| | | | |-- _hq3.si\r\n| | | | |-- _hr8.cfe\r\n| | | | |-- _hr8.cfs\r\n| | | | |-- _hr8.si\r\n| | | | |-- _hrj.cfe\r\n| | | | |-- _hrj.cfs\r\n| | | | |-- _hrj.si\r\n| | | | |-- _hrs.cfe\r\n| | | | |-- _hrs.cfs\r\n| | | | |-- _hrs.si\r\n| | | | |-- _hs2.cfe\r\n| | | | |-- _hs2.cfs\r\n| | | | |-- _hs2.si\r\n| | | | |-- _hth.cfe\r\n| | | | |-- _hth.cfs\r\n| | | | |-- _hth.si\r\n| | | | |-- _htp.cfe\r\n| | | | |-- _htp.cfs\r\n| | | | |-- _htp.si\r\n| | | | |-- _hub.cfe\r\n| | | | |-- _hub.cfs\r\n| | | | |-- _hub.si\r\n| | | | |-- _huu.cfe\r\n| | | | |-- _huu.cfs\r\n| | | | |-- _huu.si\r\n| | | | |-- _huv.cfe\r\n| | | | |-- _huv.cfs\r\n| | | | |-- _huv.si\r\n| | | | |-- _huw.cfe\r\n| | | | |-- _huw.cfs\r\n| | | | |-- _huw.si\r\n| | | | |-- _hux.cfe\r\n| | | | |-- _hux.cfs\r\n| | | | |-- _hux.si\r\n| | | | |-- _huy.cfe\r\n| | | | |-- _huy.cfs\r\n| | | | |-- _huy.si\r\n| | | | |-- _huz.cfe\r\n| | | | |-- _huz.cfs\r\n| | | | |-- _huz.si\r\n| | | | |-- _hv0.fdt\r\n| | | | |-- _hv0.fdx\r\n| | | | |-- _hv1.fdt\r\n| | | | |-- _hv1.fdx\r\n| | | | |-- segments_d\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-100.ckp\r\n| | | |-- translog-100.tlog\r\n| | | |-- translog-101.ckp\r\n| | | |-- translog-101.tlog\r\n| | | |-- translog-102.tlog\r\n| | | |-- translog-93.ckp\r\n| | | |-- translog-93.tlog\r\n| | | |-- translog-94.ckp\r\n| | | |-- translog-94.tlog\r\n| | | |-- translog-95.ckp\r\n| | | |-- translog-95.tlog\r\n| | | |-- translog-96.ckp\r\n| | | |-- translog-96.tlog\r\n| | | |-- translog-97.ckp\r\n| | | |-- translog-97.tlog\r\n| | | |-- translog-98.ckp\r\n| | | |-- translog-98.tlog\r\n| | | |-- translog-99.ckp\r\n| | | |-- translog-99.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 3\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _8bx.dii\r\n| | | | |-- _8bx.dim\r\n| | | | |-- _8bx.fdt\r\n| | | | |-- _8bx.fdx\r\n| | | | |-- _8bx.fnm\r\n| | | | |-- _8bx.si\r\n| | 
| | |-- _8bx_Lucene50_0.doc\r\n| | | | |-- _8bx_Lucene50_0.pos\r\n| | | | |-- _8bx_Lucene50_0.tim\r\n| | | | |-- _8bx_Lucene50_0.tip\r\n| | | | |-- _8bx_Lucene70_0.dvd\r\n| | | | |-- _8bx_Lucene70_0.dvm\r\n| | | | |-- _a1l.cfe\r\n| | | | |-- _a1l.cfs\r\n| | | | |-- _a1l.si\r\n| | | | |-- _cvv.cfe\r\n| | | | |-- _cvv.cfs\r\n| | | | |-- _cvv.si\r\n| | | | |-- _d1r.cfe\r\n| | | | |-- _d1r.cfs\r\n| | | | |-- _d1r.si\r\n| | | | |-- _dgh.dii\r\n| | | | |-- _dgh.dim\r\n| | | | |-- _dgh.fdt\r\n| | | | |-- _dgh.fdx\r\n| | | | |-- _dgh.fnm\r\n| | | | |-- _dgh.si\r\n| | | | |-- _dgh_Lucene50_0.doc\r\n| | | | |-- _dgh_Lucene50_0.pos\r\n| | | | |-- _dgh_Lucene50_0.tim\r\n| | | | |-- _dgh_Lucene50_0.tip\r\n| | | | |-- _dgh_Lucene70_0.dvd\r\n| | | | |-- _dgh_Lucene70_0.dvm\r\n| | | | |-- _dne.cfe\r\n| | | | |-- _dne.cfs\r\n| | | | |-- _dne.si\r\n| | | | |-- _dya.cfe\r\n| | | | |-- _dya.cfs\r\n| | | | |-- _dya.si\r\n| | | | |-- _e6m.cfe\r\n| | | | |-- _e6m.cfs\r\n| | | | |-- _e6m.si\r\n| | | | |-- _edj.cfe\r\n| | | | |-- _edj.cfs\r\n| | | | |-- _edj.si\r\n| | | | |-- _ek8.cfe\r\n| | | | |-- _ek8.cfs\r\n| | | | |-- _ek8.si\r\n| | | | |-- _elw.cfe\r\n| | | | |-- _elw.cfs\r\n| | | | |-- _elw.si\r\n| | | | |-- _eo4.cfe\r\n| | | | |-- _eo4.cfs\r\n| | | | |-- _eo4.si\r\n| | | | |-- _eqm.cfe\r\n| | | | |-- _eqm.cfs\r\n| | | | |-- _eqm.si\r\n| | | | |-- _esu.cfe\r\n| | | | |-- _esu.cfs\r\n| | | | |-- _esu.si\r\n| | | | |-- _et4.cfe\r\n| | | | |-- _et4.cfs\r\n| | | | |-- _et4.si\r\n| | | | |-- _etp.cfe\r\n| | | | |-- _etp.cfs\r\n| | | | |-- _etp.si\r\n| | | | |-- _eu8.cfe\r\n| | | | |-- _eu8.cfs\r\n| | | | |-- _eu8.si\r\n| | | | |-- _eus.cfe\r\n| | | | |-- _eus.cfs\r\n| | | | |-- _eus.si\r\n| | | | |-- _evd.cfe\r\n| | | | |-- _evd.cfs\r\n| | | | |-- _evd.si\r\n| | | | |-- _evm.cfe\r\n| | | | |-- _evm.cfs\r\n| | | | |-- _evm.si\r\n| | | | |-- _evw.cfe\r\n| | | | |-- _evw.cfs\r\n| | | | |-- _evw.si\r\n| | | | |-- _ewf.cfe\r\n| | | | |-- _ewf.cfs\r\n| | | | |-- _ewf.si\r\n| | | | |-- _ewg.cfe\r\n| | | | |-- _ewg.cfs\r\n| | | | |-- _ewg.si\r\n| | | | |-- _ewh.cfe\r\n| | | | |-- _ewh.cfs\r\n| | | | |-- _ewh.si\r\n| | | | |-- _ewi.cfe\r\n| | | | |-- _ewi.cfs\r\n| | | | |-- _ewi.si\r\n| | | | |-- _ewj.cfe\r\n| | | | |-- _ewj.cfs\r\n| | | | |-- _ewj.si\r\n| | | | |-- _ewk.cfe\r\n| | | | |-- _ewk.cfs\r\n| | | | |-- _ewk.si\r\n| | | | |-- _ewl.cfe\r\n| | | | |-- _ewl.cfs\r\n| | | | |-- _ewl.si\r\n| | | | |-- _ewm.cfe\r\n| | | | |-- _ewm.cfs\r\n| | | | |-- _ewm.si\r\n| | | | |-- _ewn.cfe\r\n| | | | |-- _ewn.cfs\r\n| | | | |-- _ewn.si\r\n| | | | |-- _ewo.cfe\r\n| | | | |-- _ewo.cfs\r\n| | | | |-- _ewo.si\r\n| | | | |-- _ewp.cfe\r\n| | | | |-- _ewp.cfs\r\n| | | | |-- _ewp.si\r\n| | | | |-- _eyd.cfe\r\n| | | | |-- _eyd.cfs\r\n| | | | |-- _eyd.si\r\n| | | | |-- _f0b.cfe\r\n| | | | |-- _f0b.cfs\r\n| | | | |-- _f0b.si\r\n| | | | |-- _f0w.cfe\r\n| | | | |-- _f0w.cfs\r\n| | | | |-- _f0w.si\r\n| | | | |-- _f1f.cfe\r\n| | | | |-- _f1f.cfs\r\n| | | | |-- _f1f.si\r\n| | | | |-- _f1r.cfe\r\n| | | | |-- _f1r.cfs\r\n| | | | |-- _f1r.si\r\n| | | | |-- _f2b.cfe\r\n| | | | |-- _f2b.cfs\r\n| | | | |-- _f2b.si\r\n| | | | |-- _f2u.cfe\r\n| | | | |-- _f2u.cfs\r\n| | | | |-- _f2u.si\r\n| | | | |-- _f3d.cfe\r\n| | | | |-- _f3d.cfs\r\n| | | | |-- _f3d.si\r\n| | | | |-- _f3y.cfe\r\n| | | | |-- _f3y.cfs\r\n| | | | |-- _f3y.si\r\n| | | | |-- _f4h.cfe\r\n| | | | |-- _f4h.cfs\r\n| | | | |-- _f4h.si\r\n| | | | |-- _f4i.cfe\r\n| | | | |-- _f4i.cfs\r\n| | | | |-- _f4i.si\r\n| | | | |-- _f4j.cfe\r\n| | | | |-- _f4j.cfs\r\n| | | | |-- _f4j.si\r\n| | | | |-- 
_f4k.cfe\r\n| | | | |-- _f4k.cfs\r\n| | | | |-- _f4k.si\r\n| | | | |-- _f4l.cfe\r\n| | | | |-- _f4l.cfs\r\n| | | | |-- _f4l.si\r\n| | | | |-- _f4m.cfe\r\n| | | | |-- _f4m.cfs\r\n| | | | |-- _f4m.si\r\n| | | | |-- _f4n.cfe\r\n| | | | |-- _f4n.cfs\r\n| | | | |-- _f4n.si\r\n| | | | |-- _f4o.cfe\r\n| | | | |-- _f4o.cfs\r\n| | | | |-- _f4o.si\r\n| | | | |-- _f4p.fdt\r\n| | | | |-- _f4p.fdx\r\n| | | | |-- _f4q.fdt\r\n| | | | |-- _f4q.fdx\r\n| | | | |-- segments_f\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-100.ckp\r\n| | | |-- translog-100.tlog\r\n| | | |-- translog-101.ckp\r\n| | | |-- translog-101.tlog\r\n| | | |-- translog-102.ckp\r\n| | | |-- translog-102.tlog\r\n| | | |-- translog-103.ckp\r\n| | | |-- translog-103.tlog\r\n| | | |-- translog-104.tlog\r\n| | | |-- translog-95.ckp\r\n| | | |-- translog-95.tlog\r\n| | | |-- translog-96.ckp\r\n| | | |-- translog-96.tlog\r\n| | | |-- translog-97.ckp\r\n| | | |-- translog-97.tlog\r\n| | | |-- translog-98.ckp\r\n| | | |-- translog-98.tlog\r\n| | | |-- translog-99.ckp\r\n| | | |-- translog-99.tlog\r\n| | | `-- translog.ckp\r\n| | `-- _state\r\n| | `-- state-8.st\r\n| |-- 819wJr7bT8GeP2Ezqmq4qQ\r\n| | |-- 1\r\n| | | |-- _state\r\n| | | | `-- state-66.st\r\n| | | |-- index\r\n| | | | |-- _9qn.dii\r\n| | | | |-- _9qn.dim\r\n| | | | |-- _9qn.fdt\r\n| | | | |-- _9qn.fdx\r\n| | | | |-- _9qn.fnm\r\n| | | | |-- _9qn.si\r\n| | | | |-- _9qn_Lucene50_0.doc\r\n| | | | |-- _9qn_Lucene50_0.pos\r\n| | | | |-- _9qn_Lucene50_0.tim\r\n| | | | |-- _9qn_Lucene50_0.tip\r\n| | | | |-- _9qn_Lucene70_0.dvd\r\n| | | | |-- _9qn_Lucene70_0.dvm\r\n| | | | |-- _bi0.dii\r\n| | | | |-- _bi0.dim\r\n| | | | |-- _bi0.fdt\r\n| | | | |-- _bi0.fdx\r\n| | | | |-- _bi0.fnm\r\n| | | | |-- _bi0.si\r\n| | | | |-- _bi0_Lucene50_0.doc\r\n| | | | |-- _bi0_Lucene50_0.pos\r\n| | | | |-- _bi0_Lucene50_0.tim\r\n| | | | |-- _bi0_Lucene50_0.tip\r\n| | | | |-- _bi0_Lucene70_0.dvd\r\n| | | | |-- _bi0_Lucene70_0.dvm\r\n| | | | |-- _cos.cfe\r\n| | | | |-- _cos.cfs\r\n| | | | |-- _cos.si\r\n| | | | |-- _d0g.cfe\r\n| | | | |-- _d0g.cfs\r\n| | | | |-- _d0g.si\r\n| | | | |-- _d1a.cfe\r\n| | | | |-- _d1a.cfs\r\n| | | | |-- _d1a.si\r\n| | | | |-- segments_6\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 3\r\n| | | |-- _state\r\n| | | | `-- state-66.st\r\n| | | |-- index\r\n| | | | |-- _9jf.dii\r\n| | | | |-- _9jf.dim\r\n| | | | |-- _9jf.fdt\r\n| | | | |-- _9jf.fdx\r\n| | | | |-- _9jf.fnm\r\n| | | | |-- _9jf.si\r\n| | | | |-- _9jf_Lucene50_0.doc\r\n| | | | |-- _9jf_Lucene50_0.pos\r\n| | | | |-- _9jf_Lucene50_0.tim\r\n| | | | |-- _9jf_Lucene50_0.tip\r\n| | | | |-- _9jf_Lucene70_0.dvd\r\n| | | | |-- _9jf_Lucene70_0.dvm\r\n| | | | |-- _bdk.dii\r\n| | | | |-- _bdk.dim\r\n| | | | |-- _bdk.fdt\r\n| | | | |-- _bdk.fdx\r\n| | | | |-- _bdk.fnm\r\n| | | | |-- _bdk.si\r\n| | | | |-- _bdk_Lucene50_0.doc\r\n| | | | |-- _bdk_Lucene50_0.pos\r\n| | | | |-- _bdk_Lucene50_0.tim\r\n| | | | |-- _bdk_Lucene50_0.tip\r\n| | | | |-- _bdk_Lucene70_0.dvd\r\n| | | | |-- _bdk_Lucene70_0.dvm\r\n| | | | |-- _cf2.dii\r\n| | | | |-- _cf2.dim\r\n| | | | |-- _cf2.fdt\r\n| | | | |-- _cf2.fdx\r\n| | | | |-- _cf2.fnm\r\n| | | | |-- _cf2.si\r\n| | | | |-- _cf2_Lucene50_0.doc\r\n| | | | |-- _cf2_Lucene50_0.pos\r\n| | | | |-- _cf2_Lucene50_0.tim\r\n| | | | |-- _cf2_Lucene50_0.tip\r\n| | | | |-- _cf2_Lucene70_0.dvd\r\n| | | | |-- _cf2_Lucene70_0.dvm\r\n| | | | |-- _d10.cfe\r\n| | | | |-- _d10.cfs\r\n| | | | |-- _d10.si\r\n| | | | |-- _d1k.cfe\r\n| | | | |-- 
_d1k.cfs\r\n| | | | |-- _d1k.si\r\n| | | | |-- _d1l.cfe\r\n| | | | |-- _d1l.cfs\r\n| | | | |-- _d1l.si\r\n| | | | |-- _d1m.cfe\r\n| | | | |-- _d1m.cfs\r\n| | | | |-- _d1m.si\r\n| | | | |-- _d1n.cfe\r\n| | | | |-- _d1n.cfs\r\n| | | | |-- _d1n.si\r\n| | | | |-- segments_9\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 4\r\n| | | |-- _state\r\n| | | | `-- state-66.st\r\n| | | |-- index\r\n| | | | |-- _9lx.dii\r\n| | | | |-- _9lx.dim\r\n| | | | |-- _9lx.fdt\r\n| | | | |-- _9lx.fdx\r\n| | | | |-- _9lx.fnm\r\n| | | | |-- _9lx.si\r\n| | | | |-- _9lx_Lucene50_0.doc\r\n| | | | |-- _9lx_Lucene50_0.pos\r\n| | | | |-- _9lx_Lucene50_0.tim\r\n| | | | |-- _9lx_Lucene50_0.tip\r\n| | | | |-- _9lx_Lucene70_0.dvd\r\n| | | | |-- _9lx_Lucene70_0.dvm\r\n| | | | |-- _bgw.dii\r\n| | | | |-- _bgw.dim\r\n| | | | |-- _bgw.fdt\r\n| | | | |-- _bgw.fdx\r\n| | | | |-- _bgw.fnm\r\n| | | | |-- _bgw.si\r\n| | | | |-- _bgw_Lucene50_0.doc\r\n| | | | |-- _bgw_Lucene50_0.pos\r\n| | | | |-- _bgw_Lucene50_0.tim\r\n| | | | |-- _bgw_Lucene50_0.tip\r\n| | | | |-- _bgw_Lucene70_0.dvd\r\n| | | | |-- _bgw_Lucene70_0.dvm\r\n| | | | |-- _cmu.dii\r\n| | | | |-- _cmu.dim\r\n| | | | |-- _cmu.fdt\r\n| | | | |-- _cmu.fdx\r\n| | | | |-- _cmu.fnm\r\n| | | | |-- _cmu.si\r\n| | | | |-- _cmu_Lucene50_0.doc\r\n| | | | |-- _cmu_Lucene50_0.pos\r\n| | | | |-- _cmu_Lucene50_0.tim\r\n| | | | |-- _cmu_Lucene50_0.tip\r\n| | | | |-- _cmu_Lucene70_0.dvd\r\n| | | | |-- _cmu_Lucene70_0.dvm\r\n| | | | |-- _d10.cfe\r\n| | | | |-- _d10.cfs\r\n| | | | |-- _d10.si\r\n| | | | |-- _d1u.cfe\r\n| | | | |-- _d1u.cfs\r\n| | | | |-- _d1u.si\r\n| | | | |-- _d1v.cfe\r\n| | | | |-- _d1v.cfs\r\n| | | | |-- _d1v.si\r\n| | | | |-- segments_7\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.tlog\r\n| | | `-- translog.ckp\r\n| | `-- _state\r\n| | `-- state-276.st\r\n| |-- BPCBp1G-ScmQIgMPjs6sXg\r\n| | |-- 0\r\n| | | |-- _state\r\n| | | | `-- state-63.st\r\n| | | |-- index\r\n| | | | |-- _2ye.dii\r\n| | | | |-- _2ye.dim\r\n| | | | |-- _2ye.fdt\r\n| | | | |-- _2ye.fdx\r\n| | | | |-- _2ye.fnm\r\n| | | | |-- _2ye.si\r\n| | | | |-- _2ye_1.liv\r\n| | | | |-- _2ye_Lucene50_0.doc\r\n| | | | |-- _2ye_Lucene50_0.tim\r\n| | | | |-- _2ye_Lucene50_0.tip\r\n| | | | |-- _2ye_Lucene70_0.dvd\r\n| | | | |-- _2ye_Lucene70_0.dvm\r\n| | | | |-- _2yf.cfe\r\n| | | | |-- _2yf.cfs\r\n| | | | |-- _2yf.si\r\n| | | | |-- _2yf_1.liv\r\n| | | | |-- _2yg.cfe\r\n| | | | |-- _2yg.cfs\r\n| | | | |-- _2yg.si\r\n| | | | |-- _2yh.cfe\r\n| | | | |-- _2yh.cfs\r\n| | | | |-- _2yh.si\r\n| | | | |-- _2yi.cfe\r\n| | | | |-- _2yi.cfs\r\n| | | | |-- _2yi.si\r\n| | | | |-- _2yj.cfe\r\n| | | | |-- _2yj.cfs\r\n| | | | |-- _2yj.si\r\n| | | | |-- segments_8\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-70.tlog\r\n| | | `-- translog.ckp\r\n| | `-- _state\r\n| | `-- state-193.st\r\n| |-- F0Ycp-PESFWfV7J5p67c3g\r\n| | |-- 0\r\n| | | |-- _state\r\n| | | | `-- state-65.st\r\n| | | |-- index\r\n| | | | |-- _18s.dii\r\n| | | | |-- _18s.dim\r\n| | | | |-- _18s.fdt\r\n| | | | |-- _18s.fdx\r\n| | | | |-- _18s.fnm\r\n| | | | |-- _18s.si\r\n| | | | |-- _18s_Lucene50_0.doc\r\n| | | | |-- _18s_Lucene50_0.pos\r\n| | | | |-- _18s_Lucene50_0.tim\r\n| | | | |-- _18s_Lucene50_0.tip\r\n| | | | |-- _18s_Lucene70_0.dvd\r\n| | | | |-- _18s_Lucene70_0.dvm\r\n| | | | |-- _39v.cfe\r\n| | | | |-- _39v.cfs\r\n| | | | |-- _39v.si\r\n| | | | |-- _3a5.dii\r\n| | | | |-- _3a5.dim\r\n| | | | |-- _3a5.fdt\r\n| | | | |-- _3a5.fdx\r\n| | | | |-- 
_3a5.fnm\r\n| | | | |-- _3a5.si\r\n| | | | |-- _3a5_Lucene50_0.doc\r\n| | | | |-- _3a5_Lucene50_0.pos\r\n| | | | |-- _3a5_Lucene50_0.tim\r\n| | | | |-- _3a5_Lucene50_0.tip\r\n| | | | |-- _3a5_Lucene70_0.dvd\r\n| | | | |-- _3a5_Lucene70_0.dvm\r\n| | | | |-- _3wd.cfe\r\n| | | | |-- _3wd.cfs\r\n| | | | |-- _3wd.si\r\n| | | | |-- _4tz.cfe\r\n| | | | |-- _4tz.cfs\r\n| | | | |-- _4tz.si\r\n| | | | |-- _7dd.cfe\r\n| | | | |-- _7dd.cfs\r\n| | | | |-- _7dd.si\r\n| | | | |-- _8f.dii\r\n| | | | |-- _8f.dim\r\n| | | | |-- _8f.fdt\r\n| | | | |-- _8f.fdx\r\n| | | | |-- _8f.fnm\r\n| | | | |-- _8f.si\r\n| | | | |-- _8f_Lucene50_0.doc\r\n| | | | |-- _8f_Lucene50_0.pos\r\n| | | | |-- _8f_Lucene50_0.tim\r\n| | | | |-- _8f_Lucene50_0.tip\r\n| | | | |-- _8f_Lucene70_0.dvd\r\n| | | | |-- _8f_Lucene70_0.dvm\r\n| | | | |-- _9q3.cfe\r\n| | | | |-- _9q3.cfs\r\n| | | | |-- _9q3.si\r\n| | | | |-- _9s1.cfe\r\n| | | | |-- _9s1.cfs\r\n| | | | |-- _9s1.si\r\n| | | | |-- _bdt.cfe\r\n| | | | |-- _bdt.cfs\r\n| | | | |-- _bdt.si\r\n| | | | |-- _c6z.cfe\r\n| | | | |-- _c6z.cfs\r\n| | | | |-- _c6z.si\r\n| | | | |-- _cxx.cfe\r\n| | | | |-- _cxx.cfs\r\n| | | | |-- _cxx.si\r\n| | | | |-- _cy7.cfe\r\n| | | | |-- _cy7.cfs\r\n| | | | |-- _cy7.si\r\n| | | | |-- _cy8.cfe\r\n| | | | |-- _cy8.cfs\r\n| | | | |-- _cy8.si\r\n| | | | |-- _cy9.cfe\r\n| | | | |-- _cy9.cfs\r\n| | | | |-- _cy9.si\r\n| | | | |-- _cya.cfe\r\n| | | | |-- _cya.cfs\r\n| | | | |-- _cya.si\r\n| | | | |-- _cyb.cfe\r\n| | | | |-- _cyb.cfs\r\n| | | | |-- _cyb.si\r\n| | | | |-- _cyc.cfe\r\n| | | | |-- _cyc.cfs\r\n| | | | |-- _cyc.si\r\n| | | | |-- _cyd.cfe\r\n| | | | |-- _cyd.cfs\r\n| | | | |-- _cyd.si\r\n| | | | |-- _cye.cfe\r\n| | | | |-- _cye.cfs\r\n| | | | |-- _cye.si\r\n| | | | |-- _qw.dii\r\n| | | | |-- _qw.dim\r\n| | | | |-- _qw.fdt\r\n| | | | |-- _qw.fdx\r\n| | | | |-- _qw.fnm\r\n| | | | |-- _qw.si\r\n| | | | |-- _qw_Lucene50_0.doc\r\n| | | | |-- _qw_Lucene50_0.pos\r\n| | | | |-- _qw_Lucene50_0.tim\r\n| | | | |-- _qw_Lucene50_0.tip\r\n| | | | |-- _qw_Lucene70_0.dvd\r\n| | | | |-- _qw_Lucene70_0.dvm\r\n| | | | |-- segments_c\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-65.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 1\r\n| | | |-- _state\r\n| | | | `-- state-65.st\r\n| | | |-- index\r\n| | | | |-- _1a6.dii\r\n| | | | |-- _1a6.dim\r\n| | | | |-- _1a6.fdt\r\n| | | | |-- _1a6.fdx\r\n| | | | |-- _1a6.fnm\r\n| | | | |-- _1a6.si\r\n| | | | |-- _1a6_Lucene50_0.doc\r\n| | | | |-- _1a6_Lucene50_0.pos\r\n| | | | |-- _1a6_Lucene50_0.tim\r\n| | | | |-- _1a6_Lucene50_0.tip\r\n| | | | |-- _1a6_Lucene70_0.dvd\r\n| | | | |-- _1a6_Lucene70_0.dvm\r\n| | | | |-- _2lp.dii\r\n| | | | |-- _2lp.dim\r\n| | | | |-- _2lp.fdt\r\n| | | | |-- _2lp.fdx\r\n| | | | |-- _2lp.fnm\r\n| | | | |-- _2lp.si\r\n| | | | |-- _2lp_Lucene50_0.doc\r\n| | | | |-- _2lp_Lucene50_0.pos\r\n| | | | |-- _2lp_Lucene50_0.tim\r\n| | | | |-- _2lp_Lucene50_0.tip\r\n| | | | |-- _2lp_Lucene70_0.dvd\r\n| | | | |-- _2lp_Lucene70_0.dvm\r\n| | | | |-- _30z.cfe\r\n| | | | |-- _30z.cfs\r\n| | | | |-- _30z.si\r\n| | | | |-- _77t.cfe\r\n| | | | |-- _77t.cfs\r\n| | | | |-- _77t.si\r\n| | | | |-- _8f.dii\r\n| | | | |-- _8f.dim\r\n| | | | |-- _8f.fdt\r\n| | | | |-- _8f.fdx\r\n| | | | |-- _8f.fnm\r\n| | | | |-- _8f.si\r\n| | | | |-- _8f_Lucene50_0.doc\r\n| | | | |-- _8f_Lucene50_0.pos\r\n| | | | |-- _8f_Lucene50_0.tim\r\n| | | | |-- _8f_Lucene50_0.tip\r\n| | | | |-- _8f_Lucene70_0.dvd\r\n| | | | |-- _8f_Lucene70_0.dvm\r\n| | | | |-- _a53.dii\r\n| | | | |-- _a53.dim\r\n| | | | |-- _a53.fdt\r\n| | | | |-- 
_a53.fdx\r\n| | | | |-- _a53.fnm\r\n| | | | |-- _a53.si\r\n| | | | |-- _a53_Lucene50_0.doc\r\n| | | | |-- _a53_Lucene50_0.pos\r\n| | | | |-- _a53_Lucene50_0.tim\r\n| | | | |-- _a53_Lucene50_0.tip\r\n| | | | |-- _a53_Lucene70_0.dvd\r\n| | | | |-- _a53_Lucene70_0.dvm\r\n| | | | |-- _cu1.cfe\r\n| | | | |-- _cu1.cfs\r\n| | | | |-- _cu1.si\r\n| | | | |-- _cwj.cfe\r\n| | | | |-- _cwj.cfs\r\n| | | | |-- _cwj.si\r\n| | | | |-- _cwt.cfe\r\n| | | | |-- _cwt.cfs\r\n| | | | |-- _cwt.si\r\n| | | | |-- _cxd.cfe\r\n| | | | |-- _cxd.cfs\r\n| | | | |-- _cxd.si\r\n| | | | |-- _cxn.cfe\r\n| | | | |-- _cxn.cfs\r\n| | | | |-- _cxn.si\r\n| | | | |-- _cxx.cfe\r\n| | | | |-- _cxx.cfs\r\n| | | | |-- _cxx.si\r\n| | | | |-- _cy7.cfe\r\n| | | | |-- _cy7.cfs\r\n| | | | |-- _cy7.si\r\n| | | | |-- _cy8.cfe\r\n| | | | |-- _cy8.cfs\r\n| | | | |-- _cy8.si\r\n| | | | |-- _cy9.cfe\r\n| | | | |-- _cy9.cfs\r\n| | | | |-- _cy9.si\r\n| | | | |-- _cya.cfe\r\n| | | | |-- _cya.cfs\r\n| | | | |-- _cya.si\r\n| | | | |-- _cyb.cfe\r\n| | | | |-- _cyb.cfs\r\n| | | | |-- _cyb.si\r\n| | | | |-- _cyc.cfe\r\n| | | | |-- _cyc.cfs\r\n| | | | |-- _cyc.si\r\n| | | | |-- _cyd.cfe\r\n| | | | |-- _cyd.cfs\r\n| | | | |-- _cyd.si\r\n| | | | |-- _cye.cfe\r\n| | | | |-- _cye.cfs\r\n| | | | |-- _cye.si\r\n| | | | |-- _cyf.cfe\r\n| | | | |-- _cyf.cfs\r\n| | | | |-- _cyf.si\r\n| | | | |-- _mz.dii\r\n| | | | |-- _mz.dim\r\n| | | | |-- _mz.fdt\r\n| | | | |-- _mz.fdx\r\n| | | | |-- _mz.fnm\r\n| | | | |-- _mz.si\r\n| | | | |-- _mz_Lucene50_0.doc\r\n| | | | |-- _mz_Lucene50_0.pos\r\n| | | | |-- _mz_Lucene50_0.tim\r\n| | | | |-- _mz_Lucene50_0.tip\r\n| | | | |-- _mz_Lucene70_0.dvd\r\n| | | | |-- _mz_Lucene70_0.dvm\r\n| | | | |-- segments_a\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-65.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 4\r\n| | | |-- _state\r\n| | | | `-- state-66.st\r\n| | | |-- index\r\n| | | | |-- _1om.dii\r\n| | | | |-- _1om.dim\r\n| | | | |-- _1om.fdt\r\n| | | | |-- _1om.fdx\r\n| | | | |-- _1om.fnm\r\n| | | | |-- _1om.si\r\n| | | | |-- _1om_Lucene50_0.doc\r\n| | | | |-- _1om_Lucene50_0.pos\r\n| | | | |-- _1om_Lucene50_0.tim\r\n| | | | |-- _1om_Lucene50_0.tip\r\n| | | | |-- _1om_Lucene70_0.dvd\r\n| | | | |-- _1om_Lucene70_0.dvm\r\n| | | | |-- _1x8.cfe\r\n| | | | |-- _1x8.cfs\r\n| | | | |-- _1x8.si\r\n| | | | |-- _2gp.dii\r\n| | | | |-- _2gp.dim\r\n| | | | |-- _2gp.fdt\r\n| | | | |-- _2gp.fdx\r\n| | | | |-- _2gp.fnm\r\n| | | | |-- _2gp.si\r\n| | | | |-- _2gp_Lucene50_0.doc\r\n| | | | |-- _2gp_Lucene50_0.pos\r\n| | | | |-- _2gp_Lucene50_0.tim\r\n| | | | |-- _2gp_Lucene50_0.tip\r\n| | | | |-- _2gp_Lucene70_0.dvd\r\n| | | | |-- _2gp_Lucene70_0.dvm\r\n| | | | |-- _3l9.dii\r\n| | | | |-- _3l9.dim\r\n| | | | |-- _3l9.fdt\r\n| | | | |-- _3l9.fdx\r\n| | | | |-- _3l9.fnm\r\n| | | | |-- _3l9.si\r\n| | | | |-- _3l9_Lucene50_0.doc\r\n| | | | |-- _3l9_Lucene50_0.pos\r\n| | | | |-- _3l9_Lucene50_0.tim\r\n| | | | |-- _3l9_Lucene50_0.tip\r\n| | | | |-- _3l9_Lucene70_0.dvd\r\n| | | | |-- _3l9_Lucene70_0.dvm\r\n| | | | |-- _4ln.cfe\r\n| | | | |-- _4ln.cfs\r\n| | | | |-- _4ln.si\r\n| | | | |-- _6y3.cfe\r\n| | | | |-- _6y3.cfs\r\n| | | | |-- _6y3.si\r\n| | | | |-- _86.dii\r\n| | | | |-- _86.dim\r\n| | | | |-- _86.fdt\r\n| | | | |-- _86.fdx\r\n| | | | |-- _86.fnm\r\n| | | | |-- _86.si\r\n| | | | |-- _86_Lucene50_0.doc\r\n| | | | |-- _86_Lucene50_0.pos\r\n| | | | |-- _86_Lucene50_0.tim\r\n| | | | |-- _86_Lucene50_0.tip\r\n| | | | |-- _86_Lucene70_0.dvd\r\n| | | | |-- _86_Lucene70_0.dvm\r\n| | | | |-- _90j.cfe\r\n| | | | |-- _90j.cfs\r\n| | | | 
|-- _90j.si\r\n| | | | |-- _aw1.cfe\r\n| | | | |-- _aw1.cfs\r\n| | | | |-- _aw1.si\r\n| | | | |-- _bwp.cfe\r\n| | | | |-- _bwp.cfs\r\n| | | | |-- _bwp.si\r\n| | | | |-- _csd.cfe\r\n| | | | |-- _csd.cfs\r\n| | | | |-- _csd.si\r\n| | | | |-- _cxx.cfe\r\n| | | | |-- _cxx.cfs\r\n| | | | |-- _cxx.si\r\n| | | | |-- _cy7.cfe\r\n| | | | |-- _cy7.cfs\r\n| | | | |-- _cy7.si\r\n| | | | |-- _cy8.cfe\r\n| | | | |-- _cy8.cfs\r\n| | | | |-- _cy8.si\r\n| | | | |-- _cy9.cfe\r\n| | | | |-- _cy9.cfs\r\n| | | | |-- _cy9.si\r\n| | | | |-- _cya.cfe\r\n| | | | |-- _cya.cfs\r\n| | | | |-- _cya.si\r\n| | | | |-- _cyb.cfe\r\n| | | | |-- _cyb.cfs\r\n| | | | |-- _cyb.si\r\n| | | | |-- _cyc.cfe\r\n| | | | |-- _cyc.cfs\r\n| | | | |-- _cyc.si\r\n| | | | |-- _cyd.cfe\r\n| | | | |-- _cyd.cfs\r\n| | | | |-- _cyd.si\r\n| | | | |-- _cye.cfe\r\n| | | | |-- _cye.cfs\r\n| | | | |-- _cye.si\r\n| | | | |-- _sl.dii\r\n| | | | |-- _sl.dim\r\n| | | | |-- _sl.fdt\r\n| | | | |-- _sl.fdx\r\n| | | | |-- _sl.fnm\r\n| | | | |-- _sl.si\r\n| | | | |-- _sl_Lucene50_0.doc\r\n| | | | |-- _sl_Lucene50_0.pos\r\n| | | | |-- _sl_Lucene50_0.tim\r\n| | | | |-- _sl_Lucene50_0.tip\r\n| | | | |-- _sl_Lucene70_0.dvd\r\n| | | | |-- _sl_Lucene70_0.dvm\r\n| | | | |-- segments_a\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.tlog\r\n| | | `-- translog.ckp\r\n| | `-- _state\r\n| | `-- state-344.st\r\n| |-- GCliM7kuQNyJPL2w8060nA\r\n| | |-- 0\r\n| | | |-- _state\r\n| | | | `-- state-1.st\r\n| | | |-- index\r\n| | | | |-- _8v9.dii\r\n| | | | |-- _8v9.dim\r\n| | | | |-- _8v9.fdt\r\n| | | | |-- _8v9.fdx\r\n| | | | |-- _8v9.fnm\r\n| | | | |-- _8v9.si\r\n| | | | |-- _8v9_Lucene50_0.doc\r\n| | | | |-- _8v9_Lucene50_0.pos\r\n| | | | |-- _8v9_Lucene50_0.tim\r\n| | | | |-- _8v9_Lucene50_0.tip\r\n| | | | |-- _8v9_Lucene70_0.dvd\r\n| | | | |-- _8v9_Lucene70_0.dvm\r\n| | | | |-- _amw.dii\r\n| | | | |-- _amw.dim\r\n| | | | |-- _amw.fdt\r\n| | | | |-- _amw.fdx\r\n| | | | |-- _amw.fnm\r\n| | | | |-- _amw.si\r\n| | | | |-- _amw_Lucene50_0.doc\r\n| | | | |-- _amw_Lucene50_0.pos\r\n| | | | |-- _amw_Lucene50_0.tim\r\n| | | | |-- _amw_Lucene50_0.tip\r\n| | | | |-- _amw_Lucene70_0.dvd\r\n| | | | |-- _amw_Lucene70_0.dvm\r\n| | | | |-- _bsu.dii\r\n| | | | |-- _bsu.dim\r\n| | | | |-- _bsu.fdt\r\n| | | | |-- _bsu.fdx\r\n| | | | |-- _bsu.fnm\r\n| | | | |-- _bsu.si\r\n| | | | |-- _bsu_Lucene50_0.doc\r\n| | | | |-- _bsu_Lucene50_0.pos\r\n| | | | |-- _bsu_Lucene50_0.tim\r\n| | | | |-- _bsu_Lucene50_0.tip\r\n| | | | |-- _bsu_Lucene70_0.dvd\r\n| | | | |-- _bsu_Lucene70_0.dvm\r\n| | | | |-- _c6g.cfe\r\n| | | | |-- _c6g.cfs\r\n| | | | |-- _c6g.si\r\n| | | | |-- _c70.cfe\r\n| | | | |-- _c70.cfs\r\n| | | | |-- _c70.si\r\n| | | | |-- _c71.cfe\r\n| | | | |-- _c71.cfs\r\n| | | | |-- _c71.si\r\n| | | | |-- _c72.cfe\r\n| | | | |-- _c72.cfs\r\n| | | | |-- _c72.si\r\n| | | | |-- _c73.cfe\r\n| | | | |-- _c73.cfs\r\n| | | | |-- _c73.si\r\n| | | | |-- _c74.cfe\r\n| | | | |-- _c74.cfs\r\n| | | | |-- _c74.si\r\n| | | | |-- _c75.cfe\r\n| | | | |-- _c75.cfs\r\n| | | | |-- _c75.si\r\n| | | | |-- segments_4\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 3\r\n| | | |-- _state\r\n| | | | `-- state-1.st\r\n| | | |-- index\r\n| | | | |-- _8rn.dii\r\n| | | | |-- _8rn.dim\r\n| | | | |-- _8rn.fdt\r\n| | | | |-- _8rn.fdx\r\n| | | | |-- _8rn.fnm\r\n| | | | |-- _8rn.si\r\n| | | | |-- _8rn_Lucene50_0.doc\r\n| | | | |-- _8rn_Lucene50_0.pos\r\n| | | | |-- _8rn_Lucene50_0.tim\r\n| | | | |-- _8rn_Lucene50_0.tip\r\n| | | | |-- 
_8rn_Lucene70_0.dvd\r\n| | | | |-- _8rn_Lucene70_0.dvm\r\n| | | | |-- _afy.dii\r\n| | | | |-- _afy.dim\r\n| | | | |-- _afy.fdt\r\n| | | | |-- _afy.fdx\r\n| | | | |-- _afy.fnm\r\n| | | | |-- _afy.si\r\n| | | | |-- _afy_Lucene50_0.doc\r\n| | | | |-- _afy_Lucene50_0.pos\r\n| | | | |-- _afy_Lucene50_0.tim\r\n| | | | |-- _afy_Lucene50_0.tip\r\n| | | | |-- _afy_Lucene70_0.dvd\r\n| | | | |-- _afy_Lucene70_0.dvm\r\n| | | | |-- _b2g.dii\r\n| | | | |-- _b2g.dim\r\n| | | | |-- _b2g.fdt\r\n| | | | |-- _b2g.fdx\r\n| | | | |-- _b2g.fnm\r\n| | | | |-- _b2g.si\r\n| | | | |-- _b2g_Lucene50_0.doc\r\n| | | | |-- _b2g_Lucene50_0.pos\r\n| | | | |-- _b2g_Lucene50_0.tim\r\n| | | | |-- _b2g_Lucene50_0.tip\r\n| | | | |-- _b2g_Lucene70_0.dvd\r\n| | | | |-- _b2g_Lucene70_0.dvm\r\n| | | | |-- _c4i.cfe\r\n| | | | |-- _c4i.cfs\r\n| | | | |-- _c4i.si\r\n| | | | |-- _c7a.cfe\r\n| | | | |-- _c7a.cfs\r\n| | | | |-- _c7a.si\r\n| | | | |-- _c7b.cfe\r\n| | | | |-- _c7b.cfs\r\n| | | | |-- _c7b.si\r\n| | | | |-- _c7c.cfe\r\n| | | | |-- _c7c.cfs\r\n| | | | |-- _c7c.si\r\n| | | | |-- _c7d.cfe\r\n| | | | |-- _c7d.cfs\r\n| | | | |-- _c7d.si\r\n| | | | |-- _c7e.cfe\r\n| | | | |-- _c7e.cfs\r\n| | | | |-- _c7e.si\r\n| | | | |-- _c7f.cfe\r\n| | | | |-- _c7f.cfs\r\n| | | | |-- _c7f.si\r\n| | | | |-- segments_4\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 4\r\n| | | |-- _state\r\n| | | | `-- state-1.st\r\n| | | |-- index\r\n| | | | |-- _931.dii\r\n| | | | |-- _931.dim\r\n| | | | |-- _931.fdt\r\n| | | | |-- _931.fdx\r\n| | | | |-- _931.fnm\r\n| | | | |-- _931.si\r\n| | | | |-- _931_Lucene50_0.doc\r\n| | | | |-- _931_Lucene50_0.pos\r\n| | | | |-- _931_Lucene50_0.tim\r\n| | | | |-- _931_Lucene50_0.tip\r\n| | | | |-- _931_Lucene70_0.dvd\r\n| | | | |-- _931_Lucene70_0.dvm\r\n| | | | |-- _atk.dii\r\n| | | | |-- _atk.dim\r\n| | | | |-- _atk.fdt\r\n| | | | |-- _atk.fdx\r\n| | | | |-- _atk.fnm\r\n| | | | |-- _atk.si\r\n| | | | |-- _atk_Lucene50_0.doc\r\n| | | | |-- _atk_Lucene50_0.pos\r\n| | | | |-- _atk_Lucene50_0.tim\r\n| | | | |-- _atk_Lucene50_0.tip\r\n| | | | |-- _atk_Lucene70_0.dvd\r\n| | | | |-- _atk_Lucene70_0.dvm\r\n| | | | |-- _c6g.dii\r\n| | | | |-- _c6g.dim\r\n| | | | |-- _c6g.fdt\r\n| | | | |-- _c6g.fdx\r\n| | | | |-- _c6g.fnm\r\n| | | | |-- _c6g.si\r\n| | | | |-- _c6g_Lucene50_0.doc\r\n| | | | |-- _c6g_Lucene50_0.pos\r\n| | | | |-- _c6g_Lucene50_0.tim\r\n| | | | |-- _c6g_Lucene50_0.tip\r\n| | | | |-- _c6g_Lucene70_0.dvd\r\n| | | | |-- _c6g_Lucene70_0.dvm\r\n| | | | |-- _c7a.cfe\r\n| | | | |-- _c7a.cfs\r\n| | | | |-- _c7a.si\r\n| | | | |-- _c7k.cfe\r\n| | | | |-- _c7k.cfs\r\n| | | | |-- _c7k.si\r\n| | | | |-- _c7l.cfe\r\n| | | | |-- _c7l.cfs\r\n| | | | |-- _c7l.si\r\n| | | | |-- segments_5\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.tlog\r\n| | | `-- translog.ckp\r\n| | `-- _state\r\n| | `-- state-25.st\r\n| |-- GuTek3p2RjmR8jGaTe6Hrw\r\n| | `-- _state\r\n| | `-- state-98.st\r\n| |-- LHo5QAgwSImYT-_WioKGkA\r\n| | |-- 0\r\n| | | |-- _state\r\n| | | | `-- state-64.st\r\n| | | |-- index\r\n| | | | |-- _4a6.dii\r\n| | | | |-- _4a6.dim\r\n| | | | |-- _4a6.fdt\r\n| | | | |-- _4a6.fdx\r\n| | | | |-- _4a6.fnm\r\n| | | | |-- _4a6.si\r\n| | | | |-- _4a6_Lucene50_0.doc\r\n| | | | |-- _4a6_Lucene50_0.pos\r\n| | | | |-- _4a6_Lucene50_0.tim\r\n| | | | |-- _4a6_Lucene50_0.tip\r\n| | | | |-- _4a6_Lucene70_0.dvd\r\n| | | | |-- _4a6_Lucene70_0.dvm\r\n| | | | |-- _5hu.dii\r\n| | | | |-- _5hu.dim\r\n| | | | |-- _5hu.fdt\r\n| | | | |-- _5hu.fdx\r\n| | | | |-- 
_5hu.fnm\r\n| | | | |-- _5hu.si\r\n| | | | |-- _5hu_Lucene50_0.doc\r\n| | | | |-- _5hu_Lucene50_0.pos\r\n| | | | |-- _5hu_Lucene50_0.tim\r\n| | | | |-- _5hu_Lucene50_0.tip\r\n| | | | |-- _5hu_Lucene70_0.dvd\r\n| | | | |-- _5hu_Lucene70_0.dvm\r\n| | | | |-- _61k.cfe\r\n| | | | |-- _61k.cfs\r\n| | | | |-- _61k.si\r\n| | | | |-- _61l.cfe\r\n| | | | |-- _61l.cfs\r\n| | | | |-- _61l.si\r\n| | | | |-- _61m.cfe\r\n| | | | |-- _61m.cfs\r\n| | | | |-- _61m.si\r\n| | | | |-- _61n.cfe\r\n| | | | |-- _61n.cfs\r\n| | | | |-- _61n.si\r\n| | | | |-- _61o.cfe\r\n| | | | |-- _61o.cfs\r\n| | | | |-- _61o.si\r\n| | | | |-- _61p.cfe\r\n| | | | |-- _61p.cfs\r\n| | | | |-- _61p.si\r\n| | | | |-- _61q.cfe\r\n| | | | |-- _61q.cfs\r\n| | | | |-- _61q.si\r\n| | | | |-- segments_1z\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 2\r\n| | | |-- _state\r\n| | | | `-- state-64.st\r\n| | | |-- index\r\n| | | | |-- _4hy.dii\r\n| | | | |-- _4hy.dim\r\n| | | | |-- _4hy.fdt\r\n| | | | |-- _4hy.fdx\r\n| | | | |-- _4hy.fnm\r\n| | | | |-- _4hy.si\r\n| | | | |-- _4hy_Lucene50_0.doc\r\n| | | | |-- _4hy_Lucene50_0.pos\r\n| | | | |-- _4hy_Lucene50_0.tim\r\n| | | | |-- _4hy_Lucene50_0.tip\r\n| | | | |-- _4hy_Lucene70_0.dvd\r\n| | | | |-- _4hy_Lucene70_0.dvm\r\n| | | | |-- _5no.dii\r\n| | | | |-- _5no.dim\r\n| | | | |-- _5no.fdt\r\n| | | | |-- _5no.fdx\r\n| | | | |-- _5no.fnm\r\n| | | | |-- _5no.si\r\n| | | | |-- _5no_Lucene50_0.doc\r\n| | | | |-- _5no_Lucene50_0.pos\r\n| | | | |-- _5no_Lucene50_0.tim\r\n| | | | |-- _5no_Lucene50_0.tip\r\n| | | | |-- _5no_Lucene70_0.dvd\r\n| | | | |-- _5no_Lucene70_0.dvm\r\n| | | | |-- _610.cfe\r\n| | | | |-- _610.cfs\r\n| | | | |-- _610.si\r\n| | | | |-- _611.cfe\r\n| | | | |-- _611.cfs\r\n| | | | |-- _611.si\r\n| | | | |-- segments_21\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 3\r\n| | | |-- _state\r\n| | | | `-- state-64.st\r\n| | | |-- index\r\n| | | | |-- _4l0.dii\r\n| | | | |-- _4l0.dim\r\n| | | | |-- _4l0.fdt\r\n| | | | |-- _4l0.fdx\r\n| | | | |-- _4l0.fnm\r\n| | | | |-- _4l0.si\r\n| | | | |-- _4l0_Lucene50_0.doc\r\n| | | | |-- _4l0_Lucene50_0.pos\r\n| | | | |-- _4l0_Lucene50_0.tim\r\n| | | | |-- _4l0_Lucene50_0.tip\r\n| | | | |-- _4l0_Lucene70_0.dvd\r\n| | | | |-- _4l0_Lucene70_0.dvm\r\n| | | | |-- _5mu.dii\r\n| | | | |-- _5mu.dim\r\n| | | | |-- _5mu.fdt\r\n| | | | |-- _5mu.fdx\r\n| | | | |-- _5mu.fnm\r\n| | | | |-- _5mu.si\r\n| | | | |-- _5mu_Lucene50_0.doc\r\n| | | | |-- _5mu_Lucene50_0.pos\r\n| | | | |-- _5mu_Lucene50_0.tim\r\n| | | | |-- _5mu_Lucene50_0.tip\r\n| | | | |-- _5mu_Lucene70_0.dvd\r\n| | | | |-- _5mu_Lucene70_0.dvm\r\n| | | | |-- _61a.cfe\r\n| | | | |-- _61a.cfs\r\n| | | | |-- _61a.si\r\n| | | | |-- _61b.cfe\r\n| | | | |-- _61b.cfs\r\n| | | | |-- _61b.si\r\n| | | | |-- _61c.cfe\r\n| | | | |-- _61c.cfs\r\n| | | | |-- _61c.si\r\n| | | | |-- segments_23\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-100.ckp\r\n| | | |-- translog-100.tlog\r\n| | | |-- translog-101.ckp\r\n| | | |-- translog-101.tlog\r\n| | | |-- translog-102.tlog\r\n| | | |-- translog-38.ckp\r\n| | | |-- translog-38.tlog\r\n| | | |-- translog-39.ckp\r\n| | | |-- translog-39.tlog\r\n| | | |-- translog-40.ckp\r\n| | | |-- translog-40.tlog\r\n| | | |-- translog-41.ckp\r\n| | | |-- translog-41.tlog\r\n| | | |-- translog-42.ckp\r\n| | | |-- translog-42.tlog\r\n| | | |-- translog-43.ckp\r\n| | | |-- translog-43.tlog\r\n| | | |-- translog-44.ckp\r\n| | | |-- 
translog-44.tlog\r\n| | | |-- translog-45.ckp\r\n| | | |-- translog-45.tlog\r\n| | | |-- translog-46.ckp\r\n| | | |-- translog-46.tlog\r\n| | | |-- translog-47.ckp\r\n| | | |-- translog-47.tlog\r\n| | | |-- translog-48.ckp\r\n| | | |-- translog-48.tlog\r\n| | | |-- translog-49.ckp\r\n| | | |-- translog-49.tlog\r\n| | | |-- translog-50.ckp\r\n| | | |-- translog-50.tlog\r\n| | | |-- translog-51.ckp\r\n| | | |-- translog-51.tlog\r\n| | | |-- translog-52.ckp\r\n| | | |-- translog-52.tlog\r\n| | | |-- translog-53.ckp\r\n| | | |-- translog-53.tlog\r\n| | | |-- translog-54.ckp\r\n| | | |-- translog-54.tlog\r\n| | | |-- translog-55.ckp\r\n| | | |-- translog-55.tlog\r\n| | | |-- translog-56.ckp\r\n| | | |-- translog-56.tlog\r\n| | | |-- translog-57.ckp\r\n| | | |-- translog-57.tlog\r\n| | | |-- translog-58.ckp\r\n| | | |-- translog-58.tlog\r\n| | | |-- translog-59.ckp\r\n| | | |-- translog-59.tlog\r\n| | | |-- translog-60.ckp\r\n| | | |-- translog-60.tlog\r\n| | | |-- translog-61.ckp\r\n| | | |-- translog-61.tlog\r\n| | | |-- translog-62.ckp\r\n| | | |-- translog-62.tlog\r\n| | | |-- translog-63.ckp\r\n| | | |-- translog-63.tlog\r\n| | | |-- translog-64.ckp\r\n| | | |-- translog-64.tlog\r\n| | | |-- translog-65.ckp\r\n| | | |-- translog-65.tlog\r\n| | | |-- translog-66.ckp\r\n| | | |-- translog-66.tlog\r\n| | | |-- translog-67.ckp\r\n| | | |-- translog-67.tlog\r\n| | | |-- translog-68.ckp\r\n| | | |-- translog-68.tlog\r\n| | | |-- translog-69.ckp\r\n| | | |-- translog-69.tlog\r\n| | | |-- translog-70.ckp\r\n| | | |-- translog-70.tlog\r\n| | | |-- translog-71.ckp\r\n| | | |-- translog-71.tlog\r\n| | | |-- translog-72.ckp\r\n| | | |-- translog-72.tlog\r\n| | | |-- translog-73.ckp\r\n| | | |-- translog-73.tlog\r\n| | | |-- translog-74.ckp\r\n| | | |-- translog-74.tlog\r\n| | | |-- translog-75.ckp\r\n| | | |-- translog-75.tlog\r\n| | | |-- translog-76.ckp\r\n| | | |-- translog-76.tlog\r\n| | | |-- translog-77.ckp\r\n| | | |-- translog-77.tlog\r\n| | | |-- translog-78.ckp\r\n| | | |-- translog-78.tlog\r\n| | | |-- translog-79.ckp\r\n| | | |-- translog-79.tlog\r\n| | | |-- translog-80.ckp\r\n| | | |-- translog-80.tlog\r\n| | | |-- translog-81.ckp\r\n| | | |-- translog-81.tlog\r\n| | | |-- translog-82.ckp\r\n| | | |-- translog-82.tlog\r\n| | | |-- translog-83.ckp\r\n| | | |-- translog-83.tlog\r\n| | | |-- translog-84.ckp\r\n| | | |-- translog-84.tlog\r\n| | | |-- translog-85.ckp\r\n| | | |-- translog-85.tlog\r\n| | | |-- translog-86.ckp\r\n| | | |-- translog-86.tlog\r\n| | | |-- translog-87.ckp\r\n| | | |-- translog-87.tlog\r\n| | | |-- translog-88.ckp\r\n| | | |-- translog-88.tlog\r\n| | | |-- translog-89.ckp\r\n| | | |-- translog-89.tlog\r\n| | | |-- translog-90.ckp\r\n| | | |-- translog-90.tlog\r\n| | | |-- translog-91.ckp\r\n| | | |-- translog-91.tlog\r\n| | | |-- translog-92.ckp\r\n| | | |-- translog-92.tlog\r\n| | | |-- translog-93.ckp\r\n| | | |-- translog-93.tlog\r\n| | | |-- translog-94.ckp\r\n| | | |-- translog-94.tlog\r\n| | | |-- translog-95.ckp\r\n| | | |-- translog-95.tlog\r\n| | | |-- translog-96.ckp\r\n| | | |-- translog-96.tlog\r\n| | | |-- translog-97.ckp\r\n| | | |-- translog-97.tlog\r\n| | | |-- translog-98.ckp\r\n| | | |-- translog-98.tlog\r\n| | | |-- translog-99.ckp\r\n| | | |-- translog-99.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 4\r\n| | | |-- _state\r\n| | | | `-- state-65.st\r\n| | | |-- index\r\n| | | | |-- _4lk.dii\r\n| | | | |-- _4lk.dim\r\n| | | | |-- _4lk.fdt\r\n| | | | |-- _4lk.fdx\r\n| | | | |-- _4lk.fnm\r\n| | | | |-- _4lk.si\r\n| | | | |-- _4lk_Lucene50_0.doc\r\n| | | 
| |-- _4lk_Lucene50_0.pos\r\n| | | | |-- _4lk_Lucene50_0.tim\r\n| | | | |-- _4lk_Lucene50_0.tip\r\n| | | | |-- _4lk_Lucene70_0.dvd\r\n| | | | |-- _4lk_Lucene70_0.dvm\r\n| | | | |-- _5mu.dii\r\n| | | | |-- _5mu.dim\r\n| | | | |-- _5mu.fdt\r\n| | | | |-- _5mu.fdx\r\n| | | | |-- _5mu.fnm\r\n| | | | |-- _5mu.si\r\n| | | | |-- _5mu_Lucene50_0.doc\r\n| | | | |-- _5mu_Lucene50_0.pos\r\n| | | | |-- _5mu_Lucene50_0.tim\r\n| | | | |-- _5mu_Lucene50_0.tip\r\n| | | | |-- _5mu_Lucene70_0.dvd\r\n| | | | |-- _5mu_Lucene70_0.dvm\r\n| | | | |-- _610.cfe\r\n| | | | |-- _610.cfs\r\n| | | | |-- _610.si\r\n| | | | |-- _611.cfe\r\n| | | | |-- _611.cfs\r\n| | | | |-- _611.si\r\n| | | | |-- _612.cfe\r\n| | | | |-- _612.cfs\r\n| | | | |-- _612.si\r\n| | | | |-- segments_25\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.tlog\r\n| | | `-- translog.ckp\r\n| | `-- _state\r\n| | `-- state-330.st\r\n| |-- R56iVv4FQT-ZmwGGGR8PyA\r\n| | |-- 0\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _aax.dii\r\n| | | | |-- _aax.dim\r\n| | | | |-- _aax.fdt\r\n| | | | |-- _aax.fdx\r\n| | | | |-- _aax.fnm\r\n| | | | |-- _aax.si\r\n| | | | |-- _aax_Lucene50_0.doc\r\n| | | | |-- _aax_Lucene50_0.tim\r\n| | | | |-- _aax_Lucene50_0.tip\r\n| | | | |-- _aax_Lucene70_0.dvd\r\n| | | | |-- _aax_Lucene70_0.dvm\r\n| | | | |-- _cbq.dii\r\n| | | | |-- _cbq.dim\r\n| | | | |-- _cbq.fdt\r\n| | | | |-- _cbq.fdx\r\n| | | | |-- _cbq.fnm\r\n| | | | |-- _cbq.si\r\n| | | | |-- _cbq_Lucene50_0.doc\r\n| | | | |-- _cbq_Lucene50_0.tim\r\n| | | | |-- _cbq_Lucene50_0.tip\r\n| | | | |-- _cbq_Lucene70_0.dvd\r\n| | | | |-- _cbq_Lucene70_0.dvm\r\n| | | | |-- _ci4.cfe\r\n| | | | |-- _ci4.cfs\r\n| | | | |-- _ci4.si\r\n| | | | |-- _cie.cfe\r\n| | | | |-- _cie.cfs\r\n| | | | |-- _cie.si\r\n| | | | |-- _cio.cfe\r\n| | | | |-- _cio.cfs\r\n| | | | |-- _cio.si\r\n| | | | |-- _cip.cfe\r\n| | | | |-- _cip.cfs\r\n| | | | |-- _cip.si\r\n| | | | |-- _ciq.cfe\r\n| | | | |-- _ciq.cfs\r\n| | | | |-- _ciq.si\r\n| | | | |-- segments_1\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.ckp\r\n| | | |-- translog-1.tlog\r\n| | | |-- translog-2.ckp\r\n| | | |-- translog-2.tlog\r\n| | | |-- translog-3.ckp\r\n| | | |-- translog-3.tlog\r\n| | | |-- translog-4.ckp\r\n| | | |-- translog-4.tlog\r\n| | | |-- translog-5.ckp\r\n| | | |-- translog-5.tlog\r\n| | | |-- translog-6.tlog\r\n| | | `-- translog.ckp\r\n| | `-- _state\r\n| | `-- state-2.st\r\n| |-- YACFcRiJQLOvRHS5-EB2yg\r\n| | `-- _state\r\n| | `-- state-2.st\r\n| |-- eG48bcLuSv6V62EOz8dGYw\r\n| | |-- 0\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _1mf.dii\r\n| | | | |-- _1mf.dim\r\n| | | | |-- _1mf.fdt\r\n| | | | |-- _1mf.fdx\r\n| | | | |-- _1mf.fnm\r\n| | | | |-- _1mf.si\r\n| | | | |-- _1mf_Lucene50_0.doc\r\n| | | | |-- _1mf_Lucene50_0.pos\r\n| | | | |-- _1mf_Lucene50_0.tim\r\n| | | | |-- _1mf_Lucene50_0.tip\r\n| | | | |-- _1mf_Lucene70_0.dvd\r\n| | | | |-- _1mf_Lucene70_0.dvm\r\n| | | | |-- _2sd.cfe\r\n| | | | |-- _2sd.cfs\r\n| | | | |-- _2sd.si\r\n| | | | |-- _2sn.dii\r\n| | | | |-- _2sn.dim\r\n| | | | |-- _2sn.fdt\r\n| | | | |-- _2sn.fdx\r\n| | | | |-- _2sn.fnm\r\n| | | | |-- _2sn.si\r\n| | | | |-- _2sn_Lucene50_0.doc\r\n| | | | |-- _2sn_Lucene50_0.pos\r\n| | | | |-- _2sn_Lucene50_0.tim\r\n| | | | |-- _2sn_Lucene50_0.tip\r\n| | | | |-- _2sn_Lucene70_0.dvd\r\n| | | | |-- _2sn_Lucene70_0.dvm\r\n| | | | |-- _34v.cfe\r\n| | | | |-- _34v.cfs\r\n| | | | |-- _34v.si\r\n| | | | |-- _38r.cfe\r\n| | | | |-- _38r.cfs\r\n| | | | |-- 
_38r.si\r\n| | | | |-- _39b.cfe\r\n| | | | |-- _39b.cfs\r\n| | | | |-- _39b.si\r\n| | | | |-- _3b9.cfe\r\n| | | | |-- _3b9.cfs\r\n| | | | |-- _3b9.si\r\n| | | | |-- _3g9.cfe\r\n| | | | |-- _3g9.cfs\r\n| | | | |-- _3g9.si\r\n| | | | |-- _4xv.cfe\r\n| | | | |-- _4xv.cfs\r\n| | | | |-- _4xv.si\r\n| | | | |-- _6bb.cfe\r\n| | | | |-- _6bb.cfs\r\n| | | | |-- _6bb.si\r\n| | | | |-- _7h9.cfe\r\n| | | | |-- _7h9.cfs\r\n| | | | |-- _7h9.si\r\n| | | | |-- _7hj.cfe\r\n| | | | |-- _7hj.cfs\r\n| | | | |-- _7hj.si\r\n| | | | |-- _7hk.cfe\r\n| | | | |-- _7hk.cfs\r\n| | | | |-- _7hk.si\r\n| | | | |-- _7hl.cfe\r\n| | | | |-- _7hl.cfs\r\n| | | | |-- _7hl.si\r\n| | | | |-- _7hm.cfe\r\n| | | | |-- _7hm.cfs\r\n| | | | |-- _7hm.si\r\n| | | | |-- _7hn.cfe\r\n| | | | |-- _7hn.cfs\r\n| | | | |-- _7hn.si\r\n| | | | |-- _7ho.cfe\r\n| | | | |-- _7ho.cfs\r\n| | | | |-- _7ho.si\r\n| | | | |-- _7hp.cfe\r\n| | | | |-- _7hp.cfs\r\n| | | | |-- _7hp.si\r\n| | | | |-- _9l.dii\r\n| | | | |-- _9l.dim\r\n| | | | |-- _9l.fdt\r\n| | | | |-- _9l.fdx\r\n| | | | |-- _9l.fnm\r\n| | | | |-- _9l.si\r\n| | | | |-- _9l_Lucene50_0.doc\r\n| | | | |-- _9l_Lucene50_0.pos\r\n| | | | |-- _9l_Lucene50_0.tim\r\n| | | | |-- _9l_Lucene50_0.tip\r\n| | | | |-- _9l_Lucene70_0.dvd\r\n| | | | |-- _9l_Lucene70_0.dvm\r\n| | | | |-- _u1.dii\r\n| | | | |-- _u1.dim\r\n| | | | |-- _u1.fdt\r\n| | | | |-- _u1.fdx\r\n| | | | |-- _u1.fnm\r\n| | | | |-- _u1.si\r\n| | | | |-- _u1_Lucene50_0.doc\r\n| | | | |-- _u1_Lucene50_0.pos\r\n| | | | |-- _u1_Lucene50_0.tim\r\n| | | | |-- _u1_Lucene50_0.tip\r\n| | | | |-- _u1_Lucene70_0.dvd\r\n| | | | |-- _u1_Lucene70_0.dvm\r\n| | | | |-- segments_1h\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-45.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 1\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _2av.cfe\r\n| | | | |-- _2av.cfs\r\n| | | | |-- _2av.si\r\n| | | | |-- _2b5.dii\r\n| | | | |-- _2b5.dim\r\n| | | | |-- _2b5.fdt\r\n| | | | |-- _2b5.fdx\r\n| | | | |-- _2b5.fnm\r\n| | | | |-- _2b5.si\r\n| | | | |-- _2b5_Lucene50_0.doc\r\n| | | | |-- _2b5_Lucene50_0.pos\r\n| | | | |-- _2b5_Lucene50_0.tim\r\n| | | | |-- _2b5_Lucene50_0.tip\r\n| | | | |-- _2b5_Lucene70_0.dvd\r\n| | | | |-- _2b5_Lucene70_0.dvm\r\n| | | | |-- _2ul.cfe\r\n| | | | |-- _2ul.cfs\r\n| | | | |-- _2ul.si\r\n| | | | |-- _2wm.cfe\r\n| | | | |-- _2wm.cfs\r\n| | | | |-- _2wm.si\r\n| | | | |-- _3az.cfe\r\n| | | | |-- _3az.cfs\r\n| | | | |-- _3az.si\r\n| | | | |-- _3ev.cfe\r\n| | | | |-- _3ev.cfs\r\n| | | | |-- _3ev.si\r\n| | | | |-- _4hs.cfe\r\n| | | | |-- _4hs.cfs\r\n| | | | |-- _4hs.si\r\n| | | | |-- _5gh.cfe\r\n| | | | |-- _5gh.cfs\r\n| | | | |-- _5gh.si\r\n| | | | |-- _5sp.cfe\r\n| | | | |-- _5sp.cfs\r\n| | | | |-- _5sp.si\r\n| | | | |-- _6t3.cfe\r\n| | | | |-- _6t3.cfs\r\n| | | | |-- _6t3.si\r\n| | | | |-- _7id.cfe\r\n| | | | |-- _7id.cfs\r\n| | | | |-- _7id.si\r\n| | | | |-- _7ie.cfe\r\n| | | | |-- _7ie.cfs\r\n| | | | |-- _7ie.si\r\n| | | | |-- _7if.cfe\r\n| | | | |-- _7if.cfs\r\n| | | | |-- _7if.si\r\n| | | | |-- _85.dii\r\n| | | | |-- _85.dim\r\n| | | | |-- _85.fdt\r\n| | | | |-- _85.fdx\r\n| | | | |-- _85.fnm\r\n| | | | |-- _85.si\r\n| | | | |-- _85_Lucene50_0.doc\r\n| | | | |-- _85_Lucene50_0.pos\r\n| | | | |-- _85_Lucene50_0.tim\r\n| | | | |-- _85_Lucene50_0.tip\r\n| | | | |-- _85_Lucene70_0.dvd\r\n| | | | |-- _85_Lucene70_0.dvm\r\n| | | | |-- _sd.dii\r\n| | | | |-- _sd.dim\r\n| | | | |-- _sd.fdt\r\n| | | | |-- _sd.fdx\r\n| | | | |-- _sd.fnm\r\n| | | | |-- _sd.si\r\n| | | | |-- _sd_Lucene50_0.doc\r\n| | | | |-- 
_sd_Lucene50_0.pos\r\n| | | | |-- _sd_Lucene50_0.tim\r\n| | | | |-- _sd_Lucene50_0.tip\r\n| | | | |-- _sd_Lucene70_0.dvd\r\n| | | | |-- _sd_Lucene70_0.dvm\r\n| | | | |-- _xy.dii\r\n| | | | |-- _xy.dim\r\n| | | | |-- _xy.fdt\r\n| | | | |-- _xy.fdx\r\n| | | | |-- _xy.fnm\r\n| | | | |-- _xy.si\r\n| | | | |-- _xy_Lucene50_0.doc\r\n| | | | |-- _xy_Lucene50_0.pos\r\n| | | | |-- _xy_Lucene50_0.tim\r\n| | | | |-- _xy_Lucene50_0.tip\r\n| | | | |-- _xy_Lucene70_0.dvd\r\n| | | | |-- _xy_Lucene70_0.dvm\r\n| | | | |-- segments_1f\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-44.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 2\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _17e.dii\r\n| | | | |-- _17e.dim\r\n| | | | |-- _17e.fdt\r\n| | | | |-- _17e.fdx\r\n| | | | |-- _17e.fnm\r\n| | | | |-- _17e.si\r\n| | | | |-- _17e_Lucene50_0.doc\r\n| | | | |-- _17e_Lucene50_0.pos\r\n| | | | |-- _17e_Lucene50_0.tim\r\n| | | | |-- _17e_Lucene50_0.tip\r\n| | | | |-- _17e_Lucene70_0.dvd\r\n| | | | |-- _17e_Lucene70_0.dvm\r\n| | | | |-- _1fr.dii\r\n| | | | |-- _1fr.dim\r\n| | | | |-- _1fr.fdt\r\n| | | | |-- _1fr.fdx\r\n| | | | |-- _1fr.fnm\r\n| | | | |-- _1fr.si\r\n| | | | |-- _1fr_Lucene50_0.doc\r\n| | | | |-- _1fr_Lucene50_0.pos\r\n| | | | |-- _1fr_Lucene50_0.tim\r\n| | | | |-- _1fr_Lucene50_0.tip\r\n| | | | |-- _1fr_Lucene70_0.dvd\r\n| | | | |-- _1fr_Lucene70_0.dvm\r\n| | | | |-- _1l1.cfe\r\n| | | | |-- _1l1.cfs\r\n| | | | |-- _1l1.si\r\n| | | | |-- _2wt.dii\r\n| | | | |-- _2wt.dim\r\n| | | | |-- _2wt.fdt\r\n| | | | |-- _2wt.fdx\r\n| | | | |-- _2wt.fnm\r\n| | | | |-- _2wt.si\r\n| | | | |-- _2wt_Lucene50_0.doc\r\n| | | | |-- _2wt_Lucene50_0.pos\r\n| | | | |-- _2wt_Lucene50_0.tim\r\n| | | | |-- _2wt_Lucene50_0.tip\r\n| | | | |-- _2wt_Lucene70_0.dvd\r\n| | | | |-- _2wt_Lucene70_0.dvm\r\n| | | | |-- _35f.cfe\r\n| | | | |-- _35f.cfs\r\n| | | | |-- _35f.si\r\n| | | | |-- _3az.cfe\r\n| | | | |-- _3az.cfs\r\n| | | | |-- _3az.si\r\n| | | | |-- _3ir.cfe\r\n| | | | |-- _3ir.cfs\r\n| | | | |-- _3ir.si\r\n| | | | |-- _4y5.cfe\r\n| | | | |-- _4y5.cfs\r\n| | | | |-- _4y5.si\r\n| | | | |-- _6en.cfe\r\n| | | | |-- _6en.cfs\r\n| | | | |-- _6en.si\r\n| | | | |-- _7h9.cfe\r\n| | | | |-- _7h9.cfs\r\n| | | | |-- _7h9.si\r\n| | | | |-- _7hj.cfe\r\n| | | | |-- _7hj.cfs\r\n| | | | |-- _7hj.si\r\n| | | | |-- _7hk.cfe\r\n| | | | |-- _7hk.cfs\r\n| | | | |-- _7hk.si\r\n| | | | |-- _7hl.cfe\r\n| | | | |-- _7hl.cfs\r\n| | | | |-- _7hl.si\r\n| | | | |-- _8f.dii\r\n| | | | |-- _8f.dim\r\n| | | | |-- _8f.fdt\r\n| | | | |-- _8f.fdx\r\n| | | | |-- _8f.fnm\r\n| | | | |-- _8f.si\r\n| | | | |-- _8f_Lucene50_0.doc\r\n| | | | |-- _8f_Lucene50_0.pos\r\n| | | | |-- _8f_Lucene50_0.tim\r\n| | | | |-- _8f_Lucene50_0.tip\r\n| | | | |-- _8f_Lucene70_0.dvd\r\n| | | | |-- _8f_Lucene70_0.dvm\r\n| | | | |-- _jd.dii\r\n| | | | |-- _jd.dim\r\n| | | | |-- _jd.fdt\r\n| | | | |-- _jd.fdx\r\n| | | | |-- _jd.fnm\r\n| | | | |-- _jd.si\r\n| | | | |-- _jd_Lucene50_0.doc\r\n| | | | |-- _jd_Lucene50_0.pos\r\n| | | | |-- _jd_Lucene50_0.tim\r\n| | | | |-- _jd_Lucene50_0.tip\r\n| | | | |-- _jd_Lucene70_0.dvd\r\n| | | | |-- _jd_Lucene70_0.dvm\r\n| | | | |-- _wk.dii\r\n| | | | |-- _wk.dim\r\n| | | | |-- _wk.fdt\r\n| | | | |-- _wk.fdx\r\n| | | | |-- _wk.fnm\r\n| | | | |-- _wk.si\r\n| | | | |-- _wk_Lucene50_0.doc\r\n| | | | |-- _wk_Lucene50_0.pos\r\n| | | | |-- _wk_Lucene50_0.tim\r\n| | | | |-- _wk_Lucene50_0.tip\r\n| | | | |-- _wk_Lucene70_0.dvd\r\n| | | | |-- _wk_Lucene70_0.dvm\r\n| | | | |-- segments_1f\r\n| | | | `-- write.lock\r\n| 
| | `-- translog\r\n| | | |-- translog-44.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 3\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _1ew.dii\r\n| | | | |-- _1ew.dim\r\n| | | | |-- _1ew.fdt\r\n| | | | |-- _1ew.fdx\r\n| | | | |-- _1ew.fnm\r\n| | | | |-- _1ew.si\r\n| | | | |-- _1ew_Lucene50_0.doc\r\n| | | | |-- _1ew_Lucene50_0.pos\r\n| | | | |-- _1ew_Lucene50_0.tim\r\n| | | | |-- _1ew_Lucene50_0.tip\r\n| | | | |-- _1ew_Lucene70_0.dvd\r\n| | | | |-- _1ew_Lucene70_0.dvm\r\n| | | | |-- _24h.dii\r\n| | | | |-- _24h.dim\r\n| | | | |-- _24h.fdt\r\n| | | | |-- _24h.fdx\r\n| | | | |-- _24h.fnm\r\n| | | | |-- _24h.si\r\n| | | | |-- _24h_Lucene50_0.doc\r\n| | | | |-- _24h_Lucene50_0.pos\r\n| | | | |-- _24h_Lucene50_0.tim\r\n| | | | |-- _24h_Lucene50_0.tip\r\n| | | | |-- _24h_Lucene70_0.dvd\r\n| | | | |-- _24h_Lucene70_0.dvm\r\n| | | | |-- _2n3.cfe\r\n| | | | |-- _2n3.cfs\r\n| | | | |-- _2n3.si\r\n| | | | |-- _305.cfe\r\n| | | | |-- _305.cfs\r\n| | | | |-- _305.si\r\n| | | | |-- _3ap.cfe\r\n| | | | |-- _3ap.cfs\r\n| | | | |-- _3ap.si\r\n| | | | |-- _5v7.cfe\r\n| | | | |-- _5v7.cfs\r\n| | | | |-- _5v7.si\r\n| | | | |-- _7gz.cfe\r\n| | | | |-- _7gz.cfs\r\n| | | | |-- _7gz.si\r\n| | | | |-- _7hj.cfe\r\n| | | | |-- _7hj.cfs\r\n| | | | |-- _7hj.si\r\n| | | | |-- _7ht.cfe\r\n| | | | |-- _7ht.cfs\r\n| | | | |-- _7ht.si\r\n| | | | |-- _7i3.cfe\r\n| | | | |-- _7i3.cfs\r\n| | | | |-- _7i3.si\r\n| | | | |-- _7id.cfe\r\n| | | | |-- _7id.cfs\r\n| | | | |-- _7id.si\r\n| | | | |-- _7ie.cfe\r\n| | | | |-- _7ie.cfs\r\n| | | | |-- _7ie.si\r\n| | | | |-- _7if.cfe\r\n| | | | |-- _7if.cfs\r\n| | | | |-- _7if.si\r\n| | | | |-- _7ig.cfe\r\n| | | | |-- _7ig.cfs\r\n| | | | |-- _7ig.si\r\n| | | | |-- _7ih.cfe\r\n| | | | |-- _7ih.cfs\r\n| | | | |-- _7ih.si\r\n| | | | |-- _7ii.cfe\r\n| | | | |-- _7ii.cfs\r\n| | | | |-- _7ii.si\r\n| | | | |-- _ay.dii\r\n| | | | |-- _ay.dim\r\n| | | | |-- _ay.fdt\r\n| | | | |-- _ay.fdx\r\n| | | | |-- _ay.fnm\r\n| | | | |-- _ay.si\r\n| | | | |-- _ay_Lucene50_0.doc\r\n| | | | |-- _ay_Lucene50_0.pos\r\n| | | | |-- _ay_Lucene50_0.tim\r\n| | | | |-- _ay_Lucene50_0.tip\r\n| | | | |-- _ay_Lucene70_0.dvd\r\n| | | | |-- _ay_Lucene70_0.dvm\r\n| | | | |-- _sn.dii\r\n| | | | |-- _sn.dim\r\n| | | | |-- _sn.fdt\r\n| | | | |-- _sn.fdx\r\n| | | | |-- _sn.fnm\r\n| | | | |-- _sn.si\r\n| | | | |-- _sn_Lucene50_0.doc\r\n| | | | |-- _sn_Lucene50_0.pos\r\n| | | | |-- _sn_Lucene50_0.tim\r\n| | | | |-- _sn_Lucene50_0.tip\r\n| | | | |-- _sn_Lucene70_0.dvd\r\n| | | | |-- _sn_Lucene70_0.dvm\r\n| | | | |-- _yi.dii\r\n| | | | |-- _yi.dim\r\n| | | | |-- _yi.fdt\r\n| | | | |-- _yi.fdx\r\n| | | | |-- _yi.fnm\r\n| | | | |-- _yi.si\r\n| | | | |-- _yi_Lucene50_0.doc\r\n| | | | |-- _yi_Lucene50_0.pos\r\n| | | | |-- _yi_Lucene50_0.tim\r\n| | | | |-- _yi_Lucene50_0.tip\r\n| | | | |-- _yi_Lucene70_0.dvd\r\n| | | | |-- _yi_Lucene70_0.dvm\r\n| | | | |-- segments_1f\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-44.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 4\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _2zl.cfe\r\n| | | | |-- _2zl.cfs\r\n| | | | |-- _2zl.si\r\n| | | | |-- _3g9.cfe\r\n| | | | |-- _3g9.cfs\r\n| | | | |-- _3g9.si\r\n| | | | |-- _6cp.cfe\r\n| | | | |-- _6cp.cfs\r\n| | | | |-- _6cp.si\r\n| | | | |-- _6cz.dii\r\n| | | | |-- _6cz.dim\r\n| | | | |-- _6cz.fdt\r\n| | | | |-- _6cz.fdx\r\n| | | | |-- _6cz.fnm\r\n| | | | |-- _6cz.si\r\n| | | | |-- _6cz_Lucene50_0.doc\r\n| | | | |-- _6cz_Lucene50_0.pos\r\n| | | | |-- _6cz_Lucene50_0.tim\r\n| | | | 
|-- _6cz_Lucene50_0.tip\r\n| | | | |-- _6cz_Lucene70_0.dvd\r\n| | | | |-- _6cz_Lucene70_0.dvm\r\n| | | | |-- _7ct.cfe\r\n| | | | |-- _7ct.cfs\r\n| | | | |-- _7ct.si\r\n| | | | |-- _7f1.cfe\r\n| | | | |-- _7f1.cfs\r\n| | | | |-- _7f1.si\r\n| | | | |-- _7fl.cfe\r\n| | | | |-- _7fl.cfs\r\n| | | | |-- _7fl.si\r\n| | | | |-- _7g5.cfe\r\n| | | | |-- _7g5.cfs\r\n| | | | |-- _7g5.si\r\n| | | | |-- _7gf.cfe\r\n| | | | |-- _7gf.cfs\r\n| | | | |-- _7gf.si\r\n| | | | |-- _7gp.cfe\r\n| | | | |-- _7gp.cfs\r\n| | | | |-- _7gp.si\r\n| | | | |-- _7gz.cfe\r\n| | | | |-- _7gz.cfs\r\n| | | | |-- _7gz.si\r\n| | | | |-- _7h9.cfe\r\n| | | | |-- _7h9.cfs\r\n| | | | |-- _7h9.si\r\n| | | | |-- _7hj.cfe\r\n| | | | |-- _7hj.cfs\r\n| | | | |-- _7hj.si\r\n| | | | |-- _7hk.cfe\r\n| | | | |-- _7hk.cfs\r\n| | | | |-- _7hk.si\r\n| | | | |-- _7hl.cfe\r\n| | | | |-- _7hl.cfs\r\n| | | | |-- _7hl.si\r\n| | | | |-- _7hm.cfe\r\n| | | | |-- _7hm.cfs\r\n| | | | |-- _7hm.si\r\n| | | | |-- _7hn.cfe\r\n| | | | |-- _7hn.cfs\r\n| | | | |-- _7hn.si\r\n| | | | |-- _7ho.cfe\r\n| | | | |-- _7ho.cfs\r\n| | | | |-- _7ho.si\r\n| | | | |-- _7hp.cfe\r\n| | | | |-- _7hp.cfs\r\n| | | | |-- _7hp.si\r\n| | | | |-- _7hq.cfe\r\n| | | | |-- _7hq.cfs\r\n| | | | |-- _7hq.si\r\n| | | | |-- _7hr.cfe\r\n| | | | |-- _7hr.cfs\r\n| | | | |-- _7hr.si\r\n| | | | |-- _x4.dii\r\n| | | | |-- _x4.dim\r\n| | | | |-- _x4.fdt\r\n| | | | |-- _x4.fdx\r\n| | | | |-- _x4.fnm\r\n| | | | |-- _x4.si\r\n| | | | |-- _x4_Lucene50_0.doc\r\n| | | | |-- _x4_Lucene50_0.pos\r\n| | | | |-- _x4_Lucene50_0.tim\r\n| | | | |-- _x4_Lucene50_0.tip\r\n| | | | |-- _x4_Lucene70_0.dvd\r\n| | | | |-- _x4_Lucene70_0.dvm\r\n| | | | |-- segments_1d\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-43.tlog\r\n| | | `-- translog.ckp\r\n| | `-- _state\r\n| | `-- state-7.st\r\n| |-- i8wudibrTVWZOJTgqgMUgg\r\n| | |-- 0\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _1gd.dii\r\n| | | | |-- _1gd.dim\r\n| | | | |-- _1gd.fdt\r\n| | | | |-- _1gd.fdx\r\n| | | | |-- _1gd.fnm\r\n| | | | |-- _1gd.si\r\n| | | | |-- _1gd_Lucene50_0.doc\r\n| | | | |-- _1gd_Lucene50_0.pos\r\n| | | | |-- _1gd_Lucene50_0.tim\r\n| | | | |-- _1gd_Lucene50_0.tip\r\n| | | | |-- _1gd_Lucene70_0.dvd\r\n| | | | |-- _1gd_Lucene70_0.dvm\r\n| | | | |-- _3pc.dii\r\n| | | | |-- _3pc.dim\r\n| | | | |-- _3pc.fdt\r\n| | | | |-- _3pc.fdx\r\n| | | | |-- _3pc.fnm\r\n| | | | |-- _3pc.si\r\n| | | | |-- _3pc_Lucene50_0.doc\r\n| | | | |-- _3pc_Lucene50_0.pos\r\n| | | | |-- _3pc_Lucene50_0.tim\r\n| | | | |-- _3pc_Lucene50_0.tip\r\n| | | | |-- _3pc_Lucene70_0.dvd\r\n| | | | |-- _3pc_Lucene70_0.dvm\r\n| | | | |-- _412.dii\r\n| | | | |-- _412.dim\r\n| | | | |-- _412.fdt\r\n| | | | |-- _412.fdx\r\n| | | | |-- _412.fnm\r\n| | | | |-- _412.si\r\n| | | | |-- _412_Lucene50_0.doc\r\n| | | | |-- _412_Lucene50_0.pos\r\n| | | | |-- _412_Lucene50_0.tim\r\n| | | | |-- _412_Lucene50_0.tip\r\n| | | | |-- _412_Lucene70_0.dvd\r\n| | | | |-- _412_Lucene70_0.dvm\r\n| | | | |-- _67o.dii\r\n| | | | |-- _67o.dim\r\n| | | | |-- _67o.fdt\r\n| | | | |-- _67o.fdx\r\n| | | | |-- _67o.fnm\r\n| | | | |-- _67o.si\r\n| | | | |-- _67o_Lucene50_0.doc\r\n| | | | |-- _67o_Lucene50_0.pos\r\n| | | | |-- _67o_Lucene50_0.tim\r\n| | | | |-- _67o_Lucene50_0.tip\r\n| | | | |-- _67o_Lucene70_0.dvd\r\n| | | | |-- _67o_Lucene70_0.dvm\r\n| | | | |-- _6hy.cfe\r\n| | | | |-- _6hy.cfs\r\n| | | | |-- _6hy.si\r\n| | | | |-- _6ii.cfe\r\n| | | | |-- _6ii.cfs\r\n| | | | |-- _6ii.si\r\n| | | | |-- _6p6.cfe\r\n| | | | |-- _6p6.cfs\r\n| | | | |-- _6p6.si\r\n| | 
| | |-- _6ss.cfe\r\n| | | | |-- _6ss.cfs\r\n| | | | |-- _6ss.si\r\n| | | | |-- _7hs.cfe\r\n| | | | |-- _7hs.cfs\r\n| | | | |-- _7hs.si\r\n| | | | |-- _7ic.cfe\r\n| | | | |-- _7ic.cfs\r\n| | | | |-- _7ic.si\r\n| | | | |-- _7im.cfe\r\n| | | | |-- _7im.cfs\r\n| | | | |-- _7im.si\r\n| | | | |-- _7iw.cfe\r\n| | | | |-- _7iw.cfs\r\n| | | | |-- _7iw.si\r\n| | | | |-- _lr.dii\r\n| | | | |-- _lr.dim\r\n| | | | |-- _lr.fdt\r\n| | | | |-- _lr.fdx\r\n| | | | |-- _lr.fnm\r\n| | | | |-- _lr.si\r\n| | | | |-- _lr_Lucene50_0.doc\r\n| | | | |-- _lr_Lucene50_0.pos\r\n| | | | |-- _lr_Lucene50_0.tim\r\n| | | | |-- _lr_Lucene50_0.tip\r\n| | | | |-- _lr_Lucene70_0.dvd\r\n| | | | |-- _lr_Lucene70_0.dvm\r\n| | | | |-- segments_11\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-23.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 1\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _1nm.dii\r\n| | | | |-- _1nm.dim\r\n| | | | |-- _1nm.fdt\r\n| | | | |-- _1nm.fdx\r\n| | | | |-- _1nm.fnm\r\n| | | | |-- _1nm.si\r\n| | | | |-- _1nm_Lucene50_0.doc\r\n| | | | |-- _1nm_Lucene50_0.pos\r\n| | | | |-- _1nm_Lucene50_0.tim\r\n| | | | |-- _1nm_Lucene50_0.tip\r\n| | | | |-- _1nm_Lucene70_0.dvd\r\n| | | | |-- _1nm_Lucene70_0.dvm\r\n| | | | |-- _3kw.dii\r\n| | | | |-- _3kw.dim\r\n| | | | |-- _3kw.fdt\r\n| | | | |-- _3kw.fdx\r\n| | | | |-- _3kw.fnm\r\n| | | | |-- _3kw.si\r\n| | | | |-- _3kw_Lucene50_0.doc\r\n| | | | |-- _3kw_Lucene50_0.pos\r\n| | | | |-- _3kw_Lucene50_0.tim\r\n| | | | |-- _3kw_Lucene50_0.tip\r\n| | | | |-- _3kw_Lucene70_0.dvd\r\n| | | | |-- _3kw_Lucene70_0.dvm\r\n| | | | |-- _43k.cfe\r\n| | | | |-- _43k.cfs\r\n| | | | |-- _43k.si\r\n| | | | |-- _5dy.dii\r\n| | | | |-- _5dy.dim\r\n| | | | |-- _5dy.fdt\r\n| | | | |-- _5dy.fdx\r\n| | | | |-- _5dy.fnm\r\n| | | | |-- _5dy.si\r\n| | | | |-- _5dy_Lucene50_0.doc\r\n| | | | |-- _5dy_Lucene50_0.pos\r\n| | | | |-- _5dy_Lucene50_0.tim\r\n| | | | |-- _5dy_Lucene50_0.tip\r\n| | | | |-- _5dy_Lucene70_0.dvd\r\n| | | | |-- _5dy_Lucene70_0.dvm\r\n| | | | |-- _5no.cfe\r\n| | | | |-- _5no.cfs\r\n| | | | |-- _5no.si\r\n| | | | |-- _660.cfe\r\n| | | | |-- _660.cfs\r\n| | | | |-- _660.si\r\n| | | | |-- _6d8.cfe\r\n| | | | |-- _6d8.cfs\r\n| | | | |-- _6d8.si\r\n| | | | |-- _6jc.cfe\r\n| | | | |-- _6jc.cfs\r\n| | | | |-- _6jc.si\r\n| | | | |-- _6l0.cfe\r\n| | | | |-- _6l0.cfs\r\n| | | | |-- _6l0.si\r\n| | | | |-- _6n8.cfe\r\n| | | | |-- _6n8.cfs\r\n| | | | |-- _6n8.si\r\n| | | | |-- _6tw.cfe\r\n| | | | |-- _6tw.cfs\r\n| | | | |-- _6tw.si\r\n| | | | |-- _7j6.cfe\r\n| | | | |-- _7j6.cfs\r\n| | | | |-- _7j6.si\r\n| | | | |-- _7j7.cfe\r\n| | | | |-- _7j7.cfs\r\n| | | | |-- _7j7.si\r\n| | | | |-- _7j8.cfe\r\n| | | | |-- _7j8.cfs\r\n| | | | |-- _7j8.si\r\n| | | | |-- _7j9.cfe\r\n| | | | |-- _7j9.cfs\r\n| | | | |-- _7j9.si\r\n| | | | |-- _7ja.cfe\r\n| | | | |-- _7ja.cfs\r\n| | | | |-- _7ja.si\r\n| | | | |-- _7jb.cfe\r\n| | | | |-- _7jb.cfs\r\n| | | | |-- _7jb.si\r\n| | | | |-- _7jc.cfe\r\n| | | | |-- _7jc.cfs\r\n| | | | |-- _7jc.si\r\n| | | | |-- _lh.dii\r\n| | | | |-- _lh.dim\r\n| | | | |-- _lh.fdt\r\n| | | | |-- _lh.fdx\r\n| | | | |-- _lh.fnm\r\n| | | | |-- _lh.si\r\n| | | | |-- _lh_Lucene50_0.doc\r\n| | | | |-- _lh_Lucene50_0.pos\r\n| | | | |-- _lh_Lucene50_0.tim\r\n| | | | |-- _lh_Lucene50_0.tip\r\n| | | | |-- _lh_Lucene70_0.dvd\r\n| | | | |-- _lh_Lucene70_0.dvm\r\n| | | | |-- segments_13\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-24.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 2\r\n| | | |-- _state\r\n| | | | `-- 
state-0.st\r\n| | | |-- index\r\n| | | | |-- _4as.dii\r\n| | | | |-- _4as.dim\r\n| | | | |-- _4as.fdt\r\n| | | | |-- _4as.fdx\r\n| | | | |-- _4as.fnm\r\n| | | | |-- _4as.si\r\n| | | | |-- _4as_Lucene50_0.doc\r\n| | | | |-- _4as_Lucene50_0.pos\r\n| | | | |-- _4as_Lucene50_0.tim\r\n| | | | |-- _4as_Lucene50_0.tip\r\n| | | | |-- _4as_Lucene70_0.dvd\r\n| | | | |-- _4as_Lucene70_0.dvm\r\n| | | | |-- _66a.cfe\r\n| | | | |-- _66a.cfs\r\n| | | | |-- _66a.si\r\n| | | | |-- _66k.dii\r\n| | | | |-- _66k.dim\r\n| | | | |-- _66k.fdt\r\n| | | | |-- _66k.fdx\r\n| | | | |-- _66k.fnm\r\n| | | | |-- _66k.si\r\n| | | | |-- _66k_Lucene50_0.doc\r\n| | | | |-- _66k_Lucene50_0.pos\r\n| | | | |-- _66k_Lucene50_0.tim\r\n| | | | |-- _66k_Lucene50_0.tip\r\n| | | | |-- _66k_Lucene70_0.dvd\r\n| | | | |-- _66k_Lucene70_0.dvm\r\n| | | | |-- _6ho.cfe\r\n| | | | |-- _6ho.cfs\r\n| | | | |-- _6ho.si\r\n| | | | |-- _6ph.cfe\r\n| | | | |-- _6ph.cfs\r\n| | | | |-- _6ph.si\r\n| | | | |-- _6pq.cfe\r\n| | | | |-- _6pq.cfs\r\n| | | | |-- _6pq.si\r\n| | | | |-- _6qa.cfe\r\n| | | | |-- _6qa.cfs\r\n| | | | |-- _6qa.si\r\n| | | | |-- _6u6.cfe\r\n| | | | |-- _6u6.cfs\r\n| | | | |-- _6u6.si\r\n| | | | |-- _7j6.cfe\r\n| | | | |-- _7j6.cfs\r\n| | | | |-- _7j6.si\r\n| | | | |-- _7jg.cfe\r\n| | | | |-- _7jg.cfs\r\n| | | | |-- _7jg.si\r\n| | | | |-- _7jq.cfe\r\n| | | | |-- _7jq.cfs\r\n| | | | |-- _7jq.si\r\n| | | | |-- _7k0.cfe\r\n| | | | |-- _7k0.cfs\r\n| | | | |-- _7k0.si\r\n| | | | |-- _lr.dii\r\n| | | | |-- _lr.dim\r\n| | | | |-- _lr.fdt\r\n| | | | |-- _lr.fdx\r\n| | | | |-- _lr.fnm\r\n| | | | |-- _lr.si\r\n| | | | |-- _lr_Lucene50_0.doc\r\n| | | | |-- _lr_Lucene50_0.pos\r\n| | | | |-- _lr_Lucene50_0.tim\r\n| | | | |-- _lr_Lucene50_0.tip\r\n| | | | |-- _lr_Lucene70_0.dvd\r\n| | | | |-- _lr_Lucene70_0.dvm\r\n| | | | |-- segments_11\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-23.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 3\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _1ms.dii\r\n| | | | |-- _1ms.dim\r\n| | | | |-- _1ms.fdt\r\n| | | | |-- _1ms.fdx\r\n| | | | |-- _1ms.fnm\r\n| | | | |-- _1ms.si\r\n| | | | |-- _1ms_Lucene50_0.doc\r\n| | | | |-- _1ms_Lucene50_0.pos\r\n| | | | |-- _1ms_Lucene50_0.tim\r\n| | | | |-- _1ms_Lucene50_0.tip\r\n| | | | |-- _1ms_Lucene70_0.dvd\r\n| | | | |-- _1ms_Lucene70_0.dvm\r\n| | | | |-- _301.dii\r\n| | | | |-- _301.dim\r\n| | | | |-- _301.fdt\r\n| | | | |-- _301.fdx\r\n| | | | |-- _301.fnm\r\n| | | | |-- _301.si\r\n| | | | |-- _301_Lucene50_0.doc\r\n| | | | |-- _301_Lucene50_0.pos\r\n| | | | |-- _301_Lucene50_0.tim\r\n| | | | |-- _301_Lucene50_0.tip\r\n| | | | |-- _301_Lucene70_0.dvd\r\n| | | | |-- _301_Lucene70_0.dvm\r\n| | | | |-- _42q.dii\r\n| | | | |-- _42q.dim\r\n| | | | |-- _42q.fdt\r\n| | | | |-- _42q.fdx\r\n| | | | |-- _42q.fnm\r\n| | | | |-- _42q.si\r\n| | | | |-- _42q_Lucene50_0.doc\r\n| | | | |-- _42q_Lucene50_0.pos\r\n| | | | |-- _42q_Lucene50_0.tim\r\n| | | | |-- _42q_Lucene50_0.tip\r\n| | | | |-- _42q_Lucene70_0.dvd\r\n| | | | |-- _42q_Lucene70_0.dvm\r\n| | | | |-- _4yy.dii\r\n| | | | |-- _4yy.dim\r\n| | | | |-- _4yy.fdt\r\n| | | | |-- _4yy.fdx\r\n| | | | |-- _4yy.fnm\r\n| | | | |-- _4yy.si\r\n| | | | |-- _4yy_Lucene50_0.doc\r\n| | | | |-- _4yy_Lucene50_0.pos\r\n| | | | |-- _4yy_Lucene50_0.tim\r\n| | | | |-- _4yy_Lucene50_0.tip\r\n| | | | |-- _4yy_Lucene70_0.dvd\r\n| | | | |-- _4yy_Lucene70_0.dvm\r\n| | | | |-- _69m.dii\r\n| | | | |-- _69m.dim\r\n| | | | |-- _69m.fdt\r\n| | | | |-- _69m.fdx\r\n| | | | |-- _69m.fnm\r\n| | | | |-- 
_69m.si\r\n| | | | |-- _69m_Lucene50_0.doc\r\n| | | | |-- _69m_Lucene50_0.pos\r\n| | | | |-- _69m_Lucene50_0.tim\r\n| | | | |-- _69m_Lucene50_0.tip\r\n| | | | |-- _69m_Lucene70_0.dvd\r\n| | | | |-- _69m_Lucene70_0.dvm\r\n| | | | |-- _6he.cfe\r\n| | | | |-- _6he.cfs\r\n| | | | |-- _6he.si\r\n| | | | |-- _6it.cfe\r\n| | | | |-- _6it.cfs\r\n| | | | |-- _6it.si\r\n| | | | |-- _6v1.cfe\r\n| | | | |-- _6v1.cfs\r\n| | | | |-- _6v1.si\r\n| | | | |-- _7go.cfe\r\n| | | | |-- _7go.cfs\r\n| | | | |-- _7go.si\r\n| | | | |-- _7h8.cfe\r\n| | | | |-- _7h8.cfs\r\n| | | | |-- _7h8.si\r\n| | | | |-- _7hi.cfe\r\n| | | | |-- _7hi.cfs\r\n| | | | |-- _7hi.si\r\n| | | | |-- _7hs.cfe\r\n| | | | |-- _7hs.cfs\r\n| | | | |-- _7hs.si\r\n| | | | |-- _7ht.cfe\r\n| | | | |-- _7ht.cfs\r\n| | | | |-- _7ht.si\r\n| | | | |-- _7hu.cfe\r\n| | | | |-- _7hu.cfs\r\n| | | | |-- _7hu.si\r\n| | | | |-- _7hv.cfe\r\n| | | | |-- _7hv.cfs\r\n| | | | |-- _7hv.si\r\n| | | | |-- _lh.dii\r\n| | | | |-- _lh.dim\r\n| | | | |-- _lh.fdt\r\n| | | | |-- _lh.fdx\r\n| | | | |-- _lh.fnm\r\n| | | | |-- _lh.si\r\n| | | | |-- _lh_Lucene50_0.doc\r\n| | | | |-- _lh_Lucene50_0.pos\r\n| | | | |-- _lh_Lucene50_0.tim\r\n| | | | |-- _lh_Lucene50_0.tip\r\n| | | | |-- _lh_Lucene70_0.dvd\r\n| | | | |-- _lh_Lucene70_0.dvm\r\n| | | | |-- segments_11\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-24.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 4\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _1dw.dii\r\n| | | | |-- _1dw.dim\r\n| | | | |-- _1dw.fdt\r\n| | | | |-- _1dw.fdx\r\n| | | | |-- _1dw.fnm\r\n| | | | |-- _1dw.si\r\n| | | | |-- _1dw_Lucene50_0.doc\r\n| | | | |-- _1dw_Lucene50_0.pos\r\n| | | | |-- _1dw_Lucene50_0.tim\r\n| | | | |-- _1dw_Lucene50_0.tip\r\n| | | | |-- _1dw_Lucene70_0.dvd\r\n| | | | |-- _1dw_Lucene70_0.dvm\r\n| | | | |-- _3k2.dii\r\n| | | | |-- _3k2.dim\r\n| | | | |-- _3k2.fdt\r\n| | | | |-- _3k2.fdx\r\n| | | | |-- _3k2.fnm\r\n| | | | |-- _3k2.si\r\n| | | | |-- _3k2_Lucene50_0.doc\r\n| | | | |-- _3k2_Lucene50_0.pos\r\n| | | | |-- _3k2_Lucene50_0.tim\r\n| | | | |-- _3k2_Lucene50_0.tip\r\n| | | | |-- _3k2_Lucene70_0.dvd\r\n| | | | |-- _3k2_Lucene70_0.dvm\r\n| | | | |-- _5no.dii\r\n| | | | |-- _5no.dim\r\n| | | | |-- _5no.fdt\r\n| | | | |-- _5no.fdx\r\n| | | | |-- _5no.fnm\r\n| | | | |-- _5no.si\r\n| | | | |-- _5no_Lucene50_0.doc\r\n| | | | |-- _5no_Lucene50_0.pos\r\n| | | | |-- _5no_Lucene50_0.tim\r\n| | | | |-- _5no_Lucene50_0.tip\r\n| | | | |-- _5no_Lucene70_0.dvd\r\n| | | | |-- _5no_Lucene70_0.dvm\r\n| | | | |-- _69m.cfe\r\n| | | | |-- _69m.cfs\r\n| | | | |-- _69m.si\r\n| | | | |-- _6jc.cfe\r\n| | | | |-- _6jc.cfs\r\n| | | | |-- _6jc.si\r\n| | | | |-- _7e6.cfe\r\n| | | | |-- _7e6.cfs\r\n| | | | |-- _7e6.si\r\n| | | | |-- _7g4.cfe\r\n| | | | |-- _7g4.cfs\r\n| | | | |-- _7g4.si\r\n| | | | |-- _7ge.cfe\r\n| | | | |-- _7ge.cfs\r\n| | | | |-- _7ge.si\r\n| | | | |-- _7go.cfe\r\n| | | | |-- _7go.cfs\r\n| | | | |-- _7go.si\r\n| | | | |-- _7gy.cfe\r\n| | | | |-- _7gy.cfs\r\n| | | | |-- _7gy.si\r\n| | | | |-- _7h8.cfe\r\n| | | | |-- _7h8.cfs\r\n| | | | |-- _7h8.si\r\n| | | | |-- _7hi.cfe\r\n| | | | |-- _7hi.cfs\r\n| | | | |-- _7hi.si\r\n| | | | |-- _7hj.cfe\r\n| | | | |-- _7hj.cfs\r\n| | | | |-- _7hj.si\r\n| | | | |-- _7hk.cfe\r\n| | | | |-- _7hk.cfs\r\n| | | | |-- _7hk.si\r\n| | | | |-- _7hl.cfe\r\n| | | | |-- _7hl.cfs\r\n| | | | |-- _7hl.si\r\n| | | | |-- _7hm.cfe\r\n| | | | |-- _7hm.cfs\r\n| | | | |-- _7hm.si\r\n| | | | |-- _7hn.cfe\r\n| | | | |-- _7hn.cfs\r\n| | | | |-- _7hn.si\r\n| | | | |-- 
_7ho.cfe\r\n| | | | |-- _7ho.cfs\r\n| | | | |-- _7ho.si\r\n| | | | |-- _7hp.cfe\r\n| | | | |-- _7hp.cfs\r\n| | | | |-- _7hp.si\r\n| | | | |-- _lh.dii\r\n| | | | |-- _lh.dim\r\n| | | | |-- _lh.fdt\r\n| | | | |-- _lh.fdx\r\n| | | | |-- _lh.fnm\r\n| | | | |-- _lh.si\r\n| | | | |-- _lh_Lucene50_0.doc\r\n| | | | |-- _lh_Lucene50_0.pos\r\n| | | | |-- _lh_Lucene50_0.tim\r\n| | | | |-- _lh_Lucene50_0.tip\r\n| | | | |-- _lh_Lucene70_0.dvd\r\n| | | | |-- _lh_Lucene70_0.dvm\r\n| | | | |-- segments_13\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-24.tlog\r\n| | | `-- translog.ckp\r\n| | `-- _state\r\n| | `-- state-7.st\r\n| |-- n9XjMd4cSzGq5BAU1M8UMQ\r\n| | |-- 0\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _28.dii\r\n| | | | |-- _28.dim\r\n| | | | |-- _28.fdt\r\n| | | | |-- _28.fdx\r\n| | | | |-- _28.fnm\r\n| | | | |-- _28.nvd\r\n| | | | |-- _28.nvm\r\n| | | | |-- _28.si\r\n| | | | |-- _28_Lucene50_0.doc\r\n| | | | |-- _28_Lucene50_0.pos\r\n| | | | |-- _28_Lucene50_0.tim\r\n| | | | |-- _28_Lucene50_0.tip\r\n| | | | |-- _28_Lucene70_0.dvd\r\n| | | | |-- _28_Lucene70_0.dvm\r\n| | | | |-- _29.cfe\r\n| | | | |-- _29.cfs\r\n| | | | |-- _29.si\r\n| | | | |-- _2a.cfe\r\n| | | | |-- _2a.cfs\r\n| | | | |-- _2a.si\r\n| | | | |-- _2b.cfe\r\n| | | | |-- _2b.cfs\r\n| | | | |-- _2b.si\r\n| | | | |-- _2c.cfe\r\n| | | | |-- _2c.cfs\r\n| | | | |-- _2c.si\r\n| | | | |-- _2d.cfe\r\n| | | | |-- _2d.cfs\r\n| | | | |-- _2d.si\r\n| | | | |-- _2e.cfe\r\n| | | | |-- _2e.cfs\r\n| | | | |-- _2e.si\r\n| | | | |-- _2f.cfe\r\n| | | | |-- _2f.cfs\r\n| | | | |-- _2f.si\r\n| | | | |-- segments_2\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.ckp\r\n| | | |-- translog-1.tlog\r\n| | | |-- translog-2.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 1\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _28.dii\r\n| | | | |-- _28.dim\r\n| | | | |-- _28.fdt\r\n| | | | |-- _28.fdx\r\n| | | | |-- _28.fnm\r\n| | | | |-- _28.nvd\r\n| | | | |-- _28.nvm\r\n| | | | |-- _28.si\r\n| | | | |-- _28_Lucene50_0.doc\r\n| | | | |-- _28_Lucene50_0.pos\r\n| | | | |-- _28_Lucene50_0.tim\r\n| | | | |-- _28_Lucene50_0.tip\r\n| | | | |-- _28_Lucene70_0.dvd\r\n| | | | |-- _28_Lucene70_0.dvm\r\n| | | | |-- _29.cfe\r\n| | | | |-- _29.cfs\r\n| | | | |-- _29.si\r\n| | | | |-- _2a.cfe\r\n| | | | |-- _2a.cfs\r\n| | | | |-- _2a.si\r\n| | | | |-- _2b.cfe\r\n| | | | |-- _2b.cfs\r\n| | | | |-- _2b.si\r\n| | | | |-- _2c.cfe\r\n| | | | |-- _2c.cfs\r\n| | | | |-- _2c.si\r\n| | | | |-- _2d.cfe\r\n| | | | |-- _2d.cfs\r\n| | | | |-- _2d.si\r\n| | | | |-- segments_2\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.ckp\r\n| | | |-- translog-1.tlog\r\n| | | |-- translog-2.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 2\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _28.dii\r\n| | | | |-- _28.dim\r\n| | | | |-- _28.fdt\r\n| | | | |-- _28.fdx\r\n| | | | |-- _28.fnm\r\n| | | | |-- _28.nvd\r\n| | | | |-- _28.nvm\r\n| | | | |-- _28.si\r\n| | | | |-- _28_Lucene50_0.doc\r\n| | | | |-- _28_Lucene50_0.pos\r\n| | | | |-- _28_Lucene50_0.tim\r\n| | | | |-- _28_Lucene50_0.tip\r\n| | | | |-- _28_Lucene70_0.dvd\r\n| | | | |-- _28_Lucene70_0.dvm\r\n| | | | |-- _29.cfe\r\n| | | | |-- _29.cfs\r\n| | | | |-- _29.si\r\n| | | | |-- _2a.cfe\r\n| | | | |-- _2a.cfs\r\n| | | | |-- _2a.si\r\n| | | | |-- _2b.cfe\r\n| | | | |-- _2b.cfs\r\n| | | | |-- _2b.si\r\n| | | | |-- _2c.cfe\r\n| | | | |-- _2c.cfs\r\n| | | | |-- _2c.si\r\n| | | | |-- _2d.cfe\r\n| 
| | | |-- _2d.cfs\r\n| | | | |-- _2d.si\r\n| | | | |-- segments_2\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.ckp\r\n| | | |-- translog-1.tlog\r\n| | | |-- translog-2.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 3\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _28.dii\r\n| | | | |-- _28.dim\r\n| | | | |-- _28.fdt\r\n| | | | |-- _28.fdx\r\n| | | | |-- _28.fnm\r\n| | | | |-- _28.nvd\r\n| | | | |-- _28.nvm\r\n| | | | |-- _28.si\r\n| | | | |-- _28_Lucene50_0.doc\r\n| | | | |-- _28_Lucene50_0.pos\r\n| | | | |-- _28_Lucene50_0.tim\r\n| | | | |-- _28_Lucene50_0.tip\r\n| | | | |-- _28_Lucene70_0.dvd\r\n| | | | |-- _28_Lucene70_0.dvm\r\n| | | | |-- _29.cfe\r\n| | | | |-- _29.cfs\r\n| | | | |-- _29.si\r\n| | | | |-- _2a.cfe\r\n| | | | |-- _2a.cfs\r\n| | | | |-- _2a.si\r\n| | | | |-- _2b.cfe\r\n| | | | |-- _2b.cfs\r\n| | | | |-- _2b.si\r\n| | | | |-- _2c.cfe\r\n| | | | |-- _2c.cfs\r\n| | | | |-- _2c.si\r\n| | | | |-- segments_2\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.ckp\r\n| | | |-- translog-1.tlog\r\n| | | |-- translog-2.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 4\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _28.dii\r\n| | | | |-- _28.dim\r\n| | | | |-- _28.fdt\r\n| | | | |-- _28.fdx\r\n| | | | |-- _28.fnm\r\n| | | | |-- _28.nvd\r\n| | | | |-- _28.nvm\r\n| | | | |-- _28.si\r\n| | | | |-- _28_Lucene50_0.doc\r\n| | | | |-- _28_Lucene50_0.pos\r\n| | | | |-- _28_Lucene50_0.tim\r\n| | | | |-- _28_Lucene50_0.tip\r\n| | | | |-- _28_Lucene70_0.dvd\r\n| | | | |-- _28_Lucene70_0.dvm\r\n| | | | |-- _29.cfe\r\n| | | | |-- _29.cfs\r\n| | | | |-- _29.si\r\n| | | | |-- _2a.cfe\r\n| | | | |-- _2a.cfs\r\n| | | | |-- _2a.si\r\n| | | | |-- _2b.cfe\r\n| | | | |-- _2b.cfs\r\n| | | | |-- _2b.si\r\n| | | | |-- _2c.cfe\r\n| | | | |-- _2c.cfs\r\n| | | | |-- _2c.si\r\n| | | | |-- segments_2\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-1.ckp\r\n| | | |-- translog-1.tlog\r\n| | | |-- translog-2.tlog\r\n| | | `-- translog.ckp\r\n| | `-- _state\r\n| | `-- state-6.st\r\n| |-- njw3jiVnTNCT2sVS5BwrJQ\r\n| | |-- 1\r\n| | | |-- _state\r\n| | | | `-- state-8.st\r\n| | | |-- index\r\n| | | | |-- _h6k.dii\r\n| | | | |-- _h6k.dim\r\n| | | | |-- _h6k.fdt\r\n| | | | |-- _h6k.fdx\r\n| | | | |-- _h6k.fnm\r\n| | | | |-- _h6k.si\r\n| | | | |-- _h6k_Lucene50_0.doc\r\n| | | | |-- _h6k_Lucene50_0.pos\r\n| | | | |-- _h6k_Lucene50_0.tim\r\n| | | | |-- _h6k_Lucene50_0.tip\r\n| | | | |-- _h6k_Lucene70_0.dvd\r\n| | | | |-- _h6k_Lucene70_0.dvm\r\n| | | | |-- _hmy.cfe\r\n| | | | |-- _hmy.cfs\r\n| | | | |-- _hmy.si\r\n| | | | |-- _i1q.cfe\r\n| | | | |-- _i1q.cfs\r\n| | | | |-- _i1q.si\r\n| | | | |-- _io8.cfe\r\n| | | | |-- _io8.cfs\r\n| | | | |-- _io8.si\r\n| | | | |-- _j11.cfe\r\n| | | | |-- _j11.cfs\r\n| | | | |-- _j11.si\r\n| | | | |-- _j9n.cfe\r\n| | | | |-- _j9n.cfs\r\n| | | | |-- _j9n.si\r\n| | | | |-- _jmg.cfe\r\n| | | | |-- _jmg.cfs\r\n| | | | |-- _jmg.si\r\n| | | | |-- _jty.cfe\r\n| | | | |-- _jty.cfs\r\n| | | | |-- _jty.si\r\n| | | | |-- _jwq.cfe\r\n| | | | |-- _jwq.cfs\r\n| | | | |-- _jwq.si\r\n| | | | |-- _k0d.cfe\r\n| | | | |-- _k0d.cfs\r\n| | | | |-- _k0d.si\r\n| | | | |-- _k21.cfe\r\n| | | | |-- _k21.cfs\r\n| | | | |-- _k21.si\r\n| | | | |-- _k2l.cfe\r\n| | | | |-- _k2l.cfs\r\n| | | | |-- _k2l.si\r\n| | | | |-- _k36.cfe\r\n| | | | |-- _k36.cfs\r\n| | | | |-- _k36.si\r\n| | | | |-- _k53.cfe\r\n| | | | |-- _k53.cfs\r\n| | | | |-- _k53.si\r\n| | | | |-- _k7v.cfe\r\n| | | | |-- _k7v.cfs\r\n| | | 
| |-- _k7v.si\r\n| | | | |-- _k8z.cfe\r\n| | | | |-- _k8z.cfs\r\n| | | | |-- _k8z.si\r\n| | | | |-- _k94.cfe\r\n| | | | |-- _k94.cfs\r\n| | | | |-- _k94.si\r\n| | | | |-- _k9a.cfe\r\n| | | | |-- _k9a.cfs\r\n| | | | |-- _k9a.si\r\n| | | | |-- _k9i.cfe\r\n| | | | |-- _k9i.cfs\r\n| | | | |-- _k9i.si\r\n| | | | |-- _k9j.cfe\r\n| | | | |-- _k9j.cfs\r\n| | | | |-- _k9j.si\r\n| | | | |-- _k9k.cfe\r\n| | | | |-- _k9k.cfs\r\n| | | | |-- _k9k.si\r\n| | | | |-- _k9l.cfe\r\n| | | | |-- _k9l.cfs\r\n| | | | |-- _k9l.si\r\n| | | | |-- _k9m.cfe\r\n| | | | |-- _k9m.cfs\r\n| | | | |-- _k9m.si\r\n| | | | |-- segments_gb\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-181.ckp\r\n| | | |-- translog-181.tlog\r\n| | | |-- translog-182.ckp\r\n| | | |-- translog-182.tlog\r\n| | | |-- translog-183.ckp\r\n| | | |-- translog-183.tlog\r\n| | | |-- translog-184.ckp\r\n| | | |-- translog-184.tlog\r\n| | | |-- translog-185.ckp\r\n| | | |-- translog-185.tlog\r\n| | | |-- translog-186.ckp\r\n| | | |-- translog-186.tlog\r\n| | | |-- translog-187.ckp\r\n| | | |-- translog-187.tlog\r\n| | | |-- translog-188.ckp\r\n| | | |-- translog-188.tlog\r\n| | | |-- translog-189.ckp\r\n| | | |-- translog-189.tlog\r\n| | | |-- translog-190.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 2\r\n| | | |-- _state\r\n| | | | `-- state-4.st\r\n| | | |-- index\r\n| | | | |-- _76z.dii\r\n| | | | |-- _76z.dim\r\n| | | | |-- _76z.fdt\r\n| | | | |-- _76z.fdx\r\n| | | | |-- _76z.fnm\r\n| | | | |-- _76z.si\r\n| | | | |-- _76z_Lucene50_0.doc\r\n| | | | |-- _76z_Lucene50_0.pos\r\n| | | | |-- _76z_Lucene50_0.tim\r\n| | | | |-- _76z_Lucene50_0.tip\r\n| | | | |-- _76z_Lucene70_0.dvd\r\n| | | | |-- _76z_Lucene70_0.dvm\r\n| | | | |-- _d43.dii\r\n| | | | |-- _d43.dim\r\n| | | | |-- _d43.fdt\r\n| | | | |-- _d43.fdx\r\n| | | | |-- _d43.fnm\r\n| | | | |-- _d43.si\r\n| | | | |-- _d43_Lucene50_0.doc\r\n| | | | |-- _d43_Lucene50_0.pos\r\n| | | | |-- _d43_Lucene50_0.tim\r\n| | | | |-- _d43_Lucene50_0.tip\r\n| | | | |-- _d43_Lucene70_0.dvd\r\n| | | | |-- _d43_Lucene70_0.dvm\r\n| | | | |-- _fsj.dii\r\n| | | | |-- _fsj.dim\r\n| | | | |-- _fsj.fdt\r\n| | | | |-- _fsj.fdx\r\n| | | | |-- _fsj.fnm\r\n| | | | |-- _fsj.si\r\n| | | | |-- _fsj_Lucene50_0.doc\r\n| | | | |-- _fsj_Lucene50_0.pos\r\n| | | | |-- _fsj_Lucene50_0.tim\r\n| | | | |-- _fsj_Lucene50_0.tip\r\n| | | | |-- _fsj_Lucene70_0.dvd\r\n| | | | |-- _fsj_Lucene70_0.dvm\r\n| | | | |-- _gci.cfe\r\n| | | | |-- _gci.cfs\r\n| | | | |-- _gci.si\r\n| | | | |-- _gub.cfe\r\n| | | | |-- _gub.cfs\r\n| | | | |-- _gub.si\r\n| | | | |-- _hh2.cfe\r\n| | | | |-- _hh2.cfs\r\n| | | | |-- _hh2.si\r\n| | | | |-- _i27.cfe\r\n| | | | |-- _i27.cfs\r\n| | | | |-- _i27.si\r\n| | | | |-- _io4.cfe\r\n| | | | |-- _io4.cfs\r\n| | | | |-- _io4.si\r\n| | | | |-- _it5.cfe\r\n| | | | |-- _it5.cfs\r\n| | | | |-- _it5.si\r\n| | | | |-- _ivw.cfe\r\n| | | | |-- _ivw.cfs\r\n| | | | |-- _ivw.si\r\n| | | | |-- _iw6.cfe\r\n| | | | |-- _iw6.cfs\r\n| | | | |-- _iw6.si\r\n| | | | |-- _izt.cfe\r\n| | | | |-- _izt.cfs\r\n| | | | |-- _izt.si\r\n| | | | |-- _j17.cfe\r\n| | | | |-- _j17.cfs\r\n| | | | |-- _j17.si\r\n| | | | |-- _j2a.cfe\r\n| | | | |-- _j2a.cfs\r\n| | | | |-- _j2a.si\r\n| | | | |-- _j52.cfe\r\n| | | | |-- _j52.cfs\r\n| | | | |-- _j52.si\r\n| | | | |-- _j5n.cfe\r\n| | | | |-- _j5n.cfs\r\n| | | | |-- _j5n.si\r\n| | | | |-- _j66.cfe\r\n| | | | |-- _j66.cfs\r\n| | | | |-- _j66.si\r\n| | | | |-- _j6g.cfe\r\n| | | | |-- _j6g.cfs\r\n| | | | |-- _j6g.si\r\n| | | | |-- _j6r.cfe\r\n| | | | |-- _j6r.cfs\r\n| | | | |-- _j6r.si\r\n| | | | |-- 
_j70.cfe\r\n| | | | |-- _j70.cfs\r\n| | | | |-- _j70.si\r\n| | | | |-- _j7b.cfe\r\n| | | | |-- _j7b.cfs\r\n| | | | |-- _j7b.si\r\n| | | | |-- _j7j.cfe\r\n| | | | |-- _j7j.cfs\r\n| | | | |-- _j7j.si\r\n| | | | |-- _j7u.cfe\r\n| | | | |-- _j7u.cfs\r\n| | | | |-- _j7u.si\r\n| | | | |-- _j7w.cfe\r\n| | | | |-- _j7w.cfs\r\n| | | | |-- _j7w.si\r\n| | | | |-- _j84.cfe\r\n| | | | |-- _j84.cfs\r\n| | | | |-- _j84.si\r\n| | | | |-- _j85.cfe\r\n| | | | |-- _j85.cfs\r\n| | | | |-- _j85.si\r\n| | | | |-- _j86.cfe\r\n| | | | |-- _j86.cfs\r\n| | | | |-- _j86.si\r\n| | | | |-- _j87.cfe\r\n| | | | |-- _j87.cfs\r\n| | | | |-- _j87.si\r\n| | | | |-- _j88.cfe\r\n| | | | |-- _j88.cfs\r\n| | | | |-- _j88.si\r\n| | | | |-- segments_1k\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-181.ckp\r\n| | | |-- translog-181.tlog\r\n| | | |-- translog-182.ckp\r\n| | | |-- translog-182.tlog\r\n| | | |-- translog-183.ckp\r\n| | | |-- translog-183.tlog\r\n| | | |-- translog-184.ckp\r\n| | | |-- translog-184.tlog\r\n| | | |-- translog-185.ckp\r\n| | | |-- translog-185.tlog\r\n| | | |-- translog-186.ckp\r\n| | | |-- translog-186.tlog\r\n| | | |-- translog-187.ckp\r\n| | | |-- translog-187.tlog\r\n| | | |-- translog-188.ckp\r\n| | | |-- translog-188.tlog\r\n| | | |-- translog-189.ckp\r\n| | | |-- translog-189.tlog\r\n| | | |-- translog-190.tlog\r\n| | | `-- translog.ckp\r\n| | |-- 4\r\n| | | |-- _state\r\n| | | | `-- state-6.st\r\n| | | |-- index\r\n| | | | |-- _iop.dii\r\n| | | | |-- _iop.dim\r\n| | | | |-- _iop.fdt\r\n| | | | |-- _iop.fdx\r\n| | | | |-- _iop.fnm\r\n| | | | |-- _iop.si\r\n| | | | |-- _iop_Lucene50_0.doc\r\n| | | | |-- _iop_Lucene50_0.pos\r\n| | | | |-- _iop_Lucene50_0.tim\r\n| | | | |-- _iop_Lucene50_0.tip\r\n| | | | |-- _iop_Lucene70_0.dvd\r\n| | | | |-- _iop_Lucene70_0.dvm\r\n| | | | |-- _ixw.cfe\r\n| | | | |-- _ixw.cfs\r\n| | | | |-- _ixw.si\r\n| | | | |-- _j66.cfe\r\n| | | | |-- _j66.cfs\r\n| | | | |-- _j66.si\r\n| | | | |-- _jja.cfe\r\n| | | | |-- _jja.cfs\r\n| | | | |-- _jja.si\r\n| | | | |-- _jr1.cfe\r\n| | | | |-- _jr1.cfs\r\n| | | | |-- _jr1.si\r\n| | | | |-- _k81.cfe\r\n| | | | |-- _k81.cfs\r\n| | | | |-- _k81.si\r\n| | | | |-- _kq3.cfe\r\n| | | | |-- _kq3.cfs\r\n| | | | |-- _kq3.si\r\n| | | | |-- _kt6.cfe\r\n| | | | |-- _kt6.cfs\r\n| | | | |-- _kt6.si\r\n| | | | |-- _kxm.cfe\r\n| | | | |-- _kxm.cfs\r\n| | | | |-- _kxm.si\r\n| | | | |-- _l2d.cfe\r\n| | | | |-- _l2d.cfs\r\n| | | | |-- _l2d.si\r\n| | | | |-- _l8r.cfe\r\n| | | | |-- _l8r.cfs\r\n| | | | |-- _l8r.si\r\n| | | | |-- _lbj.cfe\r\n| | | | |-- _lbj.cfs\r\n| | | | |-- _lbj.si\r\n| | | | |-- _le1.cfe\r\n| | | | |-- _le1.cfs\r\n| | | | |-- _le1.si\r\n| | | | |-- _lga.cfe\r\n| | | | |-- _lga.cfs\r\n| | | | |-- _lga.si\r\n| | | | |-- _lhx.cfe\r\n| | | | |-- _lhx.cfs\r\n| | | | |-- _lhx.si\r\n| | | | |-- _ljm.cfe\r\n| | | | |-- _ljm.cfs\r\n| | | | |-- _ljm.si\r\n| | | | |-- _ljv.cfe\r\n| | | | |-- _ljv.cfs\r\n| | | | |-- _ljv.si\r\n| | | | |-- _lk6.cfe\r\n| | | | |-- _lk6.cfs\r\n| | | | |-- _lk6.si\r\n| | | | |-- _lkf.cfe\r\n| | | | |-- _lkf.cfs\r\n| | | | |-- _lkf.si\r\n| | | | |-- _lkp.cfe\r\n| | | | |-- _lkp.cfs\r\n| | | | |-- _lkp.si\r\n| | | | |-- _lkq.cfe\r\n| | | | |-- _lkq.cfs\r\n| | | | |-- _lkq.si\r\n| | | | |-- segments_cy\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-172.ckp\r\n| | | |-- translog-172.tlog\r\n| | | |-- translog-173.ckp\r\n| | | |-- translog-173.tlog\r\n| | | |-- translog-174.ckp\r\n| | | |-- translog-174.tlog\r\n| | | |-- translog-175.ckp\r\n| | | |-- translog-175.tlog\r\n| | | 
|-- translog-176.ckp\r\n| | | |-- translog-176.tlog\r\n| | | |-- translog-177.ckp\r\n| | | |-- translog-177.tlog\r\n| | | |-- translog-178.ckp\r\n| | | |-- translog-178.tlog\r\n| | | |-- translog-179.ckp\r\n| | | |-- translog-179.tlog\r\n| | | |-- translog-180.ckp\r\n| | | |-- translog-180.tlog\r\n| | | |-- translog-181.tlog\r\n| | | `-- translog.ckp\r\n| | `-- _state\r\n| | `-- state-133.st\r\n| |-- vyJoAASjT8apt4J-WA2rSg\r\n| | |-- 0\r\n| | | |-- _state\r\n| | | | `-- state-0.st\r\n| | | |-- index\r\n| | | | |-- _1sq.dii\r\n| | | | |-- _1sq.dim\r\n| | | | |-- _1sq.fdt\r\n| | | | |-- _1sq.fdx\r\n| | | | |-- _1sq.fnm\r\n| | | | |-- _1sq.si\r\n| | | | |-- _1sq_Lucene50_0.doc\r\n| | | | |-- _1sq_Lucene50_0.tim\r\n| | | | |-- _1sq_Lucene50_0.tip\r\n| | | | |-- _1sq_Lucene70_0.dvd\r\n| | | | |-- _1sq_Lucene70_0.dvm\r\n| | | | |-- _1sr.cfe\r\n| | | | |-- _1sr.cfs\r\n| | | | |-- _1sr.si\r\n| | | | |-- _1ss.cfe\r\n| | | | |-- _1ss.cfs\r\n| | | | |-- _1ss.si\r\n| | | | |-- _1st.cfe\r\n| | | | |-- _1st.cfs\r\n| | | | |-- _1st.si\r\n| | | | |-- _1su.cfe\r\n| | | | |-- _1su.cfs\r\n| | | | |-- _1su.si\r\n| | | | |-- segments_4\r\n| | | | `-- write.lock\r\n| | | `-- translog\r\n| | | |-- translog-2.tlog\r\n| | | `-- translog.ckp\r\n| | `-- _state\r\n| | `-- state-2.st\r\n| `-- y4kO2RguRieheSUA2yUTgw\r\n| |-- 1\r\n| | |-- _state\r\n| | | `-- state-68.st\r\n| | |-- index\r\n| | | |-- _15j.dii\r\n| | | |-- _15j.dim\r\n| | | |-- _15j.fdt\r\n| | | |-- _15j.fdx\r\n| | | |-- _15j.fnm\r\n| | | |-- _15j.si\r\n| | | |-- _15j_Lucene50_0.doc\r\n| | | |-- _15j_Lucene50_0.pos\r\n| | | |-- _15j_Lucene50_0.tim\r\n| | | |-- _15j_Lucene50_0.tip\r\n| | | |-- _15j_Lucene70_0.dvd\r\n| | | |-- _15j_Lucene70_0.dvm\r\n| | | |-- _26r.cfe\r\n| | | |-- _26r.cfs\r\n| | | |-- _26r.si\r\n| | | |-- _2ta.cfe\r\n| | | |-- _2ta.cfs\r\n| | | |-- _2ta.si\r\n| | | |-- _2u4.cfe\r\n| | | |-- _2u4.cfs\r\n| | | |-- _2u4.si\r\n| | | |-- _2u5.cfe\r\n| | | |-- _2u5.cfs\r\n| | | |-- _2u5.si\r\n| | | |-- _2u6.cfe\r\n| | | |-- _2u6.cfs\r\n| | | |-- _2u6.si\r\n| | | |-- _2u7.cfe\r\n| | | |-- _2u7.cfs\r\n| | | |-- _2u7.si\r\n| | | |-- _2u8.cfe\r\n| | | |-- _2u8.cfs\r\n| | | |-- _2u8.si\r\n| | | |-- _2u9.cfe\r\n| | | |-- _2u9.cfs\r\n| | | |-- _2u9.si\r\n| | | |-- _2ua.cfe\r\n| | | |-- _2ua.cfs\r\n| | | |-- _2ua.si\r\n| | | |-- _h2.dii\r\n| | | |-- _h2.dim\r\n| | | |-- _h2.fdt\r\n| | | |-- _h2.fdx\r\n| | | |-- _h2.fnm\r\n| | | |-- _h2.si\r\n| | | |-- _h2_Lucene50_0.doc\r\n| | | |-- _h2_Lucene50_0.pos\r\n| | | |-- _h2_Lucene50_0.tim\r\n| | | |-- _h2_Lucene50_0.tip\r\n| | | |-- _h2_Lucene70_0.dvd\r\n| | | |-- _h2_Lucene70_0.dvm\r\n| | | |-- _s7.dii\r\n| | | |-- _s7.dim\r\n| | | |-- _s7.fdt\r\n| | | |-- _s7.fdx\r\n| | | |-- _s7.fnm\r\n| | | |-- _s7.si\r\n| | | |-- _s7_Lucene50_0.doc\r\n| | | |-- _s7_Lucene50_0.pos\r\n| | | |-- _s7_Lucene50_0.tim\r\n| | | |-- _s7_Lucene50_0.tip\r\n| | | |-- _s7_Lucene70_0.dvd\r\n| | | |-- _s7_Lucene70_0.dvm\r\n| | | |-- _uf.dii\r\n| | | |-- _uf.dim\r\n| | | |-- _uf.fdt\r\n| | | |-- _uf.fdx\r\n| | | |-- _uf.fnm\r\n| | | |-- _uf.si\r\n| | | |-- _uf_Lucene50_0.doc\r\n| | | |-- _uf_Lucene50_0.pos\r\n| | | |-- _uf_Lucene50_0.tim\r\n| | | |-- _uf_Lucene50_0.tip\r\n| | | |-- _uf_Lucene70_0.dvd\r\n| | | |-- _uf_Lucene70_0.dvm\r\n| | | |-- segments_6\r\n| | | `-- write.lock\r\n| | `-- translog\r\n| | |-- translog-1.tlog\r\n| | `-- translog.ckp\r\n| |-- 2\r\n| | |-- _state\r\n| | | `-- state-66.st\r\n| | |-- index\r\n| | | |-- _1kt.dii\r\n| | | |-- _1kt.dim\r\n| | | |-- _1kt.fdt\r\n| | | |-- _1kt.fdx\r\n| | | |-- _1kt.fnm\r\n| | | 
|-- _1kt.si\r\n| | | |-- _1kt_Lucene50_0.doc\r\n| | | |-- _1kt_Lucene50_0.pos\r\n| | | |-- _1kt_Lucene50_0.tim\r\n| | | |-- _1kt_Lucene50_0.tip\r\n| | | |-- _1kt_Lucene70_0.dvd\r\n| | | |-- _1kt_Lucene70_0.dvm\r\n| | | |-- _2w2.cfe\r\n| | | |-- _2w2.cfs\r\n| | | |-- _2w2.si\r\n| | | |-- _2wc.cfe\r\n| | | |-- _2wc.cfs\r\n| | | |-- _2wc.si\r\n| | | |-- _2wm.cfe\r\n| | | |-- _2wm.cfs\r\n| | | |-- _2wm.si\r\n| | | |-- _2ww.cfe\r\n| | | |-- _2ww.cfs\r\n| | | |-- _2ww.si\r\n| | | |-- _2x6.cfe\r\n| | | |-- _2x6.cfs\r\n| | | |-- _2x6.si\r\n| | | |-- _2x7.cfe\r\n| | | |-- _2x7.cfs\r\n| | | |-- _2x7.si\r\n| | | |-- _2x8.cfe\r\n| | | |-- _2x8.cfs\r\n| | | |-- _2x8.si\r\n| | | |-- _2x9.cfe\r\n| | | |-- _2x9.cfs\r\n| | | |-- _2x9.si\r\n| | | |-- _2xa.cfe\r\n| | | |-- _2xa.cfs\r\n| | | |-- _2xa.si\r\n| | | |-- _2xb.cfe\r\n| | | |-- _2xb.cfs\r\n| | | |-- _2xb.si\r\n| | | |-- _2xc.cfe\r\n| | | |-- _2xc.cfs\r\n| | | |-- _2xc.si\r\n| | | |-- _2xd.cfe\r\n| | | |-- _2xd.cfs\r\n| | | |-- _2xd.si\r\n| | | |-- _2xe.cfe\r\n| | | |-- _2xe.cfs\r\n| | | |-- _2xe.si\r\n| | | |-- _h1.dii\r\n| | | |-- _h1.dim\r\n| | | |-- _h1.fdt\r\n| | | |-- _h1.fdx\r\n| | | |-- _h1.fnm\r\n| | | |-- _h1.si\r\n| | | |-- _h1_Lucene50_0.doc\r\n| | | |-- _h1_Lucene50_0.pos\r\n| | | |-- _h1_Lucene50_0.tim\r\n| | | |-- _h1_Lucene50_0.tip\r\n| | | |-- _h1_Lucene70_0.dvd\r\n| | | |-- _h1_Lucene70_0.dvm\r\n| | | |-- segments_a\r\n| | | `-- write.lock\r\n| | `-- translog\r\n| | |-- translog-1.tlog\r\n| | `-- translog.ckp\r\n| |-- 3\r\n| | |-- _state\r\n| | | `-- state-68.st\r\n| | |-- index\r\n| | | |-- _2iz.cfe\r\n| | | |-- _2iz.cfs\r\n| | | |-- _2iz.si\r\n| | | |-- _2rm.dii\r\n| | | |-- _2rm.dim\r\n| | | |-- _2rm.fdt\r\n| | | |-- _2rm.fdx\r\n| | | |-- _2rm.fnm\r\n| | | |-- _2rm.si\r\n| | | |-- _2rm_Lucene50_0.doc\r\n| | | |-- _2rm_Lucene50_0.pos\r\n| | | |-- _2rm_Lucene50_0.tim\r\n| | | |-- _2rm_Lucene50_0.tip\r\n| | | |-- _2rm_Lucene70_0.dvd\r\n| | | |-- _2rm_Lucene70_0.dvm\r\n| | | |-- _2ta.cfe\r\n| | | |-- _2ta.cfs\r\n| | | |-- _2ta.si\r\n| | | |-- _2tk.cfe\r\n| | | |-- _2tk.cfs\r\n| | | |-- _2tk.si\r\n| | | |-- _2tu.cfe\r\n| | | |-- _2tu.cfs\r\n| | | |-- _2tu.si\r\n| | | |-- _2u4.cfe\r\n| | | |-- _2u4.cfs\r\n| | | |-- _2u4.si\r\n| | | |-- _2u5.cfe\r\n| | | |-- _2u5.cfs\r\n| | | |-- _2u5.si\r\n| | | |-- _2u6.cfe\r\n| | | |-- _2u6.cfs\r\n| | | |-- _2u6.si\r\n| | | |-- _2u7.cfe\r\n| | | |-- _2u7.cfs\r\n| | | |-- _2u7.si\r\n| | | |-- _2u8.cfe\r\n| | | |-- _2u8.cfs\r\n| | | |-- _2u8.si\r\n| | | |-- _2u9.cfe\r\n| | | |-- _2u9.cfs\r\n| | | |-- _2u9.si\r\n| | | |-- _2ua.cfe\r\n| | | |-- _2ua.cfs\r\n| | | |-- _2ua.si\r\n| | | |-- _2ub.cfe\r\n| | | |-- _2ub.cfs\r\n| | | |-- _2ub.si\r\n| | | |-- _2uc.cfe\r\n| | | |-- _2uc.cfs\r\n| | | |-- _2uc.si\r\n| | | |-- _u5.dii\r\n| | | |-- _u5.dim\r\n| | | |-- _u5.fdt\r\n| | | |-- _u5.fdx\r\n| | | |-- _u5.fnm\r\n| | | |-- _u5.si\r\n| | | |-- _u5_Lucene50_0.doc\r\n| | | |-- _u5_Lucene50_0.pos\r\n| | | |-- _u5_Lucene50_0.tim\r\n| | | |-- _u5_Lucene50_0.tip\r\n| | | |-- _u5_Lucene70_0.dvd\r\n| | | |-- _u5_Lucene70_0.dvm\r\n| | | |-- segments_6\r\n| | | `-- write.lock\r\n| | `-- translog\r\n| | |-- translog-1.tlog\r\n| | `-- translog.ckp\r\n| |-- 4\r\n| | |-- _state\r\n| | | `-- state-69.st\r\n| | |-- index\r\n| | | |-- _2cc.cfe\r\n| | | |-- _2cc.cfs\r\n| | | |-- _2cc.si\r\n| | | |-- _2hc.cfe\r\n| | | |-- _2hc.cfs\r\n| | | |-- _2hc.si\r\n| | | |-- _2tu.cfe\r\n| | | |-- _2tu.cfs\r\n| | | |-- _2tu.si\r\n| | | |-- _2uo.cfe\r\n| | | |-- _2uo.cfs\r\n| | | |-- _2uo.si\r\n| | | |-- _2up.cfe\r\n| | | |-- _2up.cfs\r\n| | 
| |-- _2up.si\r\n| | | |-- _2uq.cfe\r\n| | | |-- _2uq.cfs\r\n| | | |-- _2uq.si\r\n| | | |-- _gh.dii\r\n| | | |-- _gh.dim\r\n| | | |-- _gh.fdt\r\n| | | |-- _gh.fdx\r\n| | | |-- _gh.fnm\r\n| | | |-- _gh.si\r\n| | | |-- _gh_Lucene50_0.doc\r\n| | | |-- _gh_Lucene50_0.pos\r\n| | | |-- _gh_Lucene50_0.tim\r\n| | | |-- _gh_Lucene50_0.tip\r\n| | | |-- _gh_Lucene70_0.dvd\r\n| | | |-- _gh_Lucene70_0.dvm\r\n| | | |-- _s7.dii\r\n| | | |-- _s7.dim\r\n| | | |-- _s7.fdt\r\n| | | |-- _s7.fdx\r\n| | | |-- _s7.fnm\r\n| | | |-- _s7.si\r\n| | | |-- _s7_Lucene50_0.doc\r\n| | | |-- _s7_Lucene50_0.pos\r\n| | | |-- _s7_Lucene50_0.tim\r\n| | | |-- _s7_Lucene50_0.tip\r\n| | | |-- _s7_Lucene70_0.dvd\r\n| | | |-- _s7_Lucene70_0.dvm\r\n| | | |-- _up.dii\r\n| | | |-- _up.dim\r\n| | | |-- _up.fdt\r\n| | | |-- _up.fdx\r\n| | | |-- _up.fnm\r\n| | | |-- _up.si\r\n| | | |-- _up_Lucene50_0.doc\r\n| | | |-- _up_Lucene50_0.pos\r\n| | | |-- _up_Lucene50_0.tim\r\n| | | |-- _up_Lucene50_0.tip\r\n| | | |-- _up_Lucene70_0.dvd\r\n| | | |-- _up_Lucene70_0.dvm\r\n| | | |-- segments_6\r\n| | | `-- write.lock\r\n| | `-- translog\r\n| | |-- translog-1.ckp\r\n| | |-- translog-1.tlog\r\n| | |-- translog-2.tlog\r\n| | `-- translog.ckp\r\n| `-- _state\r\n| `-- state-398.st\r\n`-- node.lock\r\n\r\n196 directories, 3406 files\r\n```\r\n</pre>\r\n</details>", "created_at": "2017-09-07T14:30:22Z" }, { "body": "But yes, as I mentioned earlier, the cluster is healed and running... But restarting the containers triggers the issue mentioned in the first post again but the cluster self heals after say 3-5mins. I just keep calling `GET /_cluster/health` until it reaches 100% active shards with `green` status", "created_at": "2017-09-07T14:32:16Z" }, { "body": "Thanks, I put the output of `/_cat/indices` in `indices-in-cluster-state` and the output of `tree` in `indices-on-disk`. Then:\r\n\r\n```bash\r\n10:41:46 [jason:/tmp] $ comm <(cat indices-in-cluster-state | awk '{ print $4 }' | sort) <(grep --color=never \"^| |--\" indices-on-disk | perl -p -e 's/\\| \\|-- //g' | grep -v global | sort)\r\n\t\t6wASTQgvRtypYGI0JtSOYw\r\n\t\t819wJr7bT8GeP2Ezqmq4qQ\r\n\t\tBPCBp1G-ScmQIgMPjs6sXg\r\n\t\tF0Ycp-PESFWfV7J5p67c3g\r\n\tGCliM7kuQNyJPL2w8060nA\r\n\t\tGuTek3p2RjmR8jGaTe6Hrw\r\n\t\tLHo5QAgwSImYT-_WioKGkA\r\n\t\tR56iVv4FQT-ZmwGGGR8PyA\r\n\t\tYACFcRiJQLOvRHS5-EB2yg\r\n\teG48bcLuSv6V62EOz8dGYw\r\n\ti8wudibrTVWZOJTgqgMUgg\r\n\tn9XjMd4cSzGq5BAU1M8UMQ\r\n\t\tnjw3jiVnTNCT2sVS5BwrJQ\r\n\t\tvyJoAASjT8apt4J-WA2rSg\r\n```\r\n\r\nThis shows that you have four indices on disk that are not in your cluster state; presumably these are existing `.kibana` indices that can not be imported due to `GuTek3p2RjmR8jGaTe6Hrw` already existing. You can verify that these are `.kibana` indices by dumping the state file for them:\r\n\r\n```bash\r\nfor f in $(echo GCliM7kuQNyJPL2w8060nA eG48bcLuSv6V62EOz8dGYw i8wudibrTVWZOJTgqgMUgg n9XjMd4cSzGq5BAU1M8UMQ); do echo $f && cat /path/to/your/data/folder/nodes/0/indices/$f/_state/state-*.st | xxd; done\r\n```\r\n\r\nThe index name is four bytes after the smile byte marker (`:)`). I suspect that these will read `.kibana` for you. Can you verify that you do *not* have index tombstones for these indices via `GET /_cluster/state?filter_path=**.tombstones&pretty=true`?", "created_at": "2017-09-07T15:00:31Z" }, { "body": "Hey @jasontedor ,\r\n\r\nThank you for the response!\r\n\r\nBelow is the output of `GET /_cluster/state?filter_path=**.tombstones&pretty=true`. 
It's showing an index `logstash-2017.09.05` that I have manually deleted in an attempt to fix the bug.\r\n```\r\n{\r\n \"metadata\": {\r\n \"index-graveyard\": {\r\n \"tombstones\": [\r\n {\r\n \"index\": {\r\n \"index_name\": \"logstash-2017.09.05\",\r\n \"index_uuid\": \"JLVg0SJVS0SzAKKrV1ZHBw\"\r\n },\r\n \"delete_date_in_millis\": 1504730426591\r\n },\r\n {\r\n \"index\": {\r\n \"index_name\": \"logstash-2017.09.05\",\r\n \"index_uuid\": \"Dyp0Imk3RGekE_brCaQFdQ\"\r\n },\r\n \"delete_date_in_millis\": 1504730446529\r\n }\r\n ]\r\n }\r\n }\r\n}\r\n```", "created_at": "2017-09-07T15:53:12Z" }, { "body": "Were you able to verify that the ones I identified are indeed dangling `.kibana` indices?", "created_at": "2017-09-07T15:54:18Z" }, { "body": "I'm trying to run the command... but I don't have `xxd`?", "created_at": "2017-09-07T15:57:02Z" }, { "body": "I posit that you're on a Linux system then? Try replacing `xxd` by `hexdump -C`.", "created_at": "2017-09-07T15:58:42Z" }, { "body": "After `yum install vim-common`, I could run `xxd` and get the following.\r\n\r\n```\r\n[root@0ba9950003f1 nodes]# for f in $(echo GCliM7kuQNyJPL2w8060nA eG48bcLuSv6V62EOz8dGYw i8wudibrTVWZOJTgqgMUgg n9XjMd4cSzGq5BAU1M8UMQ); do echo $f && cat ./0/indices/$f/_state/state-*.st | xxd; done\r\nGCliM7kuQNyJPL2w8060nA\r\n0000000: 3fd7 6c17 0573 7461 7465 0000 0001 0000 ?.l..state......\r\n0000010: 0001 3a29 0a05 fa92 6c6f 6773 7461 7368 ..:)....logstash\r\n0000020: 2d32 3031 372e 3039 2e30 33fa 8676 6572 -2017.09.03..ver\r\n0000030: 7369 6f6e 2407 a691 726f 7574 696e 675f sion$...routing_\r\n0000040: 6e75 6d5f 7368 6172 6473 ca84 7374 6174 num_shards..stat\r\n0000050: 6543 6f70 656e 8773 6574 7469 6e67 73fa eCopen.settings.\r\n0000060: 9269 6e64 6578 2e63 7265 6174 696f 6e5f .index.creation_\r\n0000070: 6461 7465 4c31 3530 3433 3936 3830 3133 dateL15043968013\r\n0000080: 3432 9769 6e64 6578 2e6e 756d 6265 725f 42.index.number_\r\n0000090: 6f66 5f72 6570 6c69 6361 7340 3195 696e of_replicas@1.in\r\n00000a0: 6465 782e 6e75 6d62 6572 5f6f 665f 7368 dex.number_of_sh\r\n00000b0: 6172 6473 4035 9269 6e64 6578 2e70 726f ards@5.index.pro\r\n00000c0: 7669 6465 645f 6e61 6d65 526c 6f67 7374 vided_nameRlogst\r\n00000d0: 6173 682d 3230 3137 2e30 392e 3033 9569 ash-2017.09.03.i\r\n00000e0: 6e64 6578 2e72 6566 7265 7368 5f69 6e74 ndex.refresh_int\r\n00000f0: 6572 7661 6c41 3573 8969 6e64 6578 2e75 ervalA5s.index.u\r\n0000100: 7569 6455 4743 6c69 4d37 6b75 514e 794a uidUGCliM7kuQNyJ\r\n0000110: 504c 3277 3830 3630 6e41 9469 6e64 6578 PL2w8060nA.index\r\n0000120: 2e76 6572 7369 6f6e 2e63 7265 6174 6564 .version.created\r\n0000130: 4636 3030 3030 3237 fb87 6d61 7070 696e F6000027..mappin\r\n0000140: 6773 f8fd 06ab 4446 4c00 bc55 4d6f c320 gs....DFL..UMo.\r\n0000150: 0cfd 2f1c a79c 2a6d 879e fa3f a609 d1d4 ../...*m...?....\r\n0000160: 21a8 7c0d 48ba 28ea 7f9f 2124 4bd6 a9da !.|.H.(...!$K...\r\n0000170: a4b2 dcc0 8e9f 9ffd b047 220d f764 3f92 .........G\"..d?.\r\n0000180: d3a0 9912 350d a0ac 6401 f0f2 7524 0abc ....5...d...u$..\r\n0000190: 671c 6823 409e a29b 65a1 a58a 85ba 25fb g.h#@...e.....%.\r\n00001a0: d94a 2a92 6ef0 de5a a139 0d83 0534 fbe0 .J*.n..Z.9...4..\r\n00001b0: f094 ace9 3efe af8d 5318 ba61 d243 45b2 ....>...S..a.CE.\r\n00001c0: 6380 8f40 aed7 6b35 e67f 26bc 94d7 8cf5 c..@..k5..&.....\r\n00001d0: f417 94af dfcf 305c 8c4b a90b 8ee0 40d9 ......0\\.K....@.\r\n00001e0: d1f4 98dd eef9 65c1 9f9d 3083 fb09 be55 ......e...0....U\r\n00001f0: c43a 63c1 0511 0b34 9243 1058 a2c0 948d .:c....4.C.X....\r\n0000200: a74c e784 e523 18ea 
d083 f3c2 e895 6941 .L...#........iA\r\n0000210: aac8 1158 4815 dd04 6c8d 0fd8 084c 7009 ...XH...l....Lp.\r\n0000220: 97aa f33d b11f 1966 fc19 a4ba a51c 8b4c ...=...f.......L\r\n0000230: 4ac7 bfa5 fd60 0a91 446d b4ce 422b 5729 J....`..Dm..B+W)\r\n0000240: 0449 a917 6c46 edd8 4582 a3f9 a595 84e2 .I..lF..E.......\r\n0000250: 6044 d269 7eeb f844 83eb 00b5 b5d5 e0e4 `D.i~..D........\r\n0000260: 94c5 8407 ac36 4e04 11ba d35a 962d 930d .....6N....Z.-..\r\n0000270: 6da4 4115 4707 53a3 cb46 eb08 47ad 113a m.A.G.S..F..G..:\r\n0000280: db35 bf17 21b6 346a bf24 7f1c 7512 7a90 .5..!.4j.$..u.z.\r\n0000290: 2531 7ed5 46e4 aa8c e6e6 3f9a be01 da95 %1~.F.....?.....\r\n00002a0: a46e 9ac6 c3ba 8112 3946 6dbc 77e0 8638 .n......9Fm.w..8\r\n00002b0: 2869 1cfd 8bb2 66b3 379d ab8b 0ebc c0a6 (i....f.7.......\r\n00002c0: 1db7 5e37 8f1c a8f7 9640 afa8 48eb a714 ..^7.....@..H...\r\n00002d0: 3802 141f e88a 3ae0 dbd7 fde0 915e 115c 8.....:......^.\\\r\n00002e0: d367 9c83 9b62 4d12 c1ef 1300 00ff ff03 .g...bM.........\r\n00002f0: 00fd 0483 4446 4c00 9451 cb4e c340 0cfc ....DFL..Q.N.@..\r\n0000300: 171f 514e 4870 e8a9 ff81 d0ca 244e b262 ..QNHp......$N.b\r\n0000310: 5fda 750a 5194 7f67 926e 03bd 54e2 68cf _.u.Q..g.n..T.h.\r\n0000320: d833 1e2f 643a e979 726a e8b4 5037 07f6 .3./d:.yrj..P7..\r\n0000330: b635 2a3e 3956 2974 7a5b c84b 293c 88e9 .5*>9V)tz[.K)<..\r\n0000340: adb8 6ea3 25d6 d178 d676 a4d3 0da5 86f6 ..n.%..x.v......\r\n0000350: 0efa 29d9 3018 9d93 002e 9a51 ede8 dedf ..).0......Q....\r\n0000360: e643 cc1e ab7b 7645 1aaa 4495 6fa5 755d .C...{vE..D.o.u]\r\n0000370: 9ba5 ce5c f5c0 8383 aaf5 f41f 95df f14f ...\\...........O\r\n0000380: 99bf 62de addb 01e2 62f8 235e e0ee f9e5 ..b.....b.#^....\r\n0000390: f5d0 bf91 e0e0 b1c1 f786 528e 49b2 da2d ..........R.I..-\r\n00003a0: a085 ce6a 1191 b24f 5b55 cfe9 101f 61d5 ...j...O[U....a.\r\n00003b0: f922 b9d8 18fe 4087 5243 8344 bb4f d5e4 .\"....@.RC.D.O..\r\n00003c0: 1198 e649 70e6 bdc4 9554 57a3 c062 fcc7 ...Ip....TW..b..\r\n00003d0: ead4 e18a 0318 d9f5 a677 9111 2408 b105 .........w..$...\r\n00003e0: e54e 1972 2645 1b2a 1e86 471b f08b 75fd .N.r&E.*..G...u.\r\n00003f0: 0100 00ff ff03 00f9 8661 6c69 6173 6573 .........aliases\r\n0000400: fafb 8c70 7269 6d61 7279 5f74 6572 6d73 ...primary_terms\r\n0000410: f824 0280 2402 8224 0280 c6c6 f992 696e .$..$..$......in\r\n0000420: 5f73 796e 635f 616c 6c6f 6361 7469 6f6e _sync_allocation\r\n0000430: 73fa 8031 f855 4263 4c54 5a34 3067 5345 s..1.UBcLTZ40gSE\r\n0000440: 5746 5448 454a 7539 6a77 7641 554b 4c4a WFTHEJu9jwvAUKLJ\r\n0000450: 7150 484f 4c51 5175 4534 5a6e 5f79 4372 qPHOLQQuE4Zn_yCr\r\n0000460: 3537 67f9 8034 f855 4252 6a78 6e62 4f35 57g..4.UBRjxnbO5\r\n0000470: 5471 4344 6361 3279 302d 3656 4d67 5554 TqCDca2y0-6VMgUT\r\n0000480: 3363 386b 6135 3552 6b71 5565 7530 5a5a 3c8ka55RkqUeu0ZZ\r\n0000490: 5567 4c39 41f9 8032 f855 786f 6c46 3156 UgL9A..2.UxolF1V\r\n00004a0: 4a73 5232 2d32 682d 3272 7646 5261 3951 JsR2-2h-2rvFRa9Q\r\n00004b0: 552d 7265 646e 514f 3352 7469 5150 3058 U-rednQO3RtiQP0X\r\n00004c0: 484e 7a33 306a 41f9 8033 f855 4d49 6f6d HNz30jA..3.UMIom\r\n00004d0: 644a 5434 5241 3654 5150 526f 2d52 2d4e dJT4RA6TQPRo-R-N\r\n00004e0: 6541 5534 6551 574e 7242 3754 3343 4548 eAU4eQWNrB7T3CEH\r\n00004f0: 3973 5736 6464 4a33 77f9 8030 f855 3358 9sW6ddJ3w..0.U3X\r\n0000500: 4c61 4961 4744 526a 752d 5459 6e38 4175 LaIaGDRju-TYn8Au\r\n0000510: 315a 6b67 5549 6371 7178 7068 6353 5879 1ZkgUIcqqxphcSXy\r\n0000520: 6e68 4c45 3842 5775 4858 51f9 fbfb fbc0 nhLE8BWuHXQ.....\r\n0000530: 2893 e800 0000 0000 0000 0035 edb2 92 
(..........5...\r\neG48bcLuSv6V62EOz8dGYw\r\ncat: ./0/indices/eG48bcLuSv6V62EOz8dGYw/_state/state-*.st: No such file or directory\r\ni8wudibrTVWZOJTgqgMUgg\r\ncat: ./0/indices/i8wudibrTVWZOJTgqgMUgg/_state/state-*.st: No such file or directory\r\nn9XjMd4cSzGq5BAU1M8UMQ\r\ncat: ./0/indices/n9XjMd4cSzGq5BAU1M8UMQ/_state/state-*.st: No such file or directory\r\n```", "created_at": "2017-09-07T16:01:04Z" }, { "body": "That does not make sense, your `tree` output shows `eG48bcLuSv6V62EOz8dGYw /_state/state-7.st`, `i8wudibrTVWZOJTgqgMUgg/_state/state-7.st`, and `n9XjMd4cSzGq5BAU1M8UMQ/_state/state-6.st` existing on disk. Have you since manipulated the data directory?", "created_at": "2017-09-07T16:07:20Z" }, { "body": "I never touched the data directory myself", "created_at": "2017-09-07T18:16:20Z" }, { "body": "Does it matter that I'm running on a 3 node cluster and the output I sent you was from one of the nodes, and not from the other two?", "created_at": "2017-09-07T18:18:05Z" } ], "number": 17435, "title": "Verbose dangling indices logging" }
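The `comm` comparison shown in the thread above can also be run directly against the node's data directory. Below is a minimal sketch, assuming the default data layout (`<data>/nodes/0/indices/<uuid>`, one directory per index UUID) and that the cluster-state UUIDs have been saved to a file beforehand (for example via `curl -s localhost:9200/_cat/indices | awk '{ print $4 }'`); the class name and its arguments are hypothetical helpers, not anything shipped with Elasticsearch.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Stream;

// Hypothetical helper: prints index UUIDs that exist under the node's indices
// directory but are absent from the cluster state (dangling-index candidates).
public final class DanglingUuidCheck {
    public static void main(String[] args) throws IOException {
        // args[0]: e.g. /var/lib/elasticsearch/nodes/0/indices (adjust to your data path)
        // args[1]: file with one UUID per line, e.g. the output of
        //          curl -s localhost:9200/_cat/indices | awk '{ print $4 }'
        Path indicesDir = Paths.get(args[0]);
        Set<String> inClusterState = new HashSet<>(Files.readAllLines(Paths.get(args[1])));

        try (Stream<Path> dirs = Files.list(indicesDir)) {
            dirs.filter(Files::isDirectory)                       // one directory per index UUID
                .map(p -> p.getFileName().toString())
                .filter(uuid -> inClusterState.contains(uuid) == false)
                .sorted()
                .forEach(System.out::println);                    // on disk but not in the cluster state
        }
    }
}
```

Any UUIDs it prints are the same on-disk-but-not-in-cluster-state candidates that the discussion above identifies by hand with `tree` and `comm`.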
{ "body": "Previously, we would determine index deletes in the cluster state by\ncomparing the index metadatas between the current cluster state and the\nprevious cluster state and decipher which ones were missing (the missing\nones are deleted indices). This led to a situation where a node that went \noffline and rejoined the cluster could potentially cause dangling indices to\nbe imported which should have been deleted, because when a node rejoins,\nits previous cluster state does not contain reliable state.\n\nThis commit introduces the notion of index tombstones in the cluster\nstate, where we are explicit about which indices have been deleted.\nIn the case where the previous cluster state is not useful for index metadata\ncomparisons, a node now determines which indices are to be deleted based\non these tombstones in the cluster state. There is also functionality to \npurge the tombstones after exceeding a certain amount. \n\nCloses #16358\nCloses #17435 \n", "number": 17265, "review_comments": [ { "body": "I think we should compare the previous tombstone and the current one and generate a delta. Don't try to be overly smart.\n", "created_at": "2016-03-23T08:47:58Z" }, { "body": "YAY\n", "created_at": "2016-03-23T13:53:47Z" }, { "body": "can we use `_` case instead of camelCase?\n", "created_at": "2016-03-23T13:54:21Z" }, { "body": "can this just be `Index` instead of uuid and name?\n", "created_at": "2016-03-23T13:55:14Z" }, { "body": "the builder can be private no? I mean others can just use a ctor? and then we also don't need getters but only setters? and they don't need to return `this` -- you can tell how much I like builders \n", "created_at": "2016-03-23T13:57:05Z" }, { "body": "these should all be real checks in the `ew IndexTombstone(indexUUID, indexName, deleteDate, clusterVersion);` ctor\n", "created_at": "2016-03-23T13:57:36Z" }, { "body": "do they need to be keyed by the index name? I think it can just be a List of tombstones?\n", "created_at": "2016-03-23T13:58:44Z" }, { "body": "I think we should just have one way of passing this? all the syntactic sugar is unneedd?\n", "created_at": "2016-03-23T14:00:39Z" }, { "body": "The issue is, if we use the builder to create an `IndexTombstone`, then it needs to be accessible to the `MetaDataDeleteIndexService` where it is created.\n", "created_at": "2016-03-23T14:25:58Z" }, { "body": "@bleskes I had to change the logic here, because in the case of a node restarting, its previous state will not contain the index metadata for an index that was deleted while it was offline, so if the index metadata for the deleted index (part of the tombstones in the cluster state) is not in the previous cluster state, I try to read it off disk. I had to introduce the `MetaStateService` as a dependency in this class in order to do that. If the index metadata could not also be read off of disk, that means the index was both created and deleted while the node was offline, so there is nothing to do. \n\nI am unsure if this is the best approach, so would appreciate your feedback.\n", "created_at": "2016-03-28T16:17:10Z" }, { "body": "@bleskes @s1monw I removed this test as it was failing for this PR, but I wanted to validate the logic with you before making a decision on how to proceed. In the new scenario of explicit index tombstones in the cluster state, I do not see why a dangling index should be imported in this case (hence, the reason I deleted the test for the time being). 
In this test, first an index named `test` is created and a document is indexed into it. This index will have some UUID, lets call it `X`. Then the non master node goes offline, and `test` with UUID `X` is deleted. Then, an index named `test2` is created with a UUID, lets call it `Y`, and this new index is assigned an alias of `test`. Then, the non master node is restarted. At this point, when it is restarted, it receives cluster state from master with a tombstone for index `test` of UUID `X`. So the non master node sees that it has this index metadata on disk and deletes it. Therefore, there is nothing to import at the dangling index stage, even when the alias for `test2` is dropped. I'm not sure why we would want that index to be imported as `test` in the first place, as it was explicitly deleted and only `test2` should exist.\n\nThat was my logic when thinking that this test is no longer valid, but I could well be wrong and am not sure, so would appreciate any validation either way.\n", "created_at": "2016-03-28T16:26:53Z" }, { "body": "The test seems to be there to make sure dangling indices are not imported if you have an alias with the same name as the index, but _are_ imported once the alias is removed. I think that's a valid test. The problem is in the way the test was creating a dangling index by delete one when a node is offline. This doesn't work anymore indeed (hurrah!) . We need to fix the test to work harder (change uuids on the index before importing?)\n", "created_at": "2016-03-28T20:07:08Z" }, { "body": "@bleskes The `IndicesClusterStateService#clusterChanged` (which is where deleted indices are applied) gets called before `GatewayMetaState#clusterChanged` (which is where process dangling indices happens). Now that we have the tombstones specifying an exact name/uuid index to delete, those tombstones will get processed and indices on disk will get deleted before any attempts at dangling indices. Its not possible for a dangling index with the same name as an alias to be laying around at the import dangling index phase unless the tombstones got wiped out from the cluster state, because that same index would have had to be deleted before assigning an alias to another index with the same name. So the only way it seems possible to have a dangling index that conflicts with an alias name is the following scenario:\n1. Master `M` and data node `D` are in the cluster.\n2. An index named `test` is created.\n3. `D` goes offline.\n4. `M` deletes index `test`.\n5. `M` goes offline and has its data directory wiped out.\n6. `M` restarts with no indices, no tombstones, and a brand new cluster UUID.\n7. `M` creates index `test2` with alias `test`.\n8. `D` starts back up, at which point it receives cluster state from master indicating there are no index tombstones (so nothing to delete), and when it tries to import the dangling indices, there is a naming conflict with the alias and it should fail. However, if we drop the alias, then we should be able to import the dangling index.\n\nSo perhaps thats the scenario I should test for?\n\nThis scenario also made me realize that we still have the same issue described in #16358 because if the CS gets wiped out and master no longer has the index tombstone for a shadow replica index, then the node that went offline will still try to import the dangling shadow replica index when it restarts, and encounter the same exception in trying to access the deleted shard data.\n", "created_at": "2016-03-28T21:51:18Z" }, { "body": "Yeah, I see what you mean. 
If the node is restarted while the index was deleted it may have no previous state. I think what you do is the right thing, but we can structure it in a slightly cleaner way:\n\n```\nif (idxService != null) {\n // delete in memory index\n deleteIndex(index, \"index no longer part of the metadata\");\n\n // ackNodeIndexDeleted\n} else if (previousState.metaData().hasIndex(index)) { < --- needs a variant the checks a uuid\n // deleted index which wasn't assigned to local node (the closed index is very misleading below) \n indicesService.deleteClosedIndex(\"closed index no longer part of the metadata\", metaData, event.state());\n // ack the index deletion \n} else if (indicesService.canDeleteIndex(Index)) { <-- which should also checks for the folder existence like \n // load metadata from file and delete it\n}\n\n```\n\nwdyt?\n", "created_at": "2016-03-29T14:02:21Z" }, { "body": "That makes sense, I will formulate the logic in that manner.\n", "created_at": "2016-03-29T14:12:54Z" }, { "body": "Only issue with \n\n`else if (indicesService.canDeleteIndex(Index))`\n\nis that `canDeleteIndexContents` requires the `IndexSettings`, which can't be created until the `IndexMetaData` is loaded, so all that logic will need to go in an `else` block..\n", "created_at": "2016-03-29T16:16:46Z" }, { "body": "@bleskes As we discussed earlier, I tried implementing a method that verifies and deletes index contents. However, you can see above that I had to load the `IndexMetaData` from disk. The reason is that the `deleteClosedIndex` method uses it. Tracing through the code, the index metadata is mainly used to get the `Index`, but in the following method, it uses the `IndexSettings`:\n\n`private void deleteIndexStore(String reason, Index index, IndexSettings indexSettings, boolean closed)`\n\nThis method does the `canDeleteIndexContents` check and then calls `nodeEnv#deleteIndexDirectorySafe`, which uses the `IndexSettings` to obtain locks for each shard and to see if there are any custom data dirs that need to be deleted. \n\nSo I'm not sure we can get away with not loading the `IndexMetaData` unless we forego a lot of those checks and just do a file level delete of the index folder only without any checks or locks.\n", "created_at": "2016-03-29T22:01:32Z" }, { "body": "nit: this can be a method reference..\n", "created_at": "2016-03-30T11:43:27Z" }, { "body": "can we make those statically configurable ? (no need for dynamic settings)\n", "created_at": "2016-03-30T11:45:32Z" }, { "body": "can we make this a TimeValue (and statically configurable ) ?\n", "created_at": "2016-03-30T11:45:52Z" }, { "body": "why not have a simple array?\n", "created_at": "2016-03-30T11:46:18Z" }, { "body": "Oh I see why - there is no builder. Can we follow the same pattern as in other places - make this class immutable and add a builder? All purging and such can be done at the builder level.\n", "created_at": "2016-03-30T11:54:15Z" }, { "body": "can we move all purging to the purge method, which will always be called by the builder? The down side will be not having logging, but if people want that they can put the cluster service in trace logging and see the changes.\n", "created_at": "2016-03-30T11:58:55Z" }, { "body": "always do these things with nano time. Also the logic here is the reverse - we want to keep up to 10K items, UNLESS they are too recent. 
Which makes me think that the default _minimum_ expiration time can be 48 hours.\n", "created_at": "2016-03-30T12:00:22Z" }, { "body": "this method can the max size and min expiration window as parameters, which will make it easier to test and have those be settings,\n", "created_at": "2016-03-30T12:00:56Z" }, { "body": "can we keep the same semantics of other metadata parts and no write the key here (but rather in metadata class)\n", "created_at": "2016-03-30T12:04:05Z" }, { "body": "can we add the serialization logic we need to the Index object it self? we're likely to use it in other places.\n", "created_at": "2016-03-30T12:06:51Z" }, { "body": "Can we add a custom type and have `indexGraveyard()` be a syntactic sugar for getting it? we will get a lot of things for free..\n", "created_at": "2016-03-30T12:09:38Z" }, { "body": "see comment about making purging explicit. Also we should add the purge method to MetaDataCreateIndexService as well?\n", "created_at": "2016-03-30T12:15:13Z" }, { "body": "let's keep this out of here - it's complex enough :) see my suggestion to do it on index creation and deletion.\n", "created_at": "2016-03-30T12:16:27Z" } ], "title": "Adds tombstones to cluster state for index deletions" }
{ "commits": [ { "message": "Adds tombstones to cluster state for index deletions\n\nPreviously, we would determine index deletes in the cluster state by\ncomparing the index metadatas between the current cluster state and the\nprevious cluster state and decipher which ones were missing (the missing\nones are deleted indices). This led to a situation where a node that\nwent offline and rejoined the cluster could potentially cause dangling\nindices to be imported which should have been deleted, because when a node\nrejoins, its previous cluster state does not contain reliable state.\n\nThis commit introduces the notion of index tombstones in the cluster\nstate, where we are explicit about which indices have been deleted.\nIn the case where the previous cluster state is not useful for index\nmetadata comparisons, a node now determines which indices are to be\ndeleted based on these tombstones in the cluster state. There is also\nfunctionality to purge the tombstones after exceeding a certain amount.\n\nCloses #17265\nCloses #16358\nCloses #17435" }, { "message": "Code review comment fixes" }, { "message": "Address code review comments" }, { "message": "Changed purge - called within the tombstone builder's build() method now" }, { "message": "Addressed more suggestions" }, { "message": "Addresses code review comments" }, { "message": "Made a variable final" } ], "files": [ { "diff": "@@ -21,14 +21,17 @@\n \n import com.carrotsearch.hppc.cursors.ObjectCursor;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.IndexGraveyard;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.gateway.GatewayService;\n import org.elasticsearch.index.Index;\n \n import java.util.ArrayList;\n import java.util.Collections;\n import java.util.List;\n import java.util.Objects;\n+import java.util.stream.Collectors;\n \n /**\n * An event received by the local node, signaling that the cluster state has changed.\n@@ -122,28 +125,13 @@ public List<String> indicesCreated() {\n * Returns the indices deleted in this event\n */\n public List<Index> indicesDeleted() {\n- // If the new cluster state has a new cluster UUID, the likely scenario is that a node was elected\n- // master that has had its data directory wiped out, in which case we don't want to delete the indices and lose data;\n- // rather we want to import them as dangling indices instead. 
So we check here if the cluster UUID differs from the previous\n- // cluster UUID, in which case, we don't want to delete indices that the master erroneously believes shouldn't exist.\n- // See test DiscoveryWithServiceDisruptionsIT.testIndicesDeleted()\n- // See discussion on https://github.com/elastic/elasticsearch/pull/9952 and\n- // https://github.com/elastic/elasticsearch/issues/11665\n- if (metaDataChanged() == false || isNewCluster()) {\n- return Collections.emptyList();\n- }\n- List<Index> deleted = null;\n- for (ObjectCursor<IndexMetaData> cursor : previousState.metaData().indices().values()) {\n- IndexMetaData index = cursor.value;\n- IndexMetaData current = state.metaData().index(index.getIndex());\n- if (current == null) {\n- if (deleted == null) {\n- deleted = new ArrayList<>();\n- }\n- deleted.add(index.getIndex());\n- }\n+ if (previousState.blocks().hasGlobalBlock(GatewayService.STATE_NOT_RECOVERED_BLOCK)) {\n+ // working off of a non-initialized previous state, so use the tombstones for index deletions\n+ return indicesDeletedFromTombstones();\n+ } else {\n+ // examine the diffs in index metadata between the previous and new cluster states to get the deleted indices\n+ return indicesDeletedFromClusterState();\n }\n- return deleted == null ? Collections.<Index>emptyList() : deleted;\n }\n \n /**\n@@ -226,4 +214,43 @@ private boolean isNewCluster() {\n final String currClusterUUID = state.metaData().clusterUUID();\n return prevClusterUUID.equals(currClusterUUID) == false;\n }\n+\n+ // Get the deleted indices by comparing the index metadatas in the previous and new cluster states.\n+ // If an index exists in the previous cluster state, but not in the new cluster state, it must have been deleted.\n+ private List<Index> indicesDeletedFromClusterState() {\n+ // If the new cluster state has a new cluster UUID, the likely scenario is that a node was elected\n+ // master that has had its data directory wiped out, in which case we don't want to delete the indices and lose data;\n+ // rather we want to import them as dangling indices instead. So we check here if the cluster UUID differs from the previous\n+ // cluster UUID, in which case, we don't want to delete indices that the master erroneously believes shouldn't exist.\n+ // See test DiscoveryWithServiceDisruptionsIT.testIndicesDeleted()\n+ // See discussion on https://github.com/elastic/elasticsearch/pull/9952 and\n+ // https://github.com/elastic/elasticsearch/issues/11665\n+ if (metaDataChanged() == false || isNewCluster()) {\n+ return Collections.emptyList();\n+ }\n+ List<Index> deleted = null;\n+ for (ObjectCursor<IndexMetaData> cursor : previousState.metaData().indices().values()) {\n+ IndexMetaData index = cursor.value;\n+ IndexMetaData current = state.metaData().index(index.getIndex());\n+ if (current == null) {\n+ if (deleted == null) {\n+ deleted = new ArrayList<>();\n+ }\n+ deleted.add(index.getIndex());\n+ }\n+ }\n+ return deleted == null ? Collections.<Index>emptyList() : deleted;\n+ }\n+\n+ private List<Index> indicesDeletedFromTombstones() {\n+ // We look at the full tombstones list to see which indices need to be deleted. In the case of\n+ // a valid previous cluster state, indicesDeletedFromClusterState() will be used to get the deleted\n+ // list, so a diff doesn't make sense here. When a node (re)joins the cluster, its possible for it\n+ // to re-process the same deletes or process deletes about indices it never knew about. 
This is not\n+ // an issue because there are safeguards in place in the delete store operation in case the index\n+ // folder doesn't exist on the file system.\n+ List<IndexGraveyard.Tombstone> tombstones = state.metaData().indexGraveyard().getTombstones();\n+ return tombstones.stream().map(IndexGraveyard.Tombstone::getIndex).collect(Collectors.toList());\n+ }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/ClusterChangedEvent.java", "status": "modified" }, { "diff": "@@ -0,0 +1,467 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster.metadata;\n+\n+import org.elasticsearch.cluster.Diff;\n+import org.elasticsearch.common.ParseField;\n+import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.io.stream.Writeable;\n+import org.elasticsearch.common.joda.Joda;\n+import org.elasticsearch.common.settings.Setting;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.ObjectParser;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.index.Index;\n+\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.EnumSet;\n+import java.util.List;\n+import java.util.Objects;\n+import java.util.Set;\n+import java.util.concurrent.TimeUnit;\n+import java.util.function.BiFunction;\n+\n+/**\n+ * A collection of tombstones for explicitly marking indices as deleted in the cluster state.\n+ *\n+ * The cluster state contains a list of index tombstones for indices that have been\n+ * deleted in the cluster. 
Because cluster states are processed asynchronously by\n+ * nodes and a node could be removed from the cluster for a period of time, the\n+ * tombstones remain in the cluster state for a fixed period of time, after which\n+ * they are purged.\n+ */\n+final public class IndexGraveyard implements MetaData.Custom {\n+\n+ /**\n+ * Setting for the maximum tombstones allowed in the cluster state;\n+ * prevents the cluster state size from exploding too large, but it opens the\n+ * very unlikely risk that if there are greater than MAX_TOMBSTONES index\n+ * deletions while a node was offline, when it comes back online, it will have\n+ * missed index deletions that it may need to process.\n+ */\n+ public static final Setting<Integer> SETTING_MAX_TOMBSTONES = Setting.intSetting(\"cluster.indices.tombstones.size\",\n+ 500, // the default maximum number of tombstones\n+ Setting.Property.NodeScope);\n+\n+ public static final IndexGraveyard PROTO = new IndexGraveyard(new ArrayList<>());\n+ public static final String TYPE = \"index-graveyard\";\n+ private static final ParseField TOMBSTONES_FIELD = new ParseField(\"tombstones\");\n+ private static final ObjectParser<List<Tombstone>, Void> GRAVEYARD_PARSER;\n+ static {\n+ GRAVEYARD_PARSER = new ObjectParser<>(\"index_graveyard\", ArrayList::new);\n+ GRAVEYARD_PARSER.declareObjectArray(List::addAll, Tombstone.getParser(), TOMBSTONES_FIELD);\n+ }\n+\n+ private final List<Tombstone> tombstones;\n+\n+ private IndexGraveyard(final List<Tombstone> list) {\n+ assert list != null;\n+ tombstones = Collections.unmodifiableList(list);\n+ }\n+\n+ private IndexGraveyard(final StreamInput in) throws IOException {\n+ final int queueSize = in.readVInt();\n+ List<Tombstone> tombstones = new ArrayList<>(queueSize);\n+ for (int i = 0; i < queueSize; i++) {\n+ tombstones.add(new Tombstone(in));\n+ }\n+ this.tombstones = Collections.unmodifiableList(tombstones);\n+ }\n+\n+ public static IndexGraveyard fromStream(final StreamInput in) throws IOException {\n+ return new IndexGraveyard(in);\n+ }\n+\n+ @Override\n+ public String type() {\n+ return TYPE;\n+ }\n+\n+ @Override\n+ public EnumSet<MetaData.XContentContext> context() {\n+ return MetaData.API_AND_GATEWAY;\n+ }\n+\n+ @Override\n+ public boolean equals(Object obj) {\n+ return (obj instanceof IndexGraveyard) && Objects.equals(tombstones, ((IndexGraveyard)obj).tombstones);\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return tombstones.hashCode();\n+ }\n+\n+ /**\n+ * Get the current unmodifiable index tombstone list.\n+ */\n+ public List<Tombstone> getTombstones() {\n+ return tombstones;\n+ }\n+\n+ @Override\n+ public XContentBuilder toXContent(final XContentBuilder builder, final Params params) throws IOException {\n+ builder.startArray(TOMBSTONES_FIELD.getPreferredName());\n+ for (Tombstone tombstone : tombstones) {\n+ tombstone.toXContent(builder, params);\n+ }\n+ return builder.endArray();\n+ }\n+\n+ public IndexGraveyard fromXContent(final XContentParser parser) throws IOException {\n+ return new IndexGraveyard(GRAVEYARD_PARSER.parse(parser));\n+ }\n+\n+ @Override\n+ public String toString() {\n+ return \"IndexGraveyard[\" + tombstones + \"]\";\n+ }\n+\n+ @Override\n+ public void writeTo(final StreamOutput out) throws IOException {\n+ out.writeVInt(tombstones.size());\n+ for (Tombstone tombstone : tombstones) {\n+ tombstone.writeTo(out);\n+ }\n+ }\n+\n+ @Override\n+ public IndexGraveyard readFrom(final StreamInput in) throws IOException {\n+ return new IndexGraveyard(in);\n+ }\n+\n+ @Override\n+ 
@SuppressWarnings(\"unchecked\")\n+ public Diff<MetaData.Custom> diff(final MetaData.Custom previous) {\n+ return new IndexGraveyardDiff((IndexGraveyard) previous, this);\n+ }\n+\n+ @Override\n+ public Diff<MetaData.Custom> readDiffFrom(final StreamInput in) throws IOException {\n+ return new IndexGraveyardDiff(in);\n+ }\n+\n+ public static IndexGraveyard.Builder builder() {\n+ return new IndexGraveyard.Builder();\n+ }\n+\n+ public static IndexGraveyard.Builder builder(final IndexGraveyard graveyard) {\n+ return new IndexGraveyard.Builder(graveyard);\n+ }\n+\n+ /**\n+ * A class to build an IndexGraveyard.\n+ */\n+ final public static class Builder {\n+ private List<Tombstone> tombstones;\n+ private int numPurged = -1;\n+ private final long currentTime = System.currentTimeMillis();\n+\n+ private Builder() {\n+ tombstones = new ArrayList<>();\n+ }\n+\n+ private Builder(IndexGraveyard that) {\n+ tombstones = new ArrayList<>(that.getTombstones());\n+ }\n+\n+ /**\n+ * A copy of the current tombstones in the builder.\n+ */\n+ public List<Tombstone> tombstones() {\n+ return Collections.unmodifiableList(tombstones);\n+ }\n+\n+ /**\n+ * Add a deleted index to the list of tombstones in the cluster state.\n+ */\n+ public Builder addTombstone(final Index index) {\n+ tombstones.add(new Tombstone(index, currentTime));\n+ return this;\n+ }\n+\n+ /**\n+ * Add a set of deleted indexes to the list of tombstones in the cluster state.\n+ */\n+ public Builder addTombstones(final Set<Index> indices) {\n+ indices.stream().forEach(this::addTombstone);\n+ return this;\n+ }\n+\n+ /**\n+ * Add a list of tombstones to the graveyard.\n+ */\n+ Builder addBuiltTombstones(final List<Tombstone> tombstones) {\n+ this.tombstones.addAll(tombstones);\n+ return this;\n+ }\n+\n+ /**\n+ * Get the number of tombstones that were purged. This should *only* be called\n+ * after build() has been called.\n+ */\n+ public int getNumPurged() {\n+ assert numPurged != -1;\n+ return numPurged;\n+ }\n+\n+ /**\n+ * Purge tombstone entries. 
Returns the number of entries that were purged.\n+ *\n+ * Tombstones are purged if the number of tombstones in the list\n+ * is greater than the input parameter of maximum allowed tombstones.\n+ * Tombstones are purged until the list is equal to the maximum allowed.\n+ */\n+ private int purge(final int maxTombstones) {\n+ int count = tombstones().size() - maxTombstones;\n+ if (count <= 0) {\n+ return 0;\n+ }\n+ tombstones = tombstones.subList(count, tombstones.size());\n+ return count;\n+ }\n+\n+ public IndexGraveyard build() {\n+ return build(Settings.EMPTY);\n+ }\n+\n+ public IndexGraveyard build(final Settings settings) {\n+ // first, purge the necessary amount of entries\n+ numPurged = purge(SETTING_MAX_TOMBSTONES.get(settings));\n+ return new IndexGraveyard(tombstones);\n+ }\n+ }\n+\n+ /**\n+ * A class representing a diff of two IndexGraveyard objects.\n+ */\n+ final public static class IndexGraveyardDiff implements Diff<MetaData.Custom> {\n+\n+ private final List<Tombstone> added;\n+ private final int removedCount;\n+\n+ IndexGraveyardDiff(final StreamInput in) throws IOException {\n+ added = Collections.unmodifiableList(in.readList((streamInput) -> new Tombstone(streamInput)));\n+ removedCount = in.readVInt();\n+ }\n+\n+ IndexGraveyardDiff(final IndexGraveyard previous, final IndexGraveyard current) {\n+ final List<Tombstone> previousTombstones = previous.tombstones;\n+ final List<Tombstone> currentTombstones = current.tombstones;\n+ final List<Tombstone> added;\n+ final int removed;\n+ if (previousTombstones.isEmpty()) {\n+ // nothing will have been removed, and all entries in current are new\n+ added = new ArrayList<>(currentTombstones);\n+ removed = 0;\n+ } else if (currentTombstones.isEmpty()) {\n+ // nothing will have been added, and all entries in previous are removed\n+ added = Collections.emptyList();\n+ removed = previousTombstones.size();\n+ } else {\n+ // look through the back, starting from the end, for added tombstones\n+ final Tombstone lastAddedTombstone = previousTombstones.get(previousTombstones.size() - 1);\n+ final int addedIndex = currentTombstones.lastIndexOf(lastAddedTombstone);\n+ if (addedIndex < currentTombstones.size()) {\n+ added = currentTombstones.subList(addedIndex + 1, currentTombstones.size());\n+ } else {\n+ added = Collections.emptyList();\n+ }\n+ // look from the front for the removed tombstones\n+ final Tombstone firstTombstone = currentTombstones.get(0);\n+ int idx = previousTombstones.indexOf(firstTombstone);\n+ if (idx < 0) {\n+ // the first tombstone in the current list wasn't found in the previous list,\n+ // which means all tombstones from the previous list have been deleted.\n+ assert added.equals(currentTombstones); // all previous are removed, so the current list must be the same as the added\n+ idx = previousTombstones.size();\n+ }\n+ removed = idx;\n+ }\n+ this.added = Collections.unmodifiableList(added);\n+ this.removedCount = removed;\n+ }\n+\n+ @Override\n+ public void writeTo(final StreamOutput out) throws IOException {\n+ out.writeList(added);\n+ out.writeVInt(removedCount);\n+ }\n+\n+ @Override\n+ public IndexGraveyard apply(final MetaData.Custom previous) {\n+ @SuppressWarnings(\"unchecked\") final IndexGraveyard old = (IndexGraveyard) previous;\n+ if (removedCount > old.tombstones.size()) {\n+ throw new IllegalStateException(\"IndexGraveyardDiff cannot remove [\" + removedCount + \"] entries from [\" +\n+ old.tombstones.size() + \"] tombstones.\");\n+ }\n+ final List<Tombstone> newTombstones = new 
ArrayList<>(old.tombstones.subList(removedCount, old.tombstones.size()));\n+ for (Tombstone tombstone : added) {\n+ newTombstones.add(tombstone);\n+ }\n+ return new IndexGraveyard.Builder().addBuiltTombstones(newTombstones).build();\n+ }\n+\n+ /** The index tombstones that were added between two states */\n+ public List<Tombstone> getAdded() {\n+ return added;\n+ }\n+\n+ /** The number of index tombstones that were removed between two states */\n+ public int getRemovedCount() {\n+ return removedCount;\n+ }\n+ }\n+\n+ /**\n+ * An individual tombstone entry for representing a deleted index.\n+ */\n+ final public static class Tombstone implements ToXContent, Writeable<Tombstone> {\n+\n+ private static final String INDEX_KEY = \"index\";\n+ private static final String DELETE_DATE_IN_MILLIS_KEY = \"delete_date_in_millis\";\n+ private static final String DELETE_DATE_KEY = \"delete_date\";\n+ private static final ObjectParser<Tombstone.Builder, Void> TOMBSTONE_PARSER;\n+ static {\n+ TOMBSTONE_PARSER = new ObjectParser<>(\"tombstoneEntry\", Tombstone.Builder::new);\n+ TOMBSTONE_PARSER.declareObject(Tombstone.Builder::index, Index::parseIndex, new ParseField(INDEX_KEY));\n+ TOMBSTONE_PARSER.declareLong(Tombstone.Builder::deleteDateInMillis, new ParseField(DELETE_DATE_IN_MILLIS_KEY));\n+ TOMBSTONE_PARSER.declareString((b, s) -> {}, new ParseField(DELETE_DATE_KEY));\n+ }\n+\n+ static BiFunction<XContentParser, Void, Tombstone> getParser() {\n+ return (p, c) -> TOMBSTONE_PARSER.apply(p, c).build();\n+ }\n+\n+ private final Index index;\n+ private final long deleteDateInMillis;\n+\n+ private Tombstone(final Index index, final long deleteDateInMillis) {\n+ Objects.requireNonNull(index);\n+ if (deleteDateInMillis < 0L) {\n+ throw new IllegalArgumentException(\"invalid deleteDateInMillis [\" + deleteDateInMillis + \"]\");\n+ }\n+ this.index = index;\n+ this.deleteDateInMillis = deleteDateInMillis;\n+ }\n+\n+ // create from stream\n+ private Tombstone(StreamInput in) throws IOException {\n+ index = new Index(in);\n+ deleteDateInMillis = in.readLong();\n+ }\n+\n+ /**\n+ * The deleted index.\n+ */\n+ public Index getIndex() {\n+ return index;\n+ }\n+\n+ /**\n+ * The date in milliseconds that the index deletion event occurred, used for logging/debugging.\n+ */\n+ public long getDeleteDateInMillis() {\n+ return deleteDateInMillis;\n+ }\n+\n+ @Override\n+ public void writeTo(final StreamOutput out) throws IOException {\n+ index.writeTo(out);\n+ out.writeLong(deleteDateInMillis);\n+ }\n+\n+ @Override\n+ public Tombstone readFrom(final StreamInput in) throws IOException {\n+ return new Tombstone(in);\n+ }\n+\n+ @Override\n+ public boolean equals(final Object other) {\n+ if (this == other) {\n+ return true;\n+ }\n+ if (other == null || getClass() != other.getClass()) {\n+ return false;\n+ }\n+ @SuppressWarnings(\"unchecked\") Tombstone that = (Tombstone) other;\n+ if (index.equals(that.index) == false) {\n+ return false;\n+ }\n+ if (deleteDateInMillis != that.deleteDateInMillis) {\n+ return false;\n+ }\n+ return true;\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ int result = index.hashCode();\n+ result = 31 * result + Long.hashCode(deleteDateInMillis);\n+ return result;\n+ }\n+\n+ @Override\n+ public String toString() {\n+ return \"[index=\" + index + \", deleteDate=\" + Joda.getStrictStandardDateFormatter().printer().print(deleteDateInMillis) + \"]\";\n+ }\n+\n+ @Override\n+ public XContentBuilder toXContent(final XContentBuilder builder, final Params params) throws IOException {\n+ 
builder.startObject();\n+ builder.field(INDEX_KEY);\n+ index.toXContent(builder, params);\n+ builder.timeValueField(DELETE_DATE_IN_MILLIS_KEY, DELETE_DATE_KEY, deleteDateInMillis, TimeUnit.MILLISECONDS);\n+ return builder.endObject();\n+ }\n+\n+ public static Tombstone fromXContent(final XContentParser parser) throws IOException {\n+ return TOMBSTONE_PARSER.parse(parser, new Tombstone.Builder()).build();\n+ }\n+\n+ /**\n+ * A builder for building tombstone entries.\n+ */\n+ final private static class Builder {\n+ private Index index;\n+ private long deleteDateInMillis = -1L;\n+\n+ public void index(final Index index) {\n+ this.index = index;\n+ }\n+\n+ public void deleteDateInMillis(final long deleteDate) {\n+ this.deleteDateInMillis = deleteDate;\n+ }\n+\n+ public Tombstone build() {\n+ assert index != null;\n+ assert deleteDateInMillis > -1L;\n+ return new Tombstone(index, deleteDateInMillis);\n+ }\n+ }\n+ }\n+\n+}", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexGraveyard.java", "status": "added" }, { "diff": "@@ -115,6 +115,7 @@ public interface Custom extends Diffable<Custom>, ToXContent {\n // register non plugin custom metadata\n registerPrototype(RepositoriesMetaData.TYPE, RepositoriesMetaData.PROTO);\n registerPrototype(IngestMetadata.TYPE, IngestMetadata.PROTO);\n+ registerPrototype(IndexGraveyard.TYPE, IndexGraveyard.PROTO);\n }\n \n /**\n@@ -175,7 +176,10 @@ public static <T extends Custom> T lookupPrototypeSafe(String type) {\n private final SortedMap<String, AliasOrIndex> aliasAndIndexLookup;\n \n @SuppressWarnings(\"unchecked\")\n- MetaData(String clusterUUID, long version, Settings transientSettings, Settings persistentSettings, ImmutableOpenMap<String, IndexMetaData> indices, ImmutableOpenMap<String, IndexTemplateMetaData> templates, ImmutableOpenMap<String, Custom> customs, String[] allIndices, String[] allOpenIndices, String[] allClosedIndices, SortedMap<String, AliasOrIndex> aliasAndIndexLookup) {\n+ MetaData(String clusterUUID, long version, Settings transientSettings, Settings persistentSettings,\n+ ImmutableOpenMap<String, IndexMetaData> indices, ImmutableOpenMap<String, IndexTemplateMetaData> templates,\n+ ImmutableOpenMap<String, Custom> customs, String[] allIndices, String[] allOpenIndices, String[] allClosedIndices,\n+ SortedMap<String, AliasOrIndex> aliasAndIndexLookup) {\n this.clusterUUID = clusterUUID;\n this.version = version;\n this.transientSettings = transientSettings;\n@@ -495,6 +499,13 @@ public ImmutableOpenMap<String, Custom> getCustoms() {\n return this.customs;\n }\n \n+ /**\n+ * The collection of index deletions in the cluster.\n+ */\n+ public IndexGraveyard indexGraveyard() {\n+ return custom(IndexGraveyard.TYPE);\n+ }\n+\n public <T extends Custom> T custom(String type) {\n return (T) customs.get(type);\n }\n@@ -609,7 +620,6 @@ private static class MetaDataDiff implements Diff<MetaData> {\n private Diff<ImmutableOpenMap<String, IndexTemplateMetaData>> templates;\n private Diff<ImmutableOpenMap<String, Custom>> customs;\n \n-\n public MetaDataDiff(MetaData before, MetaData after) {\n clusterUUID = after.clusterUUID;\n version = after.version;\n@@ -812,6 +822,7 @@ public Builder() {\n indices = ImmutableOpenMap.builder();\n templates = ImmutableOpenMap.builder();\n customs = ImmutableOpenMap.builder();\n+ indexGraveyard(IndexGraveyard.builder().build()); // create new empty index graveyard to initialize\n }\n \n public Builder(MetaData metaData) {\n@@ -914,6 +925,16 @@ public Builder customs(ImmutableOpenMap<String, Custom> 
customs) {\n return this;\n }\n \n+ public Builder indexGraveyard(final IndexGraveyard indexGraveyard) {\n+ putCustom(IndexGraveyard.TYPE, indexGraveyard);\n+ return this;\n+ }\n+\n+ public IndexGraveyard indexGraveyard() {\n+ @SuppressWarnings(\"unchecked\") IndexGraveyard graveyard = (IndexGraveyard) getCustom(IndexGraveyard.TYPE);\n+ return graveyard;\n+ }\n+\n public Builder updateSettings(Settings settings, String... indices) {\n if (indices == null || indices.length == 0) {\n indices = this.indices.keys().toArray(String.class);\n@@ -1031,7 +1052,8 @@ public MetaData build() {\n }\n }\n aliasAndIndexLookup = Collections.unmodifiableSortedMap(aliasAndIndexLookup);\n- return new MetaData(clusterUUID, version, transientSettings, persistentSettings, indices.build(), templates.build(), customs.build(), allIndices, allOpenIndices, allClosedIndices, aliasAndIndexLookup);\n+ return new MetaData(clusterUUID, version, transientSettings, persistentSettings, indices.build(), templates.build(),\n+ customs.build(), allIndices, allOpenIndices, allClosedIndices, aliasAndIndexLookup);\n }\n \n public static String toXContent(MetaData metaData) throws IOException {", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java", "status": "modified" }, { "diff": "@@ -93,13 +93,21 @@ public ClusterState execute(final ClusterState currentState) {\n MetaData.Builder metaDataBuilder = MetaData.builder(meta);\n ClusterBlocks.Builder clusterBlocksBuilder = ClusterBlocks.builder().blocks(currentState.blocks());\n \n+ final IndexGraveyard.Builder graveyardBuilder = IndexGraveyard.builder(metaDataBuilder.indexGraveyard());\n+ final int previousGraveyardSize = graveyardBuilder.tombstones().size();\n for (final Index index : indices) {\n String indexName = index.getName();\n logger.debug(\"[{}] deleting index\", index);\n routingTableBuilder.remove(indexName);\n clusterBlocksBuilder.removeIndexBlocks(indexName);\n metaDataBuilder.remove(indexName);\n }\n+ // add tombstones to the cluster state for each deleted index\n+ final IndexGraveyard currentGraveyard = graveyardBuilder.addTombstones(indices).build(settings);\n+ metaDataBuilder.indexGraveyard(currentGraveyard); // the new graveyard set on the metadata\n+ logger.trace(\"{} tombstones purged from the cluster state. Previous tombstone size: {}. 
Current tombstone size: {}.\",\n+ graveyardBuilder.getNumPurged(), previousGraveyardSize, currentGraveyard.getTombstones().size());\n+\n // wait for events from all nodes that it has been removed from their respective metadata...\n int count = currentState.nodes().getSize();\n // add the notifications that the store was deleted from *data* nodes", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataDeleteIndexService.java", "status": "modified" }, { "diff": "@@ -29,7 +29,6 @@\n import org.elasticsearch.index.Index;\n \n import java.io.IOException;\n-import java.nio.file.Path;\n import java.util.ArrayList;\n import java.util.List;\n import java.util.function.Predicate;\n@@ -74,7 +73,7 @@ MetaData loadFullState() throws Exception {\n * Loads the index state for the provided index name, returning null if doesn't exists.\n */\n @Nullable\n- IndexMetaData loadIndexState(Index index) throws IOException {\n+ public IndexMetaData loadIndexState(Index index) throws IOException {\n return IndexMetaData.FORMAT.loadLatestState(logger, nodeEnv.indexPaths(index));\n }\n \n@@ -119,8 +118,10 @@ MetaData loadGlobalState() throws IOException {\n \n /**\n * Writes the index state.\n+ *\n+ * This method is public for testing purposes.\n */\n- void writeIndex(String reason, IndexMetaData indexMetaData) throws IOException {\n+ public void writeIndex(String reason, IndexMetaData indexMetaData) throws IOException {\n final Index index = indexMetaData.getIndex();\n logger.trace(\"[{}] writing state, reason [{}]\", index, reason);\n try {", "filename": "core/src/main/java/org/elasticsearch/gateway/MetaStateService.java", "status": "modified" }, { "diff": "@@ -20,33 +20,45 @@\n package org.elasticsearch.index;\n \n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Writeable;\n+import org.elasticsearch.common.xcontent.ObjectParser;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParser;\n \n import java.io.IOException;\n+import java.util.Objects;\n \n /**\n- *\n+ * A value class representing the basic required properties of an Elasticsearch index.\n */\n-public class Index implements Writeable<Index> {\n+public class Index implements Writeable<Index>, ToXContent {\n \n public static final Index[] EMPTY_ARRAY = new Index[0];\n+ private static final String INDEX_UUID_KEY = \"index_uuid\";\n+ private static final String INDEX_NAME_KEY = \"index_name\";\n+ private static final ObjectParser<Builder, Void> INDEX_PARSER = new ObjectParser<>(\"index\", Index.Builder::new);\n+ static {\n+ INDEX_PARSER.declareString(Builder::name, new ParseField(INDEX_NAME_KEY));\n+ INDEX_PARSER.declareString(Builder::uuid, new ParseField(INDEX_UUID_KEY));\n+ }\n \n private final String name;\n private final String uuid;\n \n public Index(String name, String uuid) {\n- this.name = name.intern();\n- this.uuid = uuid.intern();\n+ this.name = Objects.requireNonNull(name).intern();\n+ this.uuid = Objects.requireNonNull(uuid).intern();\n }\n \n public Index(StreamInput in) throws IOException {\n this.name = in.readString();\n this.uuid = in.readString();\n }\n \n-\n public String getName() {\n return this.name;\n }\n@@ -96,4 +108,40 @@ public void writeTo(StreamOutput out) throws IOException {\n 
out.writeString(name);\n out.writeString(uuid);\n }\n+\n+ @Override\n+ public XContentBuilder toXContent(final XContentBuilder builder, final Params params) throws IOException {\n+ builder.startObject();\n+ builder.field(INDEX_NAME_KEY, name);\n+ builder.field(INDEX_UUID_KEY, uuid);\n+ return builder.endObject();\n+ }\n+\n+ public static Index fromXContent(final XContentParser parser) throws IOException {\n+ return INDEX_PARSER.parse(parser, new Builder()).build();\n+ }\n+\n+ public static final Index parseIndex(XContentParser parser, Void context) {\n+ return INDEX_PARSER.apply(parser, context).build();\n+ }\n+\n+ /**\n+ * Builder for Index objects. Used by ObjectParser instances only.\n+ */\n+ final private static class Builder {\n+ private String name;\n+ private String uuid;\n+\n+ public void name(final String name) {\n+ this.name = name;\n+ }\n+\n+ public void uuid(final String uuid) {\n+ this.uuid = uuid;\n+ }\n+\n+ public Index build() {\n+ return new Index(name, uuid);\n+ }\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/Index.java", "status": "modified" }, { "diff": "@@ -60,6 +60,7 @@\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.env.ShardLock;\n import org.elasticsearch.gateway.MetaDataStateFormat;\n+import org.elasticsearch.gateway.MetaStateService;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexModule;\n import org.elasticsearch.index.IndexNotFoundException;\n@@ -149,6 +150,7 @@ public class IndicesService extends AbstractLifecycleComponent<IndicesService> i\n private final TimeValue cleanInterval;\n private final IndicesRequestCache indicesRequestCache;\n private final IndicesQueryCache indicesQueryCache;\n+ private final MetaStateService metaStateService;\n \n @Override\n protected void doStart() {\n@@ -161,7 +163,8 @@ public IndicesService(Settings settings, PluginsService pluginsService, NodeEnvi\n ClusterSettings clusterSettings, AnalysisRegistry analysisRegistry,\n IndicesQueriesRegistry indicesQueriesRegistry, IndexNameExpressionResolver indexNameExpressionResolver,\n ClusterService clusterService, MapperRegistry mapperRegistry, NamedWriteableRegistry namedWriteableRegistry,\n- ThreadPool threadPool, IndexScopedSettings indexScopedSettings, CircuitBreakerService circuitBreakerService) {\n+ ThreadPool threadPool, IndexScopedSettings indexScopedSettings, CircuitBreakerService circuitBreakerService,\n+ MetaStateService metaStateService) {\n super(settings);\n this.threadPool = threadPool;\n this.pluginsService = pluginsService;\n@@ -190,6 +193,7 @@ public void onRemoval(ShardId shardId, String fieldName, boolean wasEvicted, lon\n });\n this.cleanInterval = INDICES_CACHE_CLEAN_INTERVAL_SETTING.get(settings);\n this.cacheCleaner = new CacheCleaner(indicesFieldDataCache, indicesRequestCache, logger, threadPool, this.cleanInterval);\n+ this.metaStateService = metaStateService;\n }\n \n @Override\n@@ -562,13 +566,18 @@ void deleteIndexStore(String reason, IndexMetaData metaData, ClusterState cluste\n }\n \n private void deleteIndexStore(String reason, Index index, IndexSettings indexSettings) throws IOException {\n+ deleteIndexStoreIfDeletionAllowed(reason, index, indexSettings, DEFAULT_INDEX_DELETION_PREDICATE);\n+ }\n+\n+ private void deleteIndexStoreIfDeletionAllowed(final String reason, final Index index, final IndexSettings indexSettings,\n+ final IndexDeletionAllowedPredicate predicate) throws IOException {\n boolean success = false;\n try {\n // we are trying to delete the index store here - 
not a big deal if the lock can't be obtained\n // the store metadata gets wiped anyway even without the lock this is just best effort since\n // every shards deletes its content under the shard lock it owns.\n logger.debug(\"{} deleting index store reason [{}]\", index, reason);\n- if (canDeleteIndexContents(index, indexSettings)) {\n+ if (predicate.apply(index, indexSettings)) {\n // its safe to delete all index metadata and shard data\n nodeEnv.deleteIndexDirectorySafe(index, 0, indexSettings);\n }\n@@ -662,6 +671,41 @@ public boolean canDeleteIndexContents(Index index, IndexSettings indexSettings)\n return false;\n }\n \n+ /**\n+ * Verify that the contents on disk for the given index is deleted; if not, delete the contents.\n+ * This method assumes that an index is already deleted in the cluster state and/or explicitly\n+ * through index tombstones.\n+ * @param index {@code Index} to make sure its deleted from disk\n+ * @param clusterState {@code ClusterState} to ensure the index is not part of it\n+ * @return IndexMetaData for the index loaded from disk\n+ */\n+ @Nullable\n+ public IndexMetaData verifyIndexIsDeleted(final Index index, final ClusterState clusterState) {\n+ // this method should only be called when we know the index is not part of the cluster state\n+ if (clusterState.metaData().hasIndex(index.getName())) {\n+ throw new IllegalStateException(\"Cannot delete index [\" + index + \"], it is still part of the cluster state.\");\n+ }\n+ if (nodeEnv.hasNodeFile() && FileSystemUtils.exists(nodeEnv.indexPaths(index))) {\n+ final IndexMetaData metaData;\n+ try {\n+ metaData = metaStateService.loadIndexState(index);\n+ } catch (IOException e) {\n+ logger.warn(\"[{}] failed to load state file from a stale deleted index, folders will be left on disk\", e, index);\n+ return null;\n+ }\n+ final IndexSettings indexSettings = buildIndexSettings(metaData);\n+ try {\n+ deleteIndexStoreIfDeletionAllowed(\"stale deleted index\", index, indexSettings, ALWAYS_TRUE);\n+ } catch (IOException e) {\n+ // we just warn about the exception here because if deleteIndexStoreIfDeletionAllowed\n+ // throws an exception, it gets added to the list of pending deletes to be tried again\n+ logger.warn(\"[{}] failed to delete index on disk\", e, metaData.getIndex());\n+ }\n+ return metaData;\n+ }\n+ return null;\n+ }\n+\n /**\n * Returns <code>true</code> iff the shards content for the given shard can be deleted.\n * This method will return <code>false</code> if:\n@@ -1073,4 +1117,13 @@ public void onRemoval(RemovalNotification<IndicesRequestCache.Key, IndicesReques\n \n }\n \n+ @FunctionalInterface\n+ interface IndexDeletionAllowedPredicate {\n+ boolean apply(Index index, IndexSettings indexSettings);\n+ }\n+\n+ private final IndexDeletionAllowedPredicate DEFAULT_INDEX_DELETION_PREDICATE =\n+ (Index index, IndexSettings indexSettings) -> canDeleteIndexContents(index, indexSettings);\n+ private final IndexDeletionAllowedPredicate ALWAYS_TRUE = (Index index, IndexSettings indexSettings) -> true;\n+\n }", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java", "status": "modified" }, { "diff": "@@ -46,6 +46,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.Callback;\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n+import org.elasticsearch.gateway.GatewayService;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.IndexSettings;\n@@ -233,15 +234,34 @@ 
private void applyDeletedIndices(final ClusterChangedEvent event) {\n if (idxService != null) {\n indexSettings = idxService.getIndexSettings();\n deleteIndex(index, \"index no longer part of the metadata\");\n- } else {\n- final IndexMetaData metaData = previousState.metaData().getIndexSafe(index);\n+ } else if (previousState.metaData().hasIndex(index.getName())) {\n+ // The deleted index was part of the previous cluster state, but not loaded on the local node\n+ final IndexMetaData metaData = previousState.metaData().index(index);\n indexSettings = new IndexSettings(metaData, settings);\n- indicesService.deleteUnassignedIndex(\"closed index no longer part of the metadata\", metaData, event.state());\n+ indicesService.deleteUnassignedIndex(\"deleted index was not assigned to local node\", metaData, event.state());\n+ } else {\n+ // The previous cluster state's metadata also does not contain the index,\n+ // which is what happens on node startup when an index was deleted while the\n+ // node was not part of the cluster. In this case, try reading the index\n+ // metadata from disk. If its not there, there is nothing to delete.\n+ // First, though, verify the precondition for applying this case by\n+ // asserting that the previous cluster state is not initialized/recovered.\n+ assert previousState.blocks().hasGlobalBlock(GatewayService.STATE_NOT_RECOVERED_BLOCK);\n+ final IndexMetaData metaData = indicesService.verifyIndexIsDeleted(index, event.state());\n+ if (metaData != null) {\n+ indexSettings = new IndexSettings(metaData, settings);\n+ } else {\n+ indexSettings = null;\n+ }\n }\n- try {\n- nodeIndexDeletedAction.nodeIndexDeleted(event.state(), index, indexSettings, localNodeId);\n- } catch (Throwable e) {\n- logger.debug(\"failed to send to master index {} deleted event\", e, index);\n+ // indexSettings can only be null if there was no IndexService and no metadata existed\n+ // on disk for this index, so it won't need to go through the node deleted action anyway\n+ if (indexSettings != null) {\n+ try {\n+ nodeIndexDeletedAction.nodeIndexDeleted(event.state(), index, indexSettings, localNodeId);\n+ } catch (Exception e) {\n+ logger.debug(\"failed to send to master index {} deleted event\", e, index);\n+ }\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java", "status": "modified" }, { "diff": "@@ -21,6 +21,8 @@\n \n import com.carrotsearch.hppc.cursors.ObjectCursor;\n import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.block.ClusterBlocks;\n+import org.elasticsearch.cluster.metadata.IndexGraveyard;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n@@ -29,12 +31,16 @@\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.DummyTransportAddress;\n+import org.elasticsearch.gateway.GatewayService;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.test.ESTestCase;\n \n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collections;\n+import java.util.EnumSet;\n import java.util.HashSet;\n+import java.util.Iterator;\n import java.util.List;\n import java.util.Set;\n import java.util.stream.Collectors;\n@@ -47,13 +53,12 @@\n public class ClusterChangedEventTests extends ESTestCase {\n \n private static final ClusterName TEST_CLUSTER_NAME = new ClusterName(\"test\");\n- private 
static final int INDICES_CHANGE_NUM_TESTS = 5;\n private static final String NODE_ID_PREFIX = \"node_\";\n private static final String INITIAL_CLUSTER_ID = Strings.randomBase64UUID();\n // the initial indices which every cluster state test starts out with\n- private static final List<String> initialIndices = Arrays.asList(\"idx1\", \"idx2\", \"idx3\");\n- // index settings\n- private static final Settings settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n+ private static final List<Index> initialIndices = Arrays.asList(new Index(\"idx1\", Strings.randomBase64UUID()),\n+ new Index(\"idx2\", Strings.randomBase64UUID()),\n+ new Index(\"idx3\", Strings.randomBase64UUID()));\n \n /**\n * Test basic properties of the ClusterChangedEvent class:\n@@ -105,20 +110,29 @@ public void testLocalNodeIsMaster() {\n \n /**\n * Test that the indices created and indices deleted lists between two cluster states\n- * are correct when there is no change in the cluster UUID. Also tests metadata equality\n- * between cluster states.\n+ * are correct when there is a change in indices added and deleted. Also tests metadata\n+ * equality between cluster states.\n */\n- public void testMetaDataChangesOnNoMasterChange() {\n- metaDataChangesCheck(false);\n+ public void testIndicesMetaDataChanges() {\n+ final int numNodesInCluster = 3;\n+ ClusterState previousState = createState(numNodesInCluster, randomBoolean(), initialIndices);\n+ for (TombstoneDeletionQuantity quantity : TombstoneDeletionQuantity.valuesInRandomizedOrder()) {\n+ final ClusterState newState = executeIndicesChangesTest(previousState, quantity);\n+ previousState = newState; // serves as the base cluster state for the next iteration\n+ }\n }\n \n /**\n- * Test that the indices created and indices deleted lists between two cluster states\n- * are correct when there is a change in the cluster UUID. Also tests metadata equality\n- * between cluster states.\n+ * Test that the indices deleted list is correct when the previous cluster state is\n+ * not initialized/recovered. 
This should trigger the use of the index tombstones to\n+ * determine the deleted indices.\n */\n- public void testMetaDataChangesOnNewClusterUUID() {\n- metaDataChangesCheck(true);\n+ public void testIndicesDeletionWithNotRecoveredState() {\n+ // test with all the various tombstone deletion quantities\n+ for (TombstoneDeletionQuantity quantity : TombstoneDeletionQuantity.valuesInRandomizedOrder()) {\n+ final ClusterState previousState = createNonInitializedState(randomIntBetween(3, 5), randomBoolean());\n+ executeIndicesChangesTest(previousState, quantity);\n+ }\n }\n \n /**\n@@ -131,15 +145,15 @@ public void testIndexMetaDataChange() {\n final ClusterChangedEvent event = new ClusterChangedEvent(\"_na_\", originalState, newState);\n \n // test when its not the same IndexMetaData\n- final String indexId = initialIndices.get(0);\n- final IndexMetaData originalIndexMeta = originalState.metaData().index(indexId);\n+ final Index index = initialIndices.get(0);\n+ final IndexMetaData originalIndexMeta = originalState.metaData().index(index);\n // make sure the metadata is actually on the cluster state\n- assertNotNull(\"IndexMetaData for \" + indexId + \" should exist on the cluster state\", originalIndexMeta);\n- IndexMetaData newIndexMeta = createIndexMetadata(indexId, originalIndexMeta.getVersion() + 1);\n+ assertNotNull(\"IndexMetaData for \" + index + \" should exist on the cluster state\", originalIndexMeta);\n+ IndexMetaData newIndexMeta = createIndexMetadata(index, originalIndexMeta.getVersion() + 1);\n assertTrue(\"IndexMetaData with different version numbers must be considered changed\", event.indexMetaDataChanged(newIndexMeta));\n \n // test when it doesn't exist\n- newIndexMeta = createIndexMetadata(\"doesntexist\");\n+ newIndexMeta = createIndexMetadata(new Index(\"doesntexist\", Strings.randomBase64UUID()));\n assertTrue(\"IndexMetaData that didn't previously exist should be considered changed\", event.indexMetaDataChanged(newIndexMeta));\n \n // test when its the same IndexMetaData\n@@ -176,7 +190,7 @@ public void testNodesAddedAndRemovedAndChanged() {\n \n // test when nodes both added and removed between cluster states\n // here we reuse the newState from the previous run which already added extra nodes\n- newState = nextState(newState, randomBoolean(), Collections.emptyList(), Collections.emptyList(), 1);\n+ newState = nextState(newState, Collections.emptyList(), Collections.emptyList(), 1);\n event = new ClusterChangedEvent(\"_na_\", newState, originalState);\n assertTrue(\"Nodes should have been removed between cluster states\", event.nodesRemoved());\n assertTrue(\"Nodes should have been added between cluster states\", event.nodesAdded());\n@@ -194,48 +208,27 @@ public void testRoutingTableChanges() {\n ClusterState newState = ClusterState.builder(originalState).build();\n ClusterChangedEvent event = new ClusterChangedEvent(\"_na_\", originalState, newState);\n assertFalse(\"routing tables should be the same object\", event.routingTableChanged());\n- assertFalse(\"index routing table should be the same object\", event.indexRoutingTableChanged(initialIndices.get(0)));\n+ assertFalse(\"index routing table should be the same object\", event.indexRoutingTableChanged(initialIndices.get(0).getName()));\n \n // routing tables and index routing tables aren't same object\n newState = createState(numNodesInCluster, randomBoolean(), initialIndices);\n event = new ClusterChangedEvent(\"_na_\", originalState, newState);\n assertTrue(\"routing tables should not be the same object\", 
event.routingTableChanged());\n- assertTrue(\"index routing table should not be the same object\", event.indexRoutingTableChanged(initialIndices.get(0)));\n+ assertTrue(\"index routing table should not be the same object\", event.indexRoutingTableChanged(initialIndices.get(0).getName()));\n \n // index routing tables are different because they don't exist\n newState = createState(numNodesInCluster, randomBoolean(), initialIndices.subList(1, initialIndices.size()));\n event = new ClusterChangedEvent(\"_na_\", originalState, newState);\n assertTrue(\"routing tables should not be the same object\", event.routingTableChanged());\n- assertTrue(\"index routing table should not be the same object\", event.indexRoutingTableChanged(initialIndices.get(0)));\n- }\n-\n- // Tests that the indices change list is correct as well as metadata equality when the metadata has changed.\n- private static void metaDataChangesCheck(final boolean changeClusterUUID) {\n- final int numNodesInCluster = 3;\n- for (int i = 0; i < INDICES_CHANGE_NUM_TESTS; i++) {\n- final ClusterState previousState = createState(numNodesInCluster, randomBoolean(), initialIndices);\n- final int numAdd = randomIntBetween(0, 5); // add random # of indices to the next cluster state\n- final int numDel = randomIntBetween(0, initialIndices.size()); // delete random # of indices from the next cluster state\n- final List<String> addedIndices = addIndices(numAdd);\n- final List<String> delIndices = delIndices(numDel, initialIndices);\n- final ClusterState newState = nextState(previousState, changeClusterUUID, addedIndices, delIndices, 0);\n- final ClusterChangedEvent event = new ClusterChangedEvent(\"_na_\", newState, previousState);\n- final List<String> addsFromEvent = event.indicesCreated();\n- final List<String> delsFromEvent = event.indicesDeleted().stream().map((s) -> s.getName()).collect(Collectors.toList());\n- Collections.sort(addsFromEvent);\n- Collections.sort(delsFromEvent);\n- assertThat(addsFromEvent, equalTo(addedIndices));\n- assertThat(delsFromEvent, changeClusterUUID ? 
equalTo(Collections.emptyList()) : equalTo(delIndices));\n- assertThat(event.metaDataChanged(), equalTo(changeClusterUUID || addedIndices.size() > 0 || delIndices.size() > 0));\n- }\n+ assertTrue(\"index routing table should not be the same object\", event.indexRoutingTableChanged(initialIndices.get(0).getName()));\n }\n \n private static ClusterState createSimpleClusterState() {\n return ClusterState.builder(TEST_CLUSTER_NAME).build();\n }\n \n // Create a basic cluster state with a given set of indices\n- private static ClusterState createState(final int numNodes, final boolean isLocalMaster, final List<String> indices) {\n+ private static ClusterState createState(final int numNodes, final boolean isLocalMaster, final List<Index> indices) {\n final MetaData metaData = createMetaData(indices);\n return ClusterState.builder(TEST_CLUSTER_NAME)\n .nodes(createDiscoveryNodes(numNodes, isLocalMaster))\n@@ -244,23 +237,29 @@ private static ClusterState createState(final int numNodes, final boolean isLoca\n .build();\n }\n \n+ // Create a non-initialized cluster state\n+ private static ClusterState createNonInitializedState(final int numNodes, final boolean isLocalMaster) {\n+ final ClusterState withoutBlock = createState(numNodes, isLocalMaster, Collections.emptyList());\n+ return ClusterState.builder(withoutBlock)\n+ .blocks(ClusterBlocks.builder().addGlobalBlock(GatewayService.STATE_NOT_RECOVERED_BLOCK).build())\n+ .build();\n+ }\n+\n // Create a modified cluster state from another one, but with some number of indices added and deleted.\n- private static ClusterState nextState(final ClusterState previousState, final boolean changeClusterUUID,\n- final List<String> addedIndices, final List<String> deletedIndices,\n- final int numNodesToRemove) {\n+ private static ClusterState nextState(final ClusterState previousState, final List<Index> addedIndices,\n+ final List<Index> deletedIndices, final int numNodesToRemove) {\n final ClusterState.Builder builder = ClusterState.builder(previousState);\n builder.stateUUID(Strings.randomBase64UUID());\n final MetaData.Builder metaBuilder = MetaData.builder(previousState.metaData());\n- if (changeClusterUUID || addedIndices.size() > 0 || deletedIndices.size() > 0) {\n- // there is some change in metadata cluster state\n- if (changeClusterUUID) {\n- metaBuilder.clusterUUID(Strings.randomBase64UUID());\n- }\n- for (String index : addedIndices) {\n+ if (addedIndices.size() > 0 || deletedIndices.size() > 0) {\n+ for (Index index : addedIndices) {\n metaBuilder.put(createIndexMetadata(index), true);\n }\n- for (String index : deletedIndices) {\n- metaBuilder.remove(index);\n+ for (Index index : deletedIndices) {\n+ metaBuilder.remove(index.getName());\n+ IndexGraveyard.Builder graveyardBuilder = IndexGraveyard.builder(metaBuilder.indexGraveyard());\n+ graveyardBuilder.addTombstone(index);\n+ metaBuilder.indexGraveyard(graveyardBuilder.build());\n }\n builder.metaData(metaBuilder);\n }\n@@ -272,6 +271,7 @@ private static ClusterState nextState(final ClusterState previousState, final bo\n }\n builder.nodes(nodesBuilder);\n }\n+ builder.blocks(ClusterBlocks.builder().build());\n return builder.build();\n }\n \n@@ -320,23 +320,26 @@ private static DiscoveryNode newNode(final String nodeId, Set<DiscoveryNode.Role\n }\n \n // Create the metadata for a cluster state.\n- private static MetaData createMetaData(final List<String> indices) {\n+ private static MetaData createMetaData(final List<Index> indices) {\n final MetaData.Builder builder = MetaData.builder();\n 
builder.clusterUUID(INITIAL_CLUSTER_ID);\n- for (String index : indices) {\n+ for (Index index : indices) {\n builder.put(createIndexMetadata(index), true);\n }\n return builder.build();\n }\n \n // Create the index metadata for a given index.\n- private static IndexMetaData createIndexMetadata(final String index) {\n+ private static IndexMetaData createIndexMetadata(final Index index) {\n return createIndexMetadata(index, 1);\n }\n \n // Create the index metadata for a given index, with the specified version.\n- private static IndexMetaData createIndexMetadata(final String index, final long version) {\n- return IndexMetaData.builder(index)\n+ private static IndexMetaData createIndexMetadata(final Index index, final long version) {\n+ final Settings settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .build();\n+ return IndexMetaData.builder(index.getName())\n .settings(settings)\n .numberOfShards(1)\n .numberOfReplicas(0)\n@@ -355,21 +358,73 @@ private static RoutingTable createRoutingTable(final long version, final MetaDat\n }\n \n // Create a list of indices to add\n- private static List<String> addIndices(final int numIndices) {\n- final List<String> list = new ArrayList<>();\n+ private static List<Index> addIndices(final int numIndices, final String id) {\n+ final List<Index> list = new ArrayList<>();\n for (int i = 0; i < numIndices; i++) {\n- list.add(\"newIdx_\" + i);\n+ list.add(new Index(\"newIdx_\" + id + \"_\" + i, Strings.randomBase64UUID()));\n }\n return list;\n }\n \n // Create a list of indices to delete from a list that already belongs to a particular cluster state.\n- private static List<String> delIndices(final int numIndices, final List<String> currIndices) {\n- final List<String> list = new ArrayList<>();\n+ private static List<Index> delIndices(final int numIndices, final List<Index> currIndices) {\n+ final List<Index> list = new ArrayList<>();\n for (int i = 0; i < numIndices; i++) {\n list.add(currIndices.get(i));\n }\n return list;\n }\n \n+ // execute the indices changes test by generating random index additions and deletions and\n+ // checking the values on the cluster changed event.\n+ private static ClusterState executeIndicesChangesTest(final ClusterState previousState,\n+ final TombstoneDeletionQuantity deletionQuantity) {\n+ final int numAdd = randomIntBetween(0, 5); // add random # of indices to the next cluster state\n+ final List<Index> stateIndices = new ArrayList<>();\n+ for (Iterator<IndexMetaData> iter = previousState.metaData().indices().valuesIt(); iter.hasNext();) {\n+ stateIndices.add(iter.next().getIndex());\n+ }\n+ final int numDel;\n+ switch (deletionQuantity) {\n+ case DELETE_ALL: {\n+ numDel = stateIndices.size();\n+ break;\n+ }\n+ case DELETE_NONE: {\n+ numDel = 0;\n+ break;\n+ }\n+ case DELETE_RANDOM: {\n+ numDel = randomIntBetween(0, Math.max(stateIndices.size() - 1, 0));\n+ break;\n+ }\n+ default: throw new AssertionError(\"Unhandled mode [\" + deletionQuantity + \"]\");\n+ }\n+ final List<Index> addedIndices = addIndices(numAdd, randomAsciiOfLengthBetween(5, 10));\n+ List<Index> delIndices = delIndices(numDel, stateIndices);\n+ final ClusterState newState = nextState(previousState, addedIndices, delIndices, 0);\n+ ClusterChangedEvent event = new ClusterChangedEvent(\"_na_\", newState, previousState);\n+ final List<String> addsFromEvent = event.indicesCreated();\n+ List<Index> delsFromEvent = event.indicesDeleted();\n+ assertThat(new 
HashSet<>(addsFromEvent), equalTo(addedIndices.stream().map(Index::getName).collect(Collectors.toSet())));\n+ assertThat(new HashSet<>(delsFromEvent), equalTo(new HashSet<>(delIndices)));\n+ assertThat(event.metaDataChanged(), equalTo(addedIndices.size() > 0 || delIndices.size() > 0));\n+ final IndexGraveyard newGraveyard = event.state().metaData().indexGraveyard();\n+ final IndexGraveyard oldGraveyard = event.previousState().metaData().indexGraveyard();\n+ assertThat(((IndexGraveyard.IndexGraveyardDiff)newGraveyard.diff(oldGraveyard)).getAdded().size(), equalTo(delIndices.size()));\n+ return newState;\n+ }\n+\n+ private enum TombstoneDeletionQuantity {\n+ DELETE_RANDOM, // delete a random number of tombstones from cluster state (not zero and not all)\n+ DELETE_NONE, // delete none of the tombstones from cluster state\n+ DELETE_ALL; // delete all tombstones from cluster state\n+\n+ static List<TombstoneDeletionQuantity> valuesInRandomizedOrder() {\n+ final List<TombstoneDeletionQuantity> randomOrderQuantities = new ArrayList<>(EnumSet.allOf(TombstoneDeletionQuantity.class));\n+ Collections.shuffle(randomOrderQuantities, random());\n+ return randomOrderQuantities;\n+ }\n+ }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/cluster/ClusterChangedEventTests.java", "status": "modified" }, { "diff": "@@ -24,6 +24,8 @@\n import org.elasticsearch.cluster.block.ClusterBlock;\n import org.elasticsearch.cluster.block.ClusterBlocks;\n import org.elasticsearch.cluster.metadata.AliasMetaData;\n+import org.elasticsearch.cluster.metadata.IndexGraveyard;\n+import org.elasticsearch.cluster.metadata.IndexGraveyardTests;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n@@ -602,12 +604,21 @@ public MetaData.Builder put(MetaData.Builder builder, MetaData.Custom part) {\n \n @Override\n public MetaData.Builder remove(MetaData.Builder builder, String name) {\n- return builder.removeCustom(name);\n+ if (IndexGraveyard.TYPE.equals(name)) {\n+ // there must always be at least an empty graveyard\n+ return builder.indexGraveyard(IndexGraveyard.builder().build());\n+ } else {\n+ return builder.removeCustom(name);\n+ }\n }\n \n @Override\n public MetaData.Custom randomCreate(String name) {\n- return new RepositoriesMetaData();\n+ if (randomBoolean()) {\n+ return new RepositoriesMetaData();\n+ } else {\n+ return IndexGraveyardTests.createRandom();\n+ }\n }\n \n @Override", "filename": "core/src/test/java/org/elasticsearch/cluster/ClusterStateDiffIT.java", "status": "modified" }, { "diff": "@@ -0,0 +1,163 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster.metadata;\n+\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.io.stream.ByteBufferStreamInput;\n+import org.elasticsearch.common.io.stream.BytesStreamOutput;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.io.IOException;\n+import java.nio.ByteBuffer;\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.stream.Collectors;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.lessThan;\n+import static org.hamcrest.Matchers.not;\n+\n+/**\n+ * Tests for the {@link IndexGraveyard} class\n+ */\n+public class IndexGraveyardTests extends ESTestCase {\n+\n+ public void testEquals() {\n+ final IndexGraveyard graveyard = createRandom();\n+ assertThat(graveyard, equalTo(IndexGraveyard.builder(graveyard).build()));\n+ final IndexGraveyard.Builder newGraveyard = IndexGraveyard.builder(graveyard);\n+ newGraveyard.addTombstone(new Index(randomAsciiOfLengthBetween(4, 15), Strings.randomBase64UUID()));\n+ assertThat(newGraveyard.build(), not(graveyard));\n+ }\n+\n+ public void testSerialization() throws IOException {\n+ final IndexGraveyard graveyard = createRandom();\n+ final BytesStreamOutput out = new BytesStreamOutput();\n+ graveyard.writeTo(out);\n+ final ByteBufferStreamInput in = new ByteBufferStreamInput(ByteBuffer.wrap(out.bytes().toBytes()));\n+ assertThat(IndexGraveyard.fromStream(in), equalTo(graveyard));\n+ }\n+\n+ public void testXContent() throws IOException {\n+ final IndexGraveyard graveyard = createRandom();\n+ final XContentBuilder builder = JsonXContent.contentBuilder();\n+ builder.startObject();\n+ graveyard.toXContent(builder, ToXContent.EMPTY_PARAMS);\n+ builder.endObject();\n+ XContentParser parser = XContentType.JSON.xContent().createParser(builder.bytes());\n+ parser.nextToken(); // the beginning of the parser\n+ assertThat(IndexGraveyard.PROTO.fromXContent(parser), equalTo(graveyard));\n+ }\n+\n+ public void testAddTombstones() {\n+ final IndexGraveyard graveyard1 = createRandom();\n+ final IndexGraveyard.Builder graveyardBuidler = IndexGraveyard.builder(graveyard1);\n+ final int numAdds = randomIntBetween(0, 4);\n+ for (int j = 0; j < numAdds; j++) {\n+ graveyardBuidler.addTombstone(new Index(\"nidx-\" + j, Strings.randomBase64UUID()));\n+ }\n+ final IndexGraveyard graveyard2 = graveyardBuidler.build();\n+ if (numAdds == 0) {\n+ assertThat(graveyard2, equalTo(graveyard1));\n+ } else {\n+ assertThat(graveyard2, not(graveyard1));\n+ assertThat(graveyard1.getTombstones().size(), lessThan(graveyard2.getTombstones().size()));\n+ assertThat(Collections.indexOfSubList(graveyard2.getTombstones(), graveyard1.getTombstones()), equalTo(0));\n+ }\n+ }\n+\n+ public void testPurge() {\n+ // try with max tombstones as some positive integer\n+ executePurgeTestWithMaxTombstones(randomIntBetween(1, 20));\n+ // try with max tombstones as the default\n+ 
executePurgeTestWithMaxTombstones(IndexGraveyard.SETTING_MAX_TOMBSTONES.getDefault(Settings.EMPTY));\n+ }\n+\n+ public void testDiffs() {\n+ IndexGraveyard.Builder graveyardBuilder = IndexGraveyard.builder();\n+ final int numToPurge = randomIntBetween(0, 4);\n+ final List<Index> removals = new ArrayList<>();\n+ for (int i = 0; i < numToPurge; i++) {\n+ final Index indexToRemove = new Index(\"ridx-\" + i, Strings.randomBase64UUID());\n+ graveyardBuilder.addTombstone(indexToRemove);\n+ removals.add(indexToRemove);\n+ }\n+ final int numTombstones = randomIntBetween(0, 4);\n+ for (int i = 0; i < numTombstones; i++) {\n+ graveyardBuilder.addTombstone(new Index(\"idx-\" + i, Strings.randomBase64UUID()));\n+ }\n+ final IndexGraveyard graveyard1 = graveyardBuilder.build();\n+ graveyardBuilder = IndexGraveyard.builder(graveyard1);\n+ final int numToAdd = randomIntBetween(0, 4);\n+ final List<Index> additions = new ArrayList<>();\n+ for (int i = 0; i < numToAdd; i++) {\n+ final Index indexToAdd = new Index(\"nidx-\" + i, Strings.randomBase64UUID());\n+ graveyardBuilder.addTombstone(indexToAdd);\n+ additions.add(indexToAdd);\n+ }\n+ final IndexGraveyard graveyard2 = graveyardBuilder.build(settingsWithMaxTombstones(numTombstones + numToAdd));\n+ final int numPurged = graveyardBuilder.getNumPurged();\n+ assertThat(numPurged, equalTo(numToPurge));\n+ final IndexGraveyard.IndexGraveyardDiff diff = new IndexGraveyard.IndexGraveyardDiff(graveyard1, graveyard2);\n+ final List<Index> actualAdded = diff.getAdded().stream().map(t -> t.getIndex()).collect(Collectors.toList());\n+ assertThat(new HashSet<>(actualAdded), equalTo(new HashSet<>(additions)));\n+ assertThat(diff.getRemovedCount(), equalTo(removals.size()));\n+ }\n+\n+ public static IndexGraveyard createRandom() {\n+ final IndexGraveyard.Builder graveyard = IndexGraveyard.builder();\n+ final int numTombstones = randomIntBetween(0, 4);\n+ for (int i = 0; i < numTombstones; i++) {\n+ graveyard.addTombstone(new Index(\"idx-\" + i, Strings.randomBase64UUID()));\n+ }\n+ return graveyard.build();\n+ }\n+\n+ private void executePurgeTestWithMaxTombstones(final int maxTombstones) {\n+ final int numExtra = randomIntBetween(1, 10);\n+ final IndexGraveyard.Builder graveyardBuilder = createWithDeletions(maxTombstones + numExtra);\n+ final IndexGraveyard graveyard = graveyardBuilder.build(settingsWithMaxTombstones(maxTombstones));\n+ final int numPurged = graveyardBuilder.getNumPurged();\n+ assertThat(numPurged, equalTo(numExtra));\n+ assertThat(graveyard.getTombstones().size(), equalTo(maxTombstones));\n+ }\n+\n+ private static IndexGraveyard.Builder createWithDeletions(final int numAdd) {\n+ final IndexGraveyard.Builder graveyard = IndexGraveyard.builder();\n+ for (int i = 0; i < numAdd; i++) {\n+ graveyard.addTombstone(new Index(\"idx-\" + i, Strings.randomBase64UUID()));\n+ }\n+ return graveyard;\n+ }\n+\n+ private static Settings settingsWithMaxTombstones(final int maxTombstones) {\n+ return Settings.builder().put(IndexGraveyard.SETTING_MAX_TOMBSTONES.getKey(), maxTombstones).build();\n+ }\n+\n+}", "filename": "core/src/test/java/org/elasticsearch/cluster/metadata/IndexGraveyardTests.java", "status": "added" }, { "diff": "@@ -20,13 +20,21 @@\n package org.elasticsearch.cluster.metadata;\n \n import org.elasticsearch.Version;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.io.stream.ByteBufferStreamInput;\n+import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import 
org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.IOException;\n+import java.nio.ByteBuffer;\n \n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.is;\n@@ -147,4 +155,38 @@ public void testUnknownFieldIndexMetaData() throws IOException {\n assertEquals(\"Unexpected field [random]\", e.getMessage());\n }\n }\n+\n+ public void testMetaDataGlobalStateChangesOnIndexDeletions() {\n+ IndexGraveyard.Builder builder = IndexGraveyard.builder();\n+ builder.addTombstone(new Index(\"idx1\", Strings.randomBase64UUID()));\n+ final MetaData metaData1 = MetaData.builder().indexGraveyard(builder.build()).build();\n+ builder = IndexGraveyard.builder(metaData1.indexGraveyard());\n+ builder.addTombstone(new Index(\"idx2\", Strings.randomBase64UUID()));\n+ final MetaData metaData2 = MetaData.builder(metaData1).indexGraveyard(builder.build()).build();\n+ assertFalse(\"metadata not equal after adding index deletions\", MetaData.isGlobalStateEquals(metaData1, metaData2));\n+ final MetaData metaData3 = MetaData.builder(metaData2).build();\n+ assertTrue(\"metadata equal when not adding index deletions\", MetaData.isGlobalStateEquals(metaData2, metaData3));\n+ }\n+\n+ public void testXContentWithIndexGraveyard() throws IOException {\n+ final IndexGraveyard graveyard = IndexGraveyardTests.createRandom();\n+ final MetaData originalMeta = MetaData.builder().indexGraveyard(graveyard).build();\n+ final XContentBuilder builder = JsonXContent.contentBuilder();\n+ builder.startObject();\n+ originalMeta.toXContent(builder, ToXContent.EMPTY_PARAMS);\n+ builder.endObject();\n+ XContentParser parser = XContentType.JSON.xContent().createParser(builder.bytes());\n+ final MetaData fromXContentMeta = MetaData.PROTO.fromXContent(parser, null);\n+ assertThat(fromXContentMeta.indexGraveyard(), equalTo(originalMeta.indexGraveyard()));\n+ }\n+\n+ public void testSerializationWithIndexGraveyard() throws IOException {\n+ final IndexGraveyard graveyard = IndexGraveyardTests.createRandom();\n+ final MetaData originalMeta = MetaData.builder().indexGraveyard(graveyard).build();\n+ final BytesStreamOutput out = new BytesStreamOutput();\n+ originalMeta.writeTo(out);\n+ final ByteBufferStreamInput in = new ByteBufferStreamInput(ByteBuffer.wrap(out.bytes().toBytes()));\n+ final MetaData fromStreamMeta = MetaData.PROTO.readFrom(in);\n+ assertThat(fromStreamMeta.indexGraveyard(), equalTo(fromStreamMeta.indexGraveyard()));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataTests.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.action.get.GetResponse;\n+import org.elasticsearch.client.Client;\n import org.elasticsearch.client.Requests;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -37,6 +38,7 @@\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import 
org.elasticsearch.discovery.zen.elect.ElectMasterService;\n+import org.elasticsearch.env.Environment;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.indices.IndexClosedException;\n@@ -46,6 +48,11 @@\n import org.elasticsearch.test.ESIntegTestCase.Scope;\n import org.elasticsearch.test.InternalTestCluster.RestartCallback;\n \n+import java.io.IOException;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.util.List;\n+\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n@@ -272,66 +279,6 @@ public void testTwoNodesSingleDoc() throws Exception {\n }\n }\n \n- public void testDanglingIndicesConflictWithAlias() throws Exception {\n- logger.info(\"--> starting two nodes\");\n- internalCluster().startNodesAsync(2).get();\n-\n- logger.info(\"--> indexing a simple document\");\n- client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field1\", \"value1\").setRefresh(true).execute().actionGet();\n-\n- logger.info(\"--> waiting for green status\");\n- ensureGreen();\n-\n- logger.info(\"--> verify 1 doc in the index\");\n- for (int i = 0; i < 10; i++) {\n- assertHitCount(client().prepareSearch().setQuery(matchAllQuery()).get(), 1L);\n- }\n- assertThat(client().prepareGet(\"test\", \"type1\", \"1\").execute().actionGet().isExists(), equalTo(true));\n-\n- internalCluster().stopRandomNonMasterNode();\n-\n- // wait for master to processed node left (so delete won't timeout waiting for it)\n- assertFalse(client().admin().cluster().prepareHealth().setWaitForNodes(\"1\").get().isTimedOut());\n-\n- logger.info(\"--> deleting index\");\n- assertAcked(client().admin().indices().prepareDelete(\"test\"));\n-\n- index(\"test2\", \"type1\", \"2\", \"{}\");\n-\n- logger.info(\"--> creating index with an alias\");\n- assertAcked(client().admin().indices().prepareAliases().addAlias(\"test2\", \"test\"));\n-\n- logger.info(\"--> starting node back up\");\n- internalCluster().startNode();\n-\n- ensureGreen();\n-\n- // make sure that any other events were processed\n- assertFalse(client().admin().cluster().prepareHealth().setWaitForRelocatingShards(0).setWaitForEvents(Priority.LANGUID).get()\n- .isTimedOut());\n-\n- logger.info(\"--> verify we read the right thing through alias\");\n- assertThat(client().prepareGet(\"test\", \"type1\", \"2\").execute().actionGet().isExists(), equalTo(true));\n-\n- logger.info(\"--> deleting alias\");\n- assertAcked(client().admin().indices().prepareAliases().removeAlias(\"test2\", \"test\"));\n-\n- logger.info(\"--> waiting for dangling index to be imported\");\n-\n- assertBusy(new Runnable() {\n- @Override\n- public void run() {\n- assertTrue(client().admin().indices().prepareExists(\"test\").execute().actionGet().isExists());\n- }\n- });\n-\n- ensureGreen();\n-\n- logger.info(\"--> verifying dangling index contains doc\");\n-\n- assertThat(client().prepareGet(\"test\", \"type1\", \"1\").execute().actionGet().isExists(), equalTo(true));\n- }\n-\n public void testDanglingIndices() throws Exception {\n logger.info(\"--> starting two nodes\");\n \n@@ -382,6 +329,76 @@ public Settings onNodeStopped(String nodeName) throws Exception {\n assertThat(client().prepareGet(\"test\", \"type1\", \"1\").execute().actionGet().isExists(), equalTo(true));\n }\n \n+ /**\n+ * This test ensures that when an 
index deletion takes place while a node is offline, when that\n+ * node rejoins the cluster, it deletes the index locally instead of importing it as a dangling index.\n+ */\n+ public void testIndexDeletionWhenNodeRejoins() throws Exception {\n+ final String indexName = \"test-index-del-on-node-rejoin-idx\";\n+ final int numNodes = 2;\n+\n+ final List<String> nodes;\n+ if (randomBoolean()) {\n+ // test with a regular index\n+ logger.info(\"--> starting a cluster with \" + numNodes + \" nodes\");\n+ nodes = internalCluster().startNodesAsync(numNodes).get();\n+ logger.info(\"--> create an index\");\n+ createIndex(indexName);\n+ } else {\n+ // test with a shadow replica index\n+ final Path dataPath = createTempDir();\n+ logger.info(\"--> created temp data path for shadow replicas [\" + dataPath + \"]\");\n+ logger.info(\"--> starting a cluster with \" + numNodes + \" nodes\");\n+ final Settings nodeSettings = Settings.builder()\n+ .put(\"node.add_id_to_custom_path\", false)\n+ .put(Environment.PATH_SHARED_DATA_SETTING.getKey(), dataPath.toString())\n+ .put(\"index.store.fs.fs_lock\", randomFrom(\"native\", \"simple\"))\n+ .build();\n+ nodes = internalCluster().startNodesAsync(numNodes, nodeSettings).get();\n+ logger.info(\"--> create a shadow replica index\");\n+ createShadowReplicaIndex(indexName, dataPath, numNodes - 1);\n+ }\n+\n+ logger.info(\"--> waiting for green status\");\n+ ensureGreen();\n+ final String indexUUID = resolveIndex(indexName).getUUID();\n+\n+ logger.info(\"--> restart a random date node, deleting the index in between stopping and restarting\");\n+ internalCluster().restartRandomDataNode(new RestartCallback() {\n+ @Override\n+ public Settings onNodeStopped(final String nodeName) throws Exception {\n+ nodes.remove(nodeName);\n+ logger.info(\"--> stopped node[{}], remaining nodes {}\", nodeName, nodes);\n+ assert nodes.size() > 0;\n+ final String otherNode = nodes.get(0);\n+ logger.info(\"--> delete index and verify it is deleted\");\n+ final Client client = client(otherNode);\n+ client.admin().indices().prepareDelete(indexName).execute().actionGet();\n+ assertFalse(client.admin().indices().prepareExists(indexName).execute().actionGet().isExists());\n+ return super.onNodeStopped(nodeName);\n+ }\n+ });\n+\n+ logger.info(\"--> wait until all nodes are back online\");\n+ client().admin().cluster().health(Requests.clusterHealthRequest().waitForEvents(Priority.LANGUID)\n+ .waitForNodes(Integer.toString(numNodes))).actionGet();\n+\n+ logger.info(\"--> waiting for green status\");\n+ ensureGreen();\n+\n+ logger.info(\"--> verify that the deleted index is removed from the cluster and not reimported as dangling by the restarted node\");\n+ assertFalse(client().admin().indices().prepareExists(indexName).execute().actionGet().isExists());\n+ assertBusy(() -> {\n+ final NodeEnvironment nodeEnv = internalCluster().getInstance(NodeEnvironment.class);\n+ try {\n+ assertFalse(\"index folder \" + indexUUID + \" should be deleted\", nodeEnv.availableIndexFolders().contains(indexUUID));\n+ } catch (IOException e) {\n+ logger.error(\"Unable to retrieve available index folders from the node\", e);\n+ fail(\"Unable to retrieve available index folders from the node\");\n+ }\n+ });\n+ }\n+\n /**\n * This test really tests worst case scenario where we have a broken setting or any setting that prevents an index from being\n * allocated in our metadata that we recover. 
In that case we now have the ability to check the index on local recovery from disk\n@@ -540,4 +557,23 @@ public void testArchiveBrokenClusterSettings() throws Exception {\n + ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey()));\n assertHitCount(client().prepareSearch().setQuery(matchAllQuery()).get(), 1L);\n }\n+\n+\n+ /**\n+ * Creates a shadow replica index and asserts that the index creation was acknowledged.\n+ * Can only be invoked on a cluster where each node has been configured with shared data\n+ * paths and the other necessary settings for shadow replicas.\n+ */\n+ private void createShadowReplicaIndex(final String name, final Path dataPath, final int numReplicas) {\n+ assert Files.exists(dataPath);\n+ assert numReplicas >= 0;\n+ final Settings idxSettings = Settings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, numReplicas)\n+ .put(IndexMetaData.SETTING_DATA_PATH, dataPath.toAbsolutePath().toString())\n+ .put(IndexMetaData.SETTING_SHADOW_REPLICAS, true)\n+ .build();\n+ assertAcked(prepareCreate(name).setSettings(idxSettings).get());\n+ }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/gateway/GatewayIndexStateIT.java", "status": "modified" }, { "diff": "@@ -29,15 +29,18 @@\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.metadata.IndexGraveyard;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.IOException;\n@@ -283,6 +286,9 @@ public void testLoadState() throws IOException {\n assertThat(deserialized.getNumberOfShards(), equalTo(original.getNumberOfShards()));\n }\n \n+ // make sure the index tombstones are the same too\n+ assertThat(loadedMetaData.indexGraveyard(), equalTo(latestMetaData.indexGraveyard()));\n+\n // now corrupt all the latest ones and make sure we fail to load the state\n if (numStates > numLegacy) {\n for (int i = 0; i < dirs.length; i++) {\n@@ -322,6 +328,12 @@ private MetaData randomMeta() throws IOException {\n for (int i = 0; i < numIndices; i++) {\n mdBuilder.put(indexBuilder(randomAsciiOfLength(10) + \"idx-\"+i));\n }\n+ int numDelIndices = randomIntBetween(0, 5);\n+ final IndexGraveyard.Builder graveyard = IndexGraveyard.builder();\n+ for (int i = 0; i < numDelIndices; i++) {\n+ graveyard.addTombstone(new Index(randomAsciiOfLength(10) + \"del-idx-\" + i, Strings.randomBase64UUID()));\n+ }\n+ mdBuilder.indexGraveyard(graveyard.build());\n return mdBuilder.build();\n }\n ", "filename": "core/src/test/java/org/elasticsearch/gateway/MetaDataStateFormatTests.java", "status": "modified" }, { "diff": "@@ -21,10 +21,18 @@\n \n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import 
org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.test.ESTestCase;\n \n+import java.io.IOException;\n+\n import static org.apache.lucene.util.TestUtil.randomSimpleString;\n import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.not;\n \n public class IndexTests extends ESTestCase {\n@@ -41,4 +49,15 @@ public void testToString() {\n assertThat(random.toString(), containsString(random.getUUID()));\n }\n }\n+\n+ public void testXContent() throws IOException {\n+ final String name = randomAsciiOfLengthBetween(4, 15);\n+ final String uuid = Strings.randomBase64UUID();\n+ final Index original = new Index(name, uuid);\n+ final XContentBuilder builder = JsonXContent.contentBuilder();\n+ original.toXContent(builder, ToXContent.EMPTY_PARAMS);\n+ XContentParser parser = XContentType.JSON.xContent().createParser(builder.bytes());\n+ parser.nextToken(); // the beginning of the parser\n+ assertThat(Index.fromXContent(parser), equalTo(original));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/IndexTests.java", "status": "modified" }, { "diff": "@@ -20,24 +20,38 @@\n \n import org.apache.lucene.store.LockObtainFailedException;\n import org.elasticsearch.Version;\n+import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.metadata.AliasAction;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.service.ClusterService;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.io.FileSystemUtils;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.gateway.GatewayMetaState;\n+import org.elasticsearch.gateway.LocalAllocateDangledIndices;\n+import org.elasticsearch.gateway.MetaStateService;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.ShardPath;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n import org.elasticsearch.test.IndexSettingsModule;\n \n+import java.io.IOException;\n+import java.util.Arrays;\n+import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n+import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.not;\n \n public class IndicesServiceTests extends ESSingleNodeTestCase {\n \n@@ -54,9 +68,8 @@ protected boolean resetNodeAfterTest() {\n return true;\n }\n \n- public void testCanDeleteIndexContent() {\n- IndicesService indicesService = getIndicesService();\n-\n+ public void testCanDeleteIndexContent() throws IOException {\n+ final IndicesService indicesService = getIndicesService();\n IndexSettings idxSettings = IndexSettingsModule.newIndexSettings(\"test\", Settings.builder()\n .put(IndexMetaData.SETTING_SHADOW_REPLICAS, true)\n 
.put(IndexMetaData.SETTING_DATA_PATH, \"/foo/bar\")\n@@ -194,4 +207,94 @@ public void testPendingTasks() throws Exception {\n \n }\n \n+ public void testVerifyIfIndexContentDeleted() throws Exception {\n+ final Index index = new Index(\"test\", Strings.randomBase64UUID());\n+ final IndicesService indicesService = getIndicesService();\n+ final NodeEnvironment nodeEnv = getNodeEnvironment();\n+ final MetaStateService metaStateService = getInstanceFromNode(MetaStateService.class);\n+\n+ final ClusterService clusterService = getInstanceFromNode(ClusterService.class);\n+ final Settings idxSettings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .build();\n+ final IndexMetaData indexMetaData = new IndexMetaData.Builder(index.getName())\n+ .settings(idxSettings)\n+ .numberOfShards(1)\n+ .numberOfReplicas(0)\n+ .build();\n+ metaStateService.writeIndex(\"test index being created\", indexMetaData);\n+ final MetaData metaData = MetaData.builder(clusterService.state().metaData()).put(indexMetaData, true).build();\n+ final ClusterState csWithIndex = new ClusterState.Builder(clusterService.state()).metaData(metaData).build();\n+ try {\n+ indicesService.verifyIndexIsDeleted(index, csWithIndex);\n+ fail(\"Should not be able to delete index contents when the index is part of the cluster state.\");\n+ } catch (IllegalStateException e) {\n+ assertThat(e.getMessage(), containsString(\"Cannot delete index\"));\n+ }\n+\n+ final ClusterState withoutIndex = new ClusterState.Builder(csWithIndex)\n+ .metaData(MetaData.builder(csWithIndex.metaData()).remove(index.getName()))\n+ .build();\n+ indicesService.verifyIndexIsDeleted(index, withoutIndex);\n+ assertFalse(\"index files should be deleted\", FileSystemUtils.exists(nodeEnv.indexPaths(index)));\n+ }\n+\n+ public void testDanglingIndicesWithAliasConflict() throws Exception {\n+ final String indexName = \"test-idx1\";\n+ final String alias = \"test-alias\";\n+ final IndicesService indicesService = getIndicesService();\n+ final ClusterService clusterService = getInstanceFromNode(ClusterService.class);\n+ final IndexService test = createIndex(indexName);\n+\n+ // create the alias for the index\n+ AliasAction action = new AliasAction(AliasAction.Type.ADD, indexName, alias);\n+ IndicesAliasesRequest request = new IndicesAliasesRequest().addAliasAction(action);\n+ client().admin().indices().aliases(request).actionGet();\n+ final ClusterState originalState = clusterService.state();\n+\n+ // try to import a dangling index with the same name as the alias, it should fail\n+ final LocalAllocateDangledIndices dangling = getInstanceFromNode(LocalAllocateDangledIndices.class);\n+ final Settings idxSettings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, Strings.randomBase64UUID())\n+ .build();\n+ final IndexMetaData indexMetaData = new IndexMetaData.Builder(alias)\n+ .settings(idxSettings)\n+ .numberOfShards(1)\n+ .numberOfReplicas(0)\n+ .build();\n+ DanglingListener listener = new DanglingListener();\n+ dangling.allocateDangled(Arrays.asList(indexMetaData), listener);\n+ listener.latch.await();\n+ assertThat(clusterService.state(), equalTo(originalState));\n+\n+ // remove the alias\n+ action = new AliasAction(AliasAction.Type.REMOVE, indexName, alias);\n+ request = new IndicesAliasesRequest().addAliasAction(action);\n+ client().admin().indices().aliases(request).actionGet();\n+\n+ // now try importing a 
dangling index with the same name as the alias, it should succeed.\n+ listener = new DanglingListener();\n+ dangling.allocateDangled(Arrays.asList(indexMetaData), listener);\n+ listener.latch.await();\n+ assertThat(clusterService.state(), not(originalState));\n+ assertNotNull(clusterService.state().getMetaData().index(alias));\n+\n+ // cleanup\n+ indicesService.deleteIndex(test.index(), \"finished with test\");\n+ }\n+\n+ private static class DanglingListener implements LocalAllocateDangledIndices.Listener {\n+ final CountDownLatch latch = new CountDownLatch(1);\n+\n+ @Override\n+ public void onResponse(LocalAllocateDangledIndices.AllocateDangledResponse response) {\n+ latch.countDown();\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable e) {\n+ latch.countDown();\n+ }\n+ }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/indices/IndicesServiceTests.java", "status": "modified" }, { "diff": "@@ -826,6 +826,10 @@ void restart(RestartCallback callback) throws Exception {\n IOUtils.rm(nodeEnv.nodeDataPaths());\n }\n }\n+ startNewNode(newSettings);\n+ }\n+\n+ private void startNewNode(final Settings newSettings) {\n final long newIdSeed = DiscoveryNodeService.NODE_ID_SEED_SETTING.get(node.settings()) + 1; // use a new seed to make sure we have new node id\n Settings finalSettings = Settings.builder().put(node.settings()).put(newSettings).put(DiscoveryNodeService.NODE_ID_SEED_SETTING.getKey(), newIdSeed).build();\n Collection<Class<? extends Plugin>> plugins = node.getPlugins();\n@@ -1189,7 +1193,7 @@ public synchronized void stopCurrentMasterNode() throws IOException {\n }\n \n /**\n- * Stops the any of the current nodes but not the master node.\n+ * Stops any of the current nodes but not the master node.\n */\n public void stopRandomNonMasterNode() throws IOException {\n NodeAndClient nodeAndClient = getRandomNodeAndClient(new MasterNodePredicate(getMasterName()).negate());", "filename": "test/framework/src/main/java/org/elasticsearch/test/InternalTestCluster.java", "status": "modified" } ] }
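
Aside on the tombstone tests above: the IndexGraveyard builder API they exercise is small enough to summarize in one self-contained sketch. Everything used below (builder(), addTombstone(), build(Settings), getNumPurged(), getTombstones(), SETTING_MAX_TOMBSTONES, and the Index/Strings helpers) is taken from the test diffs; the class name and the main() wrapper are only illustrative and not part of the change.

```java
import org.elasticsearch.cluster.metadata.IndexGraveyard;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.Index;

public class IndexGraveyardSketch {
    public static void main(String[] args) {
        // Record one tombstone per deleted index (name + UUID).
        IndexGraveyard.Builder builder = IndexGraveyard.builder();
        for (int i = 0; i < 3; i++) {
            builder.addTombstone(new Index("idx-" + i, Strings.randomBase64UUID()));
        }

        // Cap the graveyard at two tombstones; building purges the oldest entries beyond the cap.
        Settings settings = Settings.builder()
                .put(IndexGraveyard.SETTING_MAX_TOMBSTONES.getKey(), 2)
                .build();
        IndexGraveyard graveyard = builder.build(settings);

        System.out.println("purged: " + builder.getNumPurged());           // expected: 1
        System.out.println("kept:   " + graveyard.getTombstones().size()); // expected: 2
    }
}
```

As the MetaDataStateFormatTests and GatewayIndexStateIT changes show, the graveyard travels with the cluster MetaData, which is what lets a node that was offline during an index deletion tell a deleted index apart from a dangling one when it rejoins.
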
{ "body": "Currently, `FiltersAggregator::buildEmptyAggregation` does not check if `other_bucket_key` was set on the request. This results in a missing `other` bucket in the response when no documents were matched. Is this intended? \n\nHere's a snippet from a date histogram agg with a filters agg.\n\n```\n{\n \"key_as_string\": \"2015-10-01 00:00:00.000\",\n \"key\": 1443657600000,\n \"doc_count\": 0,\n \"ex\": {\n \"buckets\": {\n \"a: {\n \"doc_count\": 0\n },\n \"b\": {\n \"doc_count\": 0\n },\n \"c\": {\n \"doc_count\": 0\n }\n }\n }\n},\n{\n \"key_as_string\": \"2015-11-01 00:00:00.000\",\n \"key\": 1446336000000,\n \"doc_count\": 5,\n \"ex\": {\n \"buckets\": {\n \"a\": {\n \"doc_count\": 0\n },\n \"b\": {\n \"doc_count\": 3\n },\n \"c\": {\n \"doc_count\": 2\n },\n \"other\": {\n \"doc_count\": 0\n }\n }\n }\n}\n\n\n\n```\n", "comments": [ { "body": "This is a bug and should be an easy fix. Thanks for raising this @pjo256 \n", "created_at": "2016-03-22T15:45:11Z" }, { "body": "Sorry @pjo256 I didn't see your PR before raising my own. Thanks for contributing, your PR looks good so I'll merge that one :smile: \n", "created_at": "2016-03-23T09:17:59Z" }, { "body": "@colings86 Thanks :smiley: \n", "created_at": "2016-03-23T15:59:42Z" } ], "number": 16546, "title": "Missing other bucket in FiltersAggregation" }
{ "body": "Closes #16546 \n", "number": 17264, "review_comments": [], "title": "Setting 'other' bucket on empty aggregation" }
{ "commits": [ { "message": "Setting 'other' bucket on empty aggregation" } ], "files": [ { "diff": "@@ -194,6 +194,12 @@ public InternalAggregation buildEmptyAggregation() {\n InternalFilters.InternalBucket bucket = new InternalFilters.InternalBucket(keys[i], 0, subAggs, keyed);\n buckets.add(bucket);\n }\n+\n+ if (showOtherBucket) {\n+ InternalFilters.InternalBucket bucket = new InternalFilters.InternalBucket(otherBucketKey, 0, subAggs, keyed);\n+ buckets.add(bucket);\n+ }\n+\n return new InternalFilters(name, buckets, keyed, pipelineAggregators(), metaData());\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregator.java", "status": "modified" }, { "diff": "@@ -435,4 +435,26 @@ public void testOtherWithSubAggregation() throws Exception {\n assertThat((double) propertiesCounts[2], equalTo((double) sum / numOtherDocs));\n }\n \n+ public void testEmptyAggregationWithOtherBucket() throws Exception {\n+ SearchResponse searchResponse = client().prepareSearch(\"empty_bucket_idx\")\n+ .setQuery(matchAllQuery())\n+ .addAggregation(histogram(\"histo\").field(\"value\").interval(1L).minDocCount(0)\n+ .subAggregation(filters(\"filters\", new KeyedFilter(\"foo\", matchAllQuery())).otherBucket(true).otherBucketKey(\"bar\")))\n+ .execute().actionGet();\n+\n+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(2L));\n+ Histogram histo = searchResponse.getAggregations().get(\"histo\");\n+ assertThat(histo, Matchers.notNullValue());\n+ Histogram.Bucket bucket = histo.getBuckets().get(1);\n+ assertThat(bucket, Matchers.notNullValue());\n+\n+ Filters filters = bucket.getAggregations().get(\"filters\");\n+ assertThat(filters, notNullValue());\n+\n+ Filters.Bucket other = filters.getBucketByKey(\"bar\");\n+ assertThat(other, Matchers.notNullValue());\n+ assertThat(other.getKeyAsString(), equalTo(\"bar\"));\n+ assertThat(other.getDocCount(), is(0L));\n+ }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/FiltersIT.java", "status": "modified" } ] }
{ "body": "Not sure if I am missing something here (e.g. incorrect format of dictionary file? see below) ,however Loading a dictionary file for `dictionary_decompounder` as documented in our [docs](https://www.elastic.co/guide/en/elasticsearch/reference/2.2/analysis-compound-word-tokenfilter.html#analysis-compound-word-tokenfilter) fails.\n\nES version\n\n```\n{\n \"name\": \"Jean Grey\",\n \"cluster_name\": \"elasticsearch\",\n \"version\": {\n \"number\": \"2.2.1\",\n \"build_hash\": \"d045fc29d1932bce18b2e65ab8b297fbf6cd41a1\",\n \"build_timestamp\": \"2016-03-09T09:38:54Z\",\n \"build_snapshot\": false,\n \"lucene_version\": \"5.4.1\"\n },\n \"tagline\": \"You Know, for Search\"\n}\n```\n\nRepro steps\n\n1) place dictionary file and FOP XML hyphenation pattern file under $ES_HOME/config\n2) launch\n\n```\nPUT my_index\n{\n \"settings\": {\n \"index\": {\n \"analysis\": {\n \"analyzer\": {\n \"myAnalyzer2\": {\n \"type\": \"custom\",\n \"tokenizer\": \"standard\",\n \"filter\": [\n \"myTokenFilter2\"\n ]\n }\n },\n \"filter\": {\n \"myTokenFilter2\": {\n \"type\": \"hyphenation_decompounder\",\n \"hyphenation_patterns_path\": \"de.xml\",\n \"word_list_path\": \"germany.txt\",\n \"max_subword_size\": 22\n }\n }\n }\n }\n },\n \"mappings\": {\n \"type1\": {\n \"properties\": {\n \"field1\": {\n \"type\": \"string\",\n \"anlyzer\": \"myAnalyzer2\"\n }\n }\n }\n }\n} \n```\n\nResponse\n\n```\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"index_creation_exception\",\n \"reason\": \"failed to create index\"\n }\n ],\n \"type\": \"illegal_argument_exception\",\n \"reason\": \"IOException while reading word_list_path: Input length = 1\"\n },\n \"status\": 400\n}\n```\n\nDictionary file at http://www.md5this.com/wordlists/dictionary_german.zip\n\n```\nAntonios-MacBook-Air:elasticsearch-2.2.1 abonuccelli$ ls -alrth config/germany.txt \n-rw-r--r--@ 1 abonuccelli wheel 20M Mar 21 16:13 config/germany.txt\nAntonios-MacBook-Air:elasticsearch-2.2.1 abonuccelli$ head config/germany.txt && tail config/germany.txt && wc -l config/germany.txt \n00brucellosis\n00faa\n00kiribati\n00mag\n00murree\n00whitebait\n01\n013\n016\n019\n?ppigeren\n?ppigerer\n?ppigeres\n?ppiges\n?ppigkeit\n?ppigste\n?ppigstem\n?ppigsten\n?ppigster\n?ppigstes\n 1744388 config/germany.txt\n```\n\nFOP XML hyphenation pattern file downloaded from site referenced in docs.https://sourceforge.net/projects/offo/files/offo-hyphenation/1.2/offo-hyphenation_v1.2.zip/download\n\nFull Trace Exception (9 More...swallowed?)\n\n```\nelasticsearch.log-[2016-03-21 16:25:38,683][DEBUG][cluster.service ] [Straw Man] cluster state update task [create-index [my_index], cause [api]] failed\nelasticsearch.log:[my_index] IndexCreationException[failed to create index]; nested: IllegalArgumentException[IOException while reading word_list_path: Input length = 1];\nelasticsearch.log- at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:360)\nelasticsearch.log- at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$1.execute(MetaDataCreateIndexService.java:309)\nelasticsearch.log- at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:45)\nelasticsearch.log- at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:458)\nelasticsearch.log- at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:762)\nelasticsearch.log- at 
org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\nelasticsearch.log- at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\nelasticsearch.log- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\nelasticsearch.log- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\nelasticsearch.log- at java.lang.Thread.run(Thread.java:745)\nelasticsearch.log:Caused by: java.lang.IllegalArgumentException: IOException while reading word_list_path: Input length = 1\nelasticsearch.log- at org.elasticsearch.index.analysis.Analysis.getWordList(Analysis.java:241)\nelasticsearch.log- at org.elasticsearch.index.analysis.Analysis.getWordSet(Analysis.java:209)\nelasticsearch.log- at org.elasticsearch.index.analysis.compound.AbstractCompoundWordTokenFilterFactory.<init>(AbstractCompoundWordTokenFilterFactory.java:49)\nelasticsearch.log- at org.elasticsearch.index.analysis.compound.HyphenationCompoundWordTokenFilterFactory.<init>(HyphenationCompoundWordTokenFilterFactory.java:52)\nelasticsearch.log- at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown Source)\nelasticsearch.log- at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\nelasticsearch.log- at java.lang.reflect.Constructor.newInstance(Constructor.java:408)\nelasticsearch.log- at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:50)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\nelasticsearch.log- at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:54)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl$5$1.call(InjectorImpl.java:828)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:887)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl$5.get(InjectorImpl.java:823)\nelasticsearch.log- at org.elasticsearch.common.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:236)\nelasticsearch.log- at com.sun.proxy.$Proxy19.create(Unknown Source)\nelasticsearch.log- at org.elasticsearch.index.analysis.AnalysisService.<init>(AnalysisService.java:161)\nelasticsearch.log- at org.elasticsearch.index.analysis.AnalysisService.<init>(AnalysisService.java:66)\nelasticsearch.log- at sun.reflect.GeneratedConstructorAccessor28.newInstance(Unknown Source)\nelasticsearch.log- at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\nelasticsearch.log- at java.lang.reflect.Constructor.newInstance(Constructor.java:408)\nelasticsearch.log- at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:50)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\nelasticsearch.log- at 
org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:887)\nelasticsearch.log- at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\nelasticsearch.log- at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\nelasticsearch.log- at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\nelasticsearch.log- at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)\nelasticsearch.log- at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\nelasticsearch.log- at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:887)\nelasticsearch.log- at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\nelasticsearch.log- at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\nelasticsearch.log- at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\nelasticsearch.log- at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)\nelasticsearch.log- at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\nelasticsearch.log- at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:887)\nelasticsearch.log- at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\nelasticsearch.log- at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\nelasticsearch.log- at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:201)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:880)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl.createChildInjector(InjectorImpl.java:159)\nelasticsearch.log- at 
org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:55)\nelasticsearch.log- at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:358)\nelasticsearch.log- ... 9 more\nelasticsearch.log-[2016-03-21 16:25:38,683][DEBUG][action.admin.indices.create] [Straw Man] [my_index] failed to create\nelasticsearch.log:[my_index] IndexCreationException[failed to create index]; nested: IllegalArgumentException[IOException while reading word_list_path: Input length = 1];\nelasticsearch.log- at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:360)\nelasticsearch.log- at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$1.execute(MetaDataCreateIndexService.java:309)\nelasticsearch.log- at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:45)\nelasticsearch.log- at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:458)\nelasticsearch.log- at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:762)\nelasticsearch.log- at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\nelasticsearch.log- at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\nelasticsearch.log- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\nelasticsearch.log- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\nelasticsearch.log- at java.lang.Thread.run(Thread.java:745)\nelasticsearch.log:Caused by: java.lang.IllegalArgumentException: IOException while reading word_list_path: Input length = 1\nelasticsearch.log- at org.elasticsearch.index.analysis.Analysis.getWordList(Analysis.java:241)\nelasticsearch.log- at org.elasticsearch.index.analysis.Analysis.getWordSet(Analysis.java:209)\nelasticsearch.log- at org.elasticsearch.index.analysis.compound.AbstractCompoundWordTokenFilterFactory.<init>(AbstractCompoundWordTokenFilterFactory.java:49)\nelasticsearch.log- at org.elasticsearch.index.analysis.compound.HyphenationCompoundWordTokenFilterFactory.<init>(HyphenationCompoundWordTokenFilterFactory.java:52)\nelasticsearch.log- at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown Source)\nelasticsearch.log- at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\nelasticsearch.log- at java.lang.reflect.Constructor.newInstance(Constructor.java:408)\nelasticsearch.log- at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:50)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\nelasticsearch.log- at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:54)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl$5$1.call(InjectorImpl.java:828)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:887)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl$5.get(InjectorImpl.java:823)\nelasticsearch.log- at 
org.elasticsearch.common.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:236)\nelasticsearch.log- at com.sun.proxy.$Proxy19.create(Unknown Source)\nelasticsearch.log- at org.elasticsearch.index.analysis.AnalysisService.<init>(AnalysisService.java:161)\nelasticsearch.log- at org.elasticsearch.index.analysis.AnalysisService.<init>(AnalysisService.java:66)\nelasticsearch.log- at sun.reflect.GeneratedConstructorAccessor28.newInstance(Unknown Source)\nelasticsearch.log- at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\nelasticsearch.log- at java.lang.reflect.Constructor.newInstance(Constructor.java:408)\nelasticsearch.log- at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:50)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\nelasticsearch.log- at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:887)\nelasticsearch.log- at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\nelasticsearch.log- at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\nelasticsearch.log- at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\nelasticsearch.log- at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)\nelasticsearch.log- at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\nelasticsearch.log- at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:887)\nelasticsearch.log- at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\nelasticsearch.log- at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\nelasticsearch.log- at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\nelasticsearch.log- at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)\nelasticsearch.log- at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)\nelasticsearch.log- at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\nelasticsearch.log- at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:887)\nelasticsearch.log- at 
org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\nelasticsearch.log- at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\nelasticsearch.log- at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:201)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:880)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)\nelasticsearch.log- at org.elasticsearch.common.inject.InjectorImpl.createChildInjector(InjectorImpl.java:159)\nelasticsearch.log- at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:55)\nelasticsearch.log- at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:358)\nelasticsearch.log- ... 9 more\n```\n\nI've run lsof on the germany.txt file checking for other process accessing it but can't see anything\n\n```\nAntonios-MacBook-Air:elasticsearch-2.2.1 abonuccelli$ while true; do sudo lsof /opt/elk/PROD/elasticsearch-2.2.1/config/de.xml ; sudo lsof /opt/elk/PROD/elasticsearch-2.2.1/config/germany.txt;date;done\nMon Mar 21 16:25:22 CST 2016\nMon Mar 21 16:25:23 CST 2016\nMon Mar 21 16:25:25 CST 2016\nMon Mar 21 16:25:26 CST 2016\nMon Mar 21 16:25:27 CST 2016\nMon Mar 21 16:25:29 CST 2016\nMon Mar 21 16:25:31 CST 2016\nMon Mar 21 16:25:33 CST 2016\nMon Mar 21 16:25:34 CST 2016\nMon Mar 21 16:25:36 CST 2016\nMon Mar 21 16:25:37 CST 2016\nMon Mar 21 16:25:39 CST 2016\nMon Mar 21 16:25:40 CST 2016\nMon Mar 21 16:25:42 CST 2016\nMon Mar 21 16:25:43 CST 2016\nMon Mar 21 16:25:45 CST 2016\n```\n", "comments": [ { "body": "wrong encoding.\n", "created_at": "2016-03-21T12:40:26Z" }, { "body": "What @rmuir meant is that the file is not UTF-8 encoded. All files that ES accepts must be UTF-8\n", "created_at": "2016-03-21T15:04:12Z" }, { "body": "By the way, i have the feeling an exception may be discarded here.\n\nI feel like it should not be `java.lang.IllegalArgumentException: IOException while reading word_list_path: Input length = 1`, that is not a good error. I just happen to know what it means.\n\nPer the charset api (https://docs.oracle.com/javase/7/docs/api/java/nio/charset/CodingErrorAction.html#REPORT), the original exception should have been a subclass of CharacterCodingException, such as MalformedInputException. This would make these errors easier to understand if we can improve that.\n", "created_at": "2016-03-21T15:21:41Z" }, { "body": "@rmuir it's fine in master but in 2.x we shadow this exception. 5.x already puts the `MalformedInputException` as the cause - I will put up a PR\n", "created_at": "2016-03-22T09:58:24Z" } ], "number": 17212, "title": "dictionary_decompounder loading fails : \"IOException while reading word_list_path: Input length = 1\"" }
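
One clarification on the confusing "Input length = 1" part of the error: that is the message of java.nio.charset.MalformedInputException, which a strict (CodingErrorAction.REPORT) UTF-8 decoder raises on the first byte sequence that is not valid UTF-8 — here the non-UTF-8 characters in germany.txt that already render as `?` in the tail output above. The snippet below is a minimal, self-contained reproduction of that JDK decoder behaviour; it is not Elasticsearch code, just the same kind of strict UTF-8 read the word-list loader performs.

```java
import java.io.BufferedReader;
import java.nio.charset.MalformedInputException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class Utf8StrictReadSketch {
    public static void main(String[] args) throws Exception {
        // Write a file starting with a bare 0xFF byte, which can never begin a valid UTF-8 sequence.
        Path dict = Files.createTempFile("germany", ".txt");
        Files.write(dict, new byte[] { (byte) 0xFF, 'a', '\n' });

        // Files.newBufferedReader uses a strict UTF-8 decoder (CodingErrorAction.REPORT),
        // so the first malformed byte aborts the read instead of being silently replaced.
        try (BufferedReader reader = Files.newBufferedReader(dict, StandardCharsets.UTF_8)) {
            reader.readLine();
        } catch (MalformedInputException e) {
            System.out.println(e.getMessage()); // prints: Input length = 1
        }
    }
}
```

The follow-up PR below keeps the strict decoding but catches CharacterCodingException separately, so the error message states outright that the file must be UTF-8 encoded instead of surfacing the bare decoder message.
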
{ "body": "This commit fixes string formatting issues in the error handling and\nprovides a better error message if malformed input is detected.\nThis commit also adds tests for both situations.\n\nRelates to #17212\n", "number": 17237, "review_comments": [], "title": "Improve error message if resource files have illegal encoding" }
{ "commits": [ { "message": "Improve error message if resource files have illegal encoding\n\nThis commit fixes string formatting issues in the error handling and\nprovides a bettter error message if malformed input is detected.\nThis commit also adds tests for both situations.\n\nRelates to #17212" } ], "files": [ { "diff": "@@ -441,7 +441,6 @@\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]MergeSchedulerConfig.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]NodeServicesProvider.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]SearchSlowLog.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]analysis[/\\\\]Analysis.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]analysis[/\\\\]AnalysisRegistry.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]analysis[/\\\\]AnalysisService.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]analysis[/\\\\]CommonGramsTokenFilterFactory.java\" checks=\"LineLength\" />", "filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml", "status": "modified" }, { "diff": "@@ -67,8 +67,10 @@\n import java.io.BufferedReader;\n import java.io.IOException;\n import java.io.Reader;\n+import java.nio.charset.CharacterCodingException;\n import java.nio.charset.StandardCharsets;\n import java.nio.file.Path;\n+import java.nio.file.Paths;\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collection;\n@@ -163,7 +165,8 @@ public static CharArraySet parseStemExclusion(Settings settings, CharArraySet de\n NAMED_STOP_WORDS = unmodifiableMap(namedStopWords);\n }\n \n- public static CharArraySet parseWords(Environment env, Settings settings, String name, CharArraySet defaultWords, Map<String, Set<?>> namedWords, boolean ignoreCase) {\n+ public static CharArraySet parseWords(Environment env, Settings settings, String name, CharArraySet defaultWords,\n+ Map<String, Set<?>> namedWords, boolean ignoreCase) {\n String value = settings.get(name);\n if (value != null) {\n if (\"_none_\".equals(value)) {\n@@ -237,12 +240,17 @@ public static List<String> getWordList(Environment env, Settings settings, Strin\n }\n }\n \n- final Path wordListFile = env.configFile().resolve(wordListPath);\n+ final Path path = env.configFile().resolve(wordListPath);\n \n- try (BufferedReader reader = FileSystemUtils.newBufferedReader(wordListFile.toUri().toURL(), StandardCharsets.UTF_8)) {\n+ try (BufferedReader reader = FileSystemUtils.newBufferedReader(path.toUri().toURL(), StandardCharsets.UTF_8)) {\n return loadWordList(reader, \"#\");\n+ } catch (CharacterCodingException ex) {\n+ String message = String.format(Locale.ROOT,\n+ \"Unsupported character encoding detected while reading %s_path: %s - files must be UTF-8 encoded\",\n+ settingPrefix, path.toString());\n+ throw new IllegalArgumentException(message, ex);\n } catch (IOException ioe) {\n- String message = String.format(Locale.ROOT, \"IOException while reading %s_path: %s\", settingPrefix);\n+ String message = String.format(Locale.ROOT, 
\"IOException while reading %s_path: %s\", settingPrefix, path.toString());\n throw new IllegalArgumentException(message, ioe);\n }\n }\n@@ -256,7 +264,7 @@ public static List<String> loadWordList(Reader reader, String comment) throws IO\n } else {\n br = new BufferedReader(reader);\n }\n- String word = null;\n+ String word;\n while ((word = br.readLine()) != null) {\n if (!Strings.hasText(word)) {\n continue;\n@@ -283,13 +291,16 @@ public static Reader getReaderFromFile(Environment env, Settings settings, Strin\n if (filePath == null) {\n return null;\n }\n-\n final Path path = env.configFile().resolve(filePath);\n-\n try {\n return FileSystemUtils.newBufferedReader(path.toUri().toURL(), StandardCharsets.UTF_8);\n+ } catch (CharacterCodingException ex) {\n+ String message = String.format(Locale.ROOT,\n+ \"Unsupported character encoding detected while reading %s_path: %s files must be UTF-8 encoded\",\n+ settingPrefix, path.toString());\n+ throw new IllegalArgumentException(message, ex);\n } catch (IOException ioe) {\n- String message = String.format(Locale.ROOT, \"IOException while reading %s_path: %s\", settingPrefix);\n+ String message = String.format(Locale.ROOT, \"IOException while reading %s_path: %s\", settingPrefix, path.toString());\n throw new IllegalArgumentException(message, ioe);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/index/analysis/Analysis.java", "status": "modified" }, { "diff": "@@ -21,8 +21,23 @@\n \n import org.apache.lucene.analysis.util.CharArraySet;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.env.Environment;\n import org.elasticsearch.test.ESTestCase;\n \n+import java.io.BufferedWriter;\n+import java.io.FileNotFoundException;\n+import java.io.IOException;\n+import java.io.OutputStream;\n+import java.nio.charset.CharacterCodingException;\n+import java.nio.charset.Charset;\n+import java.nio.charset.MalformedInputException;\n+import java.nio.charset.StandardCharsets;\n+import java.nio.file.Files;\n+import java.nio.file.NoSuchFileException;\n+import java.nio.file.Path;\n+import java.util.Arrays;\n+import java.util.List;\n+\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.hamcrest.Matchers.is;\n \n@@ -42,4 +57,55 @@ public void testParseStemExclusion() {\n assertThat(set.contains(\"bar\"), is(true));\n assertThat(set.contains(\"baz\"), is(false));\n }\n+\n+ public void testParseNonExistingFile() {\n+ Path tempDir = createTempDir();\n+ Settings nodeSettings = Settings.builder()\n+ .put(\"foo.bar_path\", tempDir.resolve(\"foo.dict\"))\n+ .put(Environment.PATH_HOME_SETTING.getKey(), tempDir).build();\n+ Environment env = new Environment(nodeSettings);\n+ IllegalArgumentException ex = expectThrows(IllegalArgumentException.class,\n+ () -> Analysis.getWordList(env, nodeSettings, \"foo.bar\"));\n+ assertEquals(\"IOException while reading foo.bar_path: \" + tempDir.resolve(\"foo.dict\").toString(), ex.getMessage());\n+ assertTrue(ex.getCause().toString(), ex.getCause() instanceof FileNotFoundException\n+ || ex.getCause() instanceof NoSuchFileException);\n+ }\n+\n+\n+ public void testParseFalseEncodedFile() throws IOException {\n+ Path tempDir = createTempDir();\n+ Path dict = tempDir.resolve(\"foo.dict\");\n+ Settings nodeSettings = Settings.builder()\n+ .put(\"foo.bar_path\", dict)\n+ .put(Environment.PATH_HOME_SETTING.getKey(), tempDir).build();\n+ try (OutputStream writer = Files.newOutputStream(dict)) {\n+ writer.write(new byte[]{(byte) 0xff, 0x00, 0x00}); // some invalid 
UTF-8\n+ writer.write('\\n');\n+ }\n+ Environment env = new Environment(nodeSettings);\n+ IllegalArgumentException ex = expectThrows(IllegalArgumentException.class,\n+ () -> Analysis.getWordList(env, nodeSettings, \"foo.bar\"));\n+ assertEquals(\"Unsupported character encoding detected while reading foo.bar_path: \" + tempDir.resolve(\"foo.dict\").toString()\n+ + \" - files must be UTF-8 encoded\" , ex.getMessage());\n+ assertTrue(ex.getCause().toString(), ex.getCause() instanceof MalformedInputException\n+ || ex.getCause() instanceof CharacterCodingException);\n+ }\n+\n+ public void testParseWordList() throws IOException {\n+ Path tempDir = createTempDir();\n+ Path dict = tempDir.resolve(\"foo.dict\");\n+ Settings nodeSettings = Settings.builder()\n+ .put(\"foo.bar_path\", dict)\n+ .put(Environment.PATH_HOME_SETTING.getKey(), tempDir).build();\n+ try (BufferedWriter writer = Files.newBufferedWriter(dict, StandardCharsets.UTF_8)) {\n+ writer.write(\"hello\");\n+ writer.write('\\n');\n+ writer.write(\"world\");\n+ writer.write('\\n');\n+ }\n+ Environment env = new Environment(nodeSettings);\n+ List<String> wordList = Analysis.getWordList(env, nodeSettings, \"foo.bar\");\n+ assertEquals(Arrays.asList(\"hello\", \"world\"), wordList);\n+\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/analysis/AnalysisTests.java", "status": "modified" } ] }
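The change to `Analysis.java` above fixes two distinct problems. First, the old error path called `String.format(Locale.ROOT, "IOException while reading %s_path: %s", settingPrefix)` with two `%s` placeholders but only one argument, so building the error message would itself fail with a `MissingFormatArgumentException` and mask the real I/O failure. Second, a word-list file that is not valid UTF-8 now gets a dedicated `CharacterCodingException` branch with an actionable message instead of falling into the generic `IOException` case. Below is a minimal sketch of the same pattern; it opens the file with the standard `java.nio.file.Files` reader rather than Elasticsearch's `FileSystemUtils.newBufferedReader` helper, so treat that substitution as an assumption for illustration only.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

class WordListExample {

    /**
     * Loads a newline-separated word list, skipping blank lines and '#' comments.
     * Files.newBufferedReader uses a strict UTF-8 decoder, so malformed bytes surface
     * as a CharacterCodingException (MalformedInputException) while reading.
     */
    static List<String> loadWordList(Path path, String settingPrefix) {
        try (BufferedReader reader = Files.newBufferedReader(path, StandardCharsets.UTF_8)) {
            List<String> words = new ArrayList<>();
            String word;
            while ((word = reader.readLine()) != null) {
                word = word.trim();
                if (word.isEmpty() || word.startsWith("#")) {
                    continue;
                }
                words.add(word);
            }
            return words;
        } catch (CharacterCodingException ex) {
            // encoding-specific message, mirroring the PR
            String message = String.format(Locale.ROOT,
                    "Unsupported character encoding detected while reading %s_path: %s - files must be UTF-8 encoded",
                    settingPrefix, path);
            throw new IllegalArgumentException(message, ex);
        } catch (IOException ioe) {
            // both %s placeholders now receive an argument; the pre-fix code passed only settingPrefix
            String message = String.format(Locale.ROOT,
                    "IOException while reading %s_path: %s", settingPrefix, path);
            throw new IllegalArgumentException(message, ioe);
        }
    }
}
```

Catch order matters here: `CharacterCodingException` (and its `MalformedInputException` subclass, which the new test asserts as the cause) extends `IOException`, so the encoding-specific catch has to come before the generic one, exactly as in the diff.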
{ "body": "**Elasticsearch version**: `5.0.0` (3ed4ff0)\n\n**JVM version**: OpenJDK 64-Bit Server VM (build 25.74-b02, mixed mode)\n\n**OS version**: Linux 4.1.20-1-lts\n\n**Description of the problem including expected versus actual behavior**: When sorting on a single field and passing it in a list you get: `Failed to derive xcontent` error\n\n**Steps to reproduce**:\n1. `curl -X PUT localhost:9200/i/t/42 -d '{\"number\": 123}`'\n2. `curl localhost:9200/_search -d '{\"sort\": [\"number\"]}'`\n", "comments": [ { "body": "This is due to a bug in the work-around for sorting which was put in so we could parse the search request on the coordinating node but defer things that had not yet been refactored (e.g. sort) until we got to the shard. The parsing code stores each element of the sort in a list in the request on the coordinating node and then tries to read each element as JSON on the shard. Which doesn't work if the element is a string instead of JSON. Note that the workaround is only present in 5.0 so this is the only version affected by this bug\n\nThe sort refactoring is almost complete (see #17205) so I this can be closed when that PR is merged. @cbuescher is adding a test to that PR to test this problem and ensure it works in the refactored sort code.\n", "created_at": "2016-03-23T14:31:20Z" }, { "body": "Added test for parsing this kind of syntax (e319985) and just tried this manually on #17205, this should be fixed once that PR is in.\n", "created_at": "2016-03-23T15:09:00Z" }, { "body": "Thanks @cbuescher!\n\nIn the mean time, if anybody finds this, I added a work around to my tests - just do `{\"sort\": \"number\"}` instead.\n", "created_at": "2016-03-23T15:12:56Z" } ], "number": 17257, "title": "Sort on single field as a list produces parse error" }
{ "body": "This is based on #17146 that needs to get in first. It moves the remaining parsing logic for the complete sort-element over to SortBuilder#fromXContent and moves further logic that used to be in SortParseElement concerned with building SortFields and populating the search context with it also to SortBuilder. \nIn a last step for the refactoring the current workaround of using opaque BytesReference lists in SearchSourceBuilder and similar places it removed and the refactored SortBuilders used instead.\n\nCloses #17257\n", "number": 17205, "review_comments": [], "title": "Switch to using refactored SortBuilder instead of using BytesReference in serialization" }
{ "commits": [ { "message": "Switch to using refactored SortBuilder in SearchSourceBuilder and elsewhere\n\nSwitching from using list of BytesReference to real SortBuilder list in\nSearchSourceBuilder, TopHitsAggregatorBuilder and TopHitsAggregatorFactory.\nRemoving SortParseElement and related sort parsers." }, { "message": "SortBuilder#toXContent should render full object" }, { "message": "Merge branch 'master' into sort-use-sortbuilders\n\n Conflicts:\n\tcore/src/main/java/org/elasticsearch/common/io/stream/StreamInput.java" }, { "message": "Adding test for parsing sort on single fields as list" } ], "files": [ { "diff": "@@ -21,7 +21,6 @@\n import org.elasticsearch.action.ActionRequestBuilder;\n import org.elasticsearch.action.get.GetRequest;\n import org.elasticsearch.action.support.IndicesOptions;\n-import org.elasticsearch.action.support.broadcast.BroadcastOperationRequestBuilder;\n import org.elasticsearch.client.ElasticsearchClient;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.bytes.BytesReference;\n@@ -127,7 +126,7 @@ public PercolateRequestBuilder setSortByScore(boolean sort) {\n /**\n * Delegates to {@link PercolateSourceBuilder#addSort(SortBuilder)}\n */\n- public PercolateRequestBuilder addSort(SortBuilder sort) {\n+ public PercolateRequestBuilder addSort(SortBuilder<?> sort) {\n sourceBuilder().addSort(sort);\n return this;\n }", "filename": "core/src/main/java/org/elasticsearch/action/percolate/PercolateRequestBuilder.java", "status": "modified" }, { "diff": "@@ -48,13 +48,13 @@\n public class PercolateSourceBuilder extends ToXContentToBytes {\n \n private DocBuilder docBuilder;\n- private QueryBuilder queryBuilder;\n+ private QueryBuilder<?> queryBuilder;\n private Integer size;\n- private List<SortBuilder> sorts;\n+ private List<SortBuilder<?>> sorts;\n private Boolean trackScores;\n private HighlightBuilder highlightBuilder;\n private List<AggregatorBuilder<?>> aggregationBuilders;\n- private List<PipelineAggregatorBuilder> pipelineAggregationBuilders;\n+ private List<PipelineAggregatorBuilder<?>> pipelineAggregationBuilders;\n \n /**\n * Sets the document to run the percolate queries against.\n@@ -68,7 +68,7 @@ public PercolateSourceBuilder setDoc(DocBuilder docBuilder) {\n * Sets a query to reduce the number of percolate queries to be evaluated and score the queries that match based\n * on this query.\n */\n- public PercolateSourceBuilder setQueryBuilder(QueryBuilder queryBuilder) {\n+ public PercolateSourceBuilder setQueryBuilder(QueryBuilder<?> queryBuilder) {\n this.queryBuilder = queryBuilder;\n return this;\n }\n@@ -98,7 +98,7 @@ public PercolateSourceBuilder setSort(boolean sort) {\n *\n * By default the matching percolator queries are returned in an undefined order.\n */\n- public PercolateSourceBuilder addSort(SortBuilder sort) {\n+ public PercolateSourceBuilder addSort(SortBuilder<?> sort) {\n if (sorts == null) {\n sorts = new ArrayList<>();\n }\n@@ -137,7 +137,7 @@ public PercolateSourceBuilder addAggregation(AggregatorBuilder<?> aggregationBui\n /**\n * Add an aggregation definition.\n */\n- public PercolateSourceBuilder addAggregation(PipelineAggregatorBuilder aggregationBuilder) {\n+ public PercolateSourceBuilder addAggregation(PipelineAggregatorBuilder<?> aggregationBuilder) {\n if (pipelineAggregationBuilders == null) {\n pipelineAggregationBuilders = new ArrayList<>();\n }\n@@ -160,10 +160,8 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n }\n if (sorts != null) {\n 
builder.startArray(\"sort\");\n- for (SortBuilder sort : sorts) {\n- builder.startObject();\n+ for (SortBuilder<?> sort : sorts) {\n sort.toXContent(builder, params);\n- builder.endObject();\n }\n builder.endArray();\n }\n@@ -182,7 +180,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n }\n }\n if (pipelineAggregationBuilders != null) {\n- for (PipelineAggregatorBuilder aggregation : pipelineAggregationBuilders) {\n+ for (PipelineAggregatorBuilder<?> aggregation : pipelineAggregationBuilders) {\n aggregation.toXContent(builder, params);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/percolate/PercolateSourceBuilder.java", "status": "modified" }, { "diff": "@@ -37,12 +37,13 @@\n import org.elasticsearch.common.text.Text;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.index.query.functionscore.ScoreFunctionBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorBuilder;\n+import org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilder;\n import org.elasticsearch.search.rescore.RescoreBuilder;\n+import org.elasticsearch.search.sort.SortBuilder;\n import org.elasticsearch.search.suggest.SuggestionBuilder;\n import org.elasticsearch.search.suggest.phrase.SmoothingModel;\n import org.elasticsearch.tasks.Task;\n-import org.elasticsearch.search.aggregations.AggregatorBuilder;\n-import org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilder;\n import org.joda.time.DateTime;\n import org.joda.time.DateTimeZone;\n \n@@ -687,21 +688,21 @@ <C> C readNamedWriteable(@SuppressWarnings(\"unused\") Class<C> categoryClass) thr\n /**\n * Reads a {@link AggregatorBuilder} from the current stream\n */\n- public AggregatorBuilder readAggregatorFactory() throws IOException {\n+ public AggregatorBuilder<?> readAggregatorFactory() throws IOException {\n return readNamedWriteable(AggregatorBuilder.class);\n }\n \n /**\n * Reads a {@link PipelineAggregatorBuilder} from the current stream\n */\n- public PipelineAggregatorBuilder readPipelineAggregatorFactory() throws IOException {\n+ public PipelineAggregatorBuilder<?> readPipelineAggregatorFactory() throws IOException {\n return readNamedWriteable(PipelineAggregatorBuilder.class);\n }\n \n /**\n * Reads a {@link QueryBuilder} from the current stream\n */\n- public QueryBuilder readQuery() throws IOException {\n+ public QueryBuilder<?> readQuery() throws IOException {\n return readNamedWriteable(QueryBuilder.class);\n }\n \n@@ -726,6 +727,13 @@ public SuggestionBuilder<?> readSuggestion() throws IOException {\n return readNamedWriteable(SuggestionBuilder.class);\n }\n \n+ /**\n+ * Reads a {@link SortBuilder} from the current stream\n+ */\n+ public SortBuilder<?> readSortBuilder() throws IOException {\n+ return readNamedWriteable(SortBuilder.class);\n+ }\n+\n /**\n * Reads a {@link org.elasticsearch.index.query.functionscore.ScoreFunctionBuilder} from the current stream\n */", "filename": "core/src/main/java/org/elasticsearch/common/io/stream/StreamInput.java", "status": "modified" }, { "diff": "@@ -36,13 +36,13 @@\n import org.elasticsearch.common.text.Text;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.index.query.functionscore.ScoreFunctionBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorBuilder;\n+import org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilder;\n import org.elasticsearch.search.rescore.RescoreBuilder;\n+import 
org.elasticsearch.search.sort.SortBuilder;\n import org.elasticsearch.search.suggest.SuggestionBuilder;\n-import org.elasticsearch.search.suggest.completion.context.QueryContext;\n import org.elasticsearch.search.suggest.phrase.SmoothingModel;\n import org.elasticsearch.tasks.Task;\n-import org.elasticsearch.search.aggregations.AggregatorBuilder;\n-import org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilder;\n import org.joda.time.ReadableInstant;\n \n import java.io.EOFException;\n@@ -532,7 +532,7 @@ public void writeOptionalStreamable(@Nullable Streamable streamable) throws IOEx\n }\n }\n \n- public void writeOptionalWriteable(@Nullable Writeable writeable) throws IOException {\n+ public void writeOptionalWriteable(@Nullable Writeable<?> writeable) throws IOException {\n if (writeable != null) {\n writeBoolean(true);\n writeable.writeTo(this);\n@@ -663,7 +663,7 @@ public void writeThrowable(Throwable throwable) throws IOException {\n /**\n * Writes a {@link NamedWriteable} to the current stream, by first writing its name and then the object itself\n */\n- void writeNamedWriteable(NamedWriteable namedWriteable) throws IOException {\n+ void writeNamedWriteable(NamedWriteable<?> namedWriteable) throws IOException {\n writeString(namedWriteable.getWriteableName());\n namedWriteable.writeTo(this);\n }\n@@ -685,7 +685,7 @@ public void writePipelineAggregatorBuilder(PipelineAggregatorBuilder<?> builder)\n /**\n * Writes a {@link QueryBuilder} to the current stream\n */\n- public void writeQuery(QueryBuilder queryBuilder) throws IOException {\n+ public void writeQuery(QueryBuilder<?> queryBuilder) throws IOException {\n writeNamedWriteable(queryBuilder);\n }\n \n@@ -745,8 +745,15 @@ public void writeRescorer(RescoreBuilder<?> rescorer) throws IOException {\n /**\n * Writes a {@link SuggestionBuilder} to the current stream\n */\n- public void writeSuggestion(SuggestionBuilder suggestion) throws IOException {\n+ public void writeSuggestion(SuggestionBuilder<?> suggestion) throws IOException {\n writeNamedWriteable(suggestion);\n }\n \n+ /**\n+ * Writes a {@link SortBuilder} to the current stream\n+ */\n+ public void writeSortBuilder(SortBuilder<?> sort) throws IOException {\n+ writeNamedWriteable(sort);\n+ }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/common/io/stream/StreamOutput.java", "status": "modified" }, { "diff": "@@ -27,15 +27,14 @@\n import org.elasticsearch.search.highlight.HighlighterParseElement;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.internal.SubSearchContext;\n-import org.elasticsearch.search.sort.SortParseElement;\n+import org.elasticsearch.search.sort.SortBuilder;\n \n import java.io.IOException;\n \n public class InnerHitsQueryParserHelper {\n \n public static final InnerHitsQueryParserHelper INSTANCE = new InnerHitsQueryParserHelper();\n \n- private static final SortParseElement sortParseElement = new SortParseElement();\n private static final FetchSourceParseElement sourceParseElement = new FetchSourceParseElement();\n private static final HighlighterParseElement highlighterParseElement = new HighlighterParseElement();\n private static final ScriptFieldsParseElement scriptFieldsParseElement = new ScriptFieldsParseElement();\n@@ -54,10 +53,10 @@ public static InnerHitsSubSearchContext parse(XContentParser parser) throws IOEx\n if (\"name\".equals(fieldName)) {\n innerHitName = parser.textOrNull();\n } else {\n- parseCommonInnerHitOptions(parser, token, fieldName, subSearchContext, 
sortParseElement, sourceParseElement, highlighterParseElement, scriptFieldsParseElement, fieldDataFieldsParseElement);\n+ parseCommonInnerHitOptions(parser, token, fieldName, subSearchContext, sourceParseElement, highlighterParseElement, scriptFieldsParseElement, fieldDataFieldsParseElement);\n }\n } else {\n- parseCommonInnerHitOptions(parser, token, fieldName, subSearchContext, sortParseElement, sourceParseElement, highlighterParseElement, scriptFieldsParseElement, fieldDataFieldsParseElement);\n+ parseCommonInnerHitOptions(parser, token, fieldName, subSearchContext, sourceParseElement, highlighterParseElement, scriptFieldsParseElement, fieldDataFieldsParseElement);\n }\n }\n } catch (Exception e) {\n@@ -67,10 +66,10 @@ public static InnerHitsSubSearchContext parse(XContentParser parser) throws IOEx\n }\n \n public static void parseCommonInnerHitOptions(XContentParser parser, XContentParser.Token token, String fieldName, SubSearchContext subSearchContext,\n- SortParseElement sortParseElement, FetchSourceParseElement sourceParseElement, HighlighterParseElement highlighterParseElement,\n+ FetchSourceParseElement sourceParseElement, HighlighterParseElement highlighterParseElement,\n ScriptFieldsParseElement scriptFieldsParseElement, FieldDataFieldsParseElement fieldDataFieldsParseElement) throws Exception {\n if (\"sort\".equals(fieldName)) {\n- sortParseElement.parse(parser, subSearchContext);\n+ SortBuilder.parseSort(parser, subSearchContext);\n } else if (\"_source\".equals(fieldName)) {\n sourceParseElement.parse(parser, subSearchContext);\n } else if (token == XContentParser.Token.START_OBJECT) {", "filename": "core/src/main/java/org/elasticsearch/index/query/support/InnerHitsQueryParserHelper.java", "status": "modified" }, { "diff": "@@ -226,6 +226,11 @@\n import org.elasticsearch.search.query.QueryPhase;\n import org.elasticsearch.search.rescore.QueryRescorerBuilder;\n import org.elasticsearch.search.rescore.RescoreBuilder;\n+import org.elasticsearch.search.sort.FieldSortBuilder;\n+import org.elasticsearch.search.sort.GeoDistanceSortBuilder;\n+import org.elasticsearch.search.sort.ScoreSortBuilder;\n+import org.elasticsearch.search.sort.ScriptSortBuilder;\n+import org.elasticsearch.search.sort.SortBuilder;\n import org.elasticsearch.search.suggest.Suggester;\n import org.elasticsearch.search.suggest.Suggesters;\n import org.elasticsearch.search.suggest.SuggestionBuilder;\n@@ -346,6 +351,7 @@ protected void configure() {\n configureFetchSubPhase();\n configureShapes();\n configureRescorers();\n+ configureSorts();\n }\n \n protected void configureFetchSubPhase() {\n@@ -489,6 +495,13 @@ private void configureRescorers() {\n namedWriteableRegistry.registerPrototype(RescoreBuilder.class, QueryRescorerBuilder.PROTOTYPE);\n }\n \n+ private void configureSorts() {\n+ namedWriteableRegistry.registerPrototype(SortBuilder.class, GeoDistanceSortBuilder.PROTOTYPE);\n+ namedWriteableRegistry.registerPrototype(SortBuilder.class, ScoreSortBuilder.PROTOTYPE);\n+ namedWriteableRegistry.registerPrototype(SortBuilder.class, ScriptSortBuilder.PROTOTYPE);\n+ namedWriteableRegistry.registerPrototype(SortBuilder.class, FieldSortBuilder.PROTOTYPE);\n+ }\n+\n private void registerBuiltinFunctionScoreParsers() {\n registerFunctionScoreParser(new ScriptScoreFunctionParser());\n registerFunctionScoreParser(new GaussDecayFunctionParser());", "filename": "core/src/main/java/org/elasticsearch/search/SearchModule.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import 
com.carrotsearch.hppc.ObjectFloatHashMap;\n import org.apache.lucene.search.FieldDoc;\n+import org.apache.lucene.search.Sort;\n import org.apache.lucene.search.TopDocs;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n@@ -41,7 +42,6 @@\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n import org.elasticsearch.common.util.concurrent.ConcurrentMapLong;\n import org.elasticsearch.common.util.concurrent.FutureUtils;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentLocation;\n import org.elasticsearch.common.xcontent.XContentParser;\n@@ -92,13 +92,15 @@\n import org.elasticsearch.search.query.ScrollQuerySearchResult;\n import org.elasticsearch.search.rescore.RescoreBuilder;\n import org.elasticsearch.search.searchafter.SearchAfterBuilder;\n+import org.elasticsearch.search.sort.SortBuilder;\n import org.elasticsearch.search.suggest.Suggesters;\n import org.elasticsearch.threadpool.ThreadPool;\n \n import java.io.IOException;\n import java.util.Collections;\n import java.util.HashMap;\n import java.util.Map;\n+import java.util.Optional;\n import java.util.concurrent.ExecutionException;\n import java.util.concurrent.ScheduledFuture;\n import java.util.concurrent.atomic.AtomicLong;\n@@ -683,33 +685,13 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc\n context.parsedPostFilter(queryShardContext.toQuery(source.postFilter()));\n }\n if (source.sorts() != null) {\n- XContentParser completeSortParser = null;\n try {\n- XContentBuilder completeSortBuilder = XContentFactory.jsonBuilder();\n- completeSortBuilder.startObject();\n- completeSortBuilder.startArray(\"sort\");\n- for (BytesReference sort : source.sorts()) {\n- XContentParser parser = XContentFactory.xContent(sort).createParser(sort);\n- parser.nextToken();\n- completeSortBuilder.copyCurrentStructure(parser);\n+ Optional<Sort> optionalSort = SortBuilder.buildSort(source.sorts(), context.getQueryShardContext());\n+ if (optionalSort.isPresent()) {\n+ context.sort(optionalSort.get());\n }\n- completeSortBuilder.endArray();\n- completeSortBuilder.endObject();\n- BytesReference completeSortBytes = completeSortBuilder.bytes();\n- completeSortParser = XContentFactory.xContent(completeSortBytes).createParser(completeSortBytes);\n- completeSortParser.nextToken();\n- completeSortParser.nextToken();\n- completeSortParser.nextToken();\n- this.elementParsers.get(\"sort\").parse(completeSortParser, context);\n- } catch (Exception e) {\n- String sSource = \"_na_\";\n- try {\n- sSource = source.toString();\n- } catch (Throwable e1) {\n- // ignore\n- }\n- XContentLocation location = completeSortParser != null ? 
completeSortParser.getTokenLocation() : null;\n- throw new SearchParseException(context, \"failed to parse sort source [\" + sSource + \"]\", location, e);\n+ } catch (IOException e) {\n+ throw new SearchContextException(context, \"failed to create sort elements\", e);\n }\n }\n context.trackScores(source.trackScores());", "filename": "core/src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -21,13 +21,9 @@\n \n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n-import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.xcontent.XContentFactory;\n-import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.search.aggregations.AggregationInitializationException;\n import org.elasticsearch.search.aggregations.AggregatorBuilder;\n@@ -38,6 +34,7 @@\n import org.elasticsearch.search.builder.SearchSourceBuilder.ScriptField;\n import org.elasticsearch.search.fetch.source.FetchSourceContext;\n import org.elasticsearch.search.highlight.HighlightBuilder;\n+import org.elasticsearch.search.sort.ScoreSortBuilder;\n import org.elasticsearch.search.sort.SortBuilder;\n import org.elasticsearch.search.sort.SortBuilders;\n import org.elasticsearch.search.sort.SortOrder;\n@@ -57,7 +54,7 @@ public class TopHitsAggregatorBuilder extends AggregatorBuilder<TopHitsAggregato\n private boolean explain = false;\n private boolean version = false;\n private boolean trackScores = false;\n- private List<BytesReference> sorts = null;\n+ private List<SortBuilder<?>> sorts = null;\n private HighlightBuilder highlightBuilder;\n private List<String> fieldNames;\n private List<String> fieldDataFields;\n@@ -119,6 +116,9 @@ public TopHitsAggregatorBuilder sort(String name, SortOrder order) {\n if (order == null) {\n throw new IllegalArgumentException(\"sort [order] must not be null: [\" + name + \"]\");\n }\n+ if (name.equals(ScoreSortBuilder.NAME)) {\n+ sort(SortBuilders.scoreSort().order(order));\n+ }\n sort(SortBuilders.fieldSort(name).order(order));\n return this;\n }\n@@ -133,46 +133,38 @@ public TopHitsAggregatorBuilder sort(String name) {\n if (name == null) {\n throw new IllegalArgumentException(\"sort [name] must not be null: [\" + name + \"]\");\n }\n+ if (name.equals(ScoreSortBuilder.NAME)) {\n+ sort(SortBuilders.scoreSort());\n+ }\n sort(SortBuilders.fieldSort(name));\n return this;\n }\n \n /**\n * Adds a sort builder.\n */\n- public TopHitsAggregatorBuilder sort(SortBuilder sort) {\n+ public TopHitsAggregatorBuilder sort(SortBuilder<?> sort) {\n if (sort == null) {\n throw new IllegalArgumentException(\"[sort] must not be null: [\" + name + \"]\");\n }\n- try {\n- if (sorts == null) {\n+ if (sorts == null) {\n sorts = new ArrayList<>();\n- }\n- // NORELEASE when sort has been refactored and made writeable\n- // add the sortBuilcer to the List directly instead of\n- // serialising to XContent\n- XContentBuilder builder = XContentFactory.jsonBuilder();\n- builder.startObject();\n- sort.toXContent(builder, EMPTY_PARAMS);\n- builder.endObject();\n- sorts.add(builder.bytes());\n- } catch (IOException e) {\n- throw new RuntimeException(e);\n }\n+ sorts.add(sort);\n return this;\n }\n \n /**\n * Adds a sort builder.\n 
*/\n- public TopHitsAggregatorBuilder sorts(List<BytesReference> sorts) {\n+ public TopHitsAggregatorBuilder sorts(List<SortBuilder<?>> sorts) {\n if (sorts == null) {\n throw new IllegalArgumentException(\"[sorts] must not be null: [\" + name + \"]\");\n }\n if (this.sorts == null) {\n this.sorts = new ArrayList<>();\n }\n- for (BytesReference sort : sorts) {\n+ for (SortBuilder<?> sort : sorts) {\n this.sorts.add(sort);\n }\n return this;\n@@ -181,7 +173,7 @@ public TopHitsAggregatorBuilder sorts(List<BytesReference> sorts) {\n /**\n * Gets the bytes representing the sort builders for this request.\n */\n- public List<BytesReference> sorts() {\n+ public List<SortBuilder<?>> sorts() {\n return sorts;\n }\n \n@@ -509,10 +501,8 @@ protected XContentBuilder internalXContent(XContentBuilder builder, Params param\n }\n if (sorts != null) {\n builder.startArray(SearchSourceBuilder.SORT_FIELD.getPreferredName());\n- for (BytesReference sort : sorts) {\n- XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(sort);\n- parser.nextToken();\n- builder.copyCurrentStructure(parser);\n+ for (SortBuilder<?> sort : sorts) {\n+ sort.toXContent(builder, params);\n }\n builder.endArray();\n }\n@@ -562,9 +552,9 @@ protected TopHitsAggregatorBuilder doReadFrom(String name, StreamInput in) throw\n factory.size = in.readVInt();\n if (in.readBoolean()) {\n int size = in.readVInt();\n- List<BytesReference> sorts = new ArrayList<>();\n+ List<SortBuilder<?>> sorts = new ArrayList<>();\n for (int i = 0; i < size; i++) {\n- sorts.add(in.readBytesReference());\n+ sorts.add(in.readSortBuilder());\n }\n factory.sorts = sorts;\n }\n@@ -612,8 +602,8 @@ protected void doWriteTo(StreamOutput out) throws IOException {\n out.writeBoolean(hasSorts);\n if (hasSorts) {\n out.writeVInt(sorts.size());\n- for (BytesReference sort : sorts) {\n- out.writeBytesReference(sort);\n+ for (SortBuilder<?> sort : sorts) {\n+ out.writeSortBuilder(sort);\n }\n }\n out.writeBoolean(trackScores);", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorBuilder.java", "status": "modified" }, { "diff": "@@ -19,12 +19,7 @@\n \n package org.elasticsearch.search.aggregations.metrics.tophits;\n \n-import org.elasticsearch.common.ParsingException;\n-import org.elasticsearch.common.bytes.BytesReference;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.xcontent.XContentFactory;\n-import org.elasticsearch.common.xcontent.XContentLocation;\n-import org.elasticsearch.common.xcontent.XContentParser;\n+import org.apache.lucene.search.Sort;\n import org.elasticsearch.script.ScriptContext;\n import org.elasticsearch.script.SearchScript;\n import org.elasticsearch.search.aggregations.Aggregator;\n@@ -35,35 +30,35 @@\n import org.elasticsearch.search.aggregations.support.AggregationContext;\n import org.elasticsearch.search.builder.SearchSourceBuilder.ScriptField;\n import org.elasticsearch.search.fetch.fielddata.FieldDataFieldsContext;\n-import org.elasticsearch.search.fetch.fielddata.FieldDataFieldsFetchSubPhase;\n import org.elasticsearch.search.fetch.fielddata.FieldDataFieldsContext.FieldDataField;\n+import org.elasticsearch.search.fetch.fielddata.FieldDataFieldsFetchSubPhase;\n import org.elasticsearch.search.fetch.source.FetchSourceContext;\n import org.elasticsearch.search.highlight.HighlightBuilder;\n import org.elasticsearch.search.internal.SubSearchContext;\n-import org.elasticsearch.search.sort.SortParseElement;\n+import 
org.elasticsearch.search.sort.SortBuilder;\n \n import java.io.IOException;\n import java.util.Collections;\n import java.util.List;\n import java.util.Map;\n+import java.util.Optional;\n \n public class TopHitsAggregatorFactory extends AggregatorFactory<TopHitsAggregatorFactory> {\n \n- private static final SortParseElement sortParseElement = new SortParseElement();\n private final int from;\n private final int size;\n private final boolean explain;\n private final boolean version;\n private final boolean trackScores;\n- private final List<BytesReference> sorts;\n+ private final List<SortBuilder<?>> sorts;\n private final HighlightBuilder highlightBuilder;\n private final List<String> fieldNames;\n private final List<String> fieldDataFields;\n private final List<ScriptField> scriptFields;\n private final FetchSourceContext fetchSourceContext;\n \n public TopHitsAggregatorFactory(String name, Type type, int from, int size, boolean explain, boolean version, boolean trackScores,\n- List<BytesReference> sorts, HighlightBuilder highlightBuilder, List<String> fieldNames, List<String> fieldDataFields,\n+ List<SortBuilder<?>> sorts, HighlightBuilder highlightBuilder, List<String> fieldNames, List<String> fieldDataFields,\n List<ScriptField> scriptFields, FetchSourceContext fetchSourceContext, AggregationContext context, AggregatorFactory<?> parent,\n AggregatorFactories.Builder subFactories, Map<String, Object> metaData) throws IOException {\n super(name, type, context, parent, subFactories, metaData);\n@@ -90,27 +85,9 @@ public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBu\n subSearchContext.from(from);\n subSearchContext.size(size);\n if (sorts != null) {\n- XContentParser completeSortParser = null;\n- try {\n- XContentBuilder completeSortBuilder = XContentFactory.jsonBuilder();\n- completeSortBuilder.startObject();\n- completeSortBuilder.startArray(\"sort\");\n- for (BytesReference sort : sorts) {\n- XContentParser parser = XContentFactory.xContent(sort).createParser(sort);\n- parser.nextToken();\n- completeSortBuilder.copyCurrentStructure(parser);\n- }\n- completeSortBuilder.endArray();\n- completeSortBuilder.endObject();\n- BytesReference completeSortBytes = completeSortBuilder.bytes();\n- completeSortParser = XContentFactory.xContent(completeSortBytes).createParser(completeSortBytes);\n- completeSortParser.nextToken();\n- completeSortParser.nextToken();\n- completeSortParser.nextToken();\n- sortParseElement.parse(completeSortParser, subSearchContext);\n- } catch (Exception e) {\n- XContentLocation location = completeSortParser != null ? 
completeSortParser.getTokenLocation() : null;\n- throw new ParsingException(location, \"failed to parse sort source in aggregation [\" + name + \"]\", e);\n+ Optional<Sort> optionalSort = SortBuilder.buildSort(sorts, subSearchContext.getQueryShardContext());\n+ if (optionalSort.isPresent()) {\n+ subSearchContext.sort(optionalSort.get());\n }\n }\n if (fieldNames != null) {", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorFactory.java", "status": "modified" }, { "diff": "@@ -19,9 +19,6 @@\n package org.elasticsearch.search.aggregations.metrics.tophits;\n \n import org.elasticsearch.common.ParsingException;\n-import org.elasticsearch.common.bytes.BytesReference;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.script.Script;\n@@ -30,6 +27,8 @@\n import org.elasticsearch.search.builder.SearchSourceBuilder.ScriptField;\n import org.elasticsearch.search.fetch.source.FetchSourceContext;\n import org.elasticsearch.search.highlight.HighlightBuilder;\n+import org.elasticsearch.search.sort.SortBuilder;\n+\n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.List;\n@@ -124,9 +123,7 @@ public TopHitsAggregatorBuilder parse(String aggregationName, XContentParser par\n } else if (context.parseFieldMatcher().match(currentFieldName, SearchSourceBuilder.HIGHLIGHT_FIELD)) {\n factory.highlighter(HighlightBuilder.PROTOTYPE.fromXContent(context));\n } else if (context.parseFieldMatcher().match(currentFieldName, SearchSourceBuilder.SORT_FIELD)) {\n- List<BytesReference> sorts = new ArrayList<>();\n- XContentBuilder xContentBuilder = XContentFactory.jsonBuilder().copyCurrentStructure(parser);\n- sorts.add(xContentBuilder.bytes());\n+ List<SortBuilder<?>> sorts = SortBuilder.fromXContent(context);\n factory.sorts(sorts);\n } else {\n throw new ParsingException(parser.getTokenLocation(), \"Unknown key for a \" + token + \" in [\" + currentFieldName + \"].\",\n@@ -157,11 +154,7 @@ public TopHitsAggregatorBuilder parse(String aggregationName, XContentParser par\n }\n factory.fieldDataFields(fieldDataFields);\n } else if (context.parseFieldMatcher().match(currentFieldName, SearchSourceBuilder.SORT_FIELD)) {\n- List<BytesReference> sorts = new ArrayList<>();\n- while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n- XContentBuilder xContentBuilder = XContentFactory.jsonBuilder().copyCurrentStructure(parser);\n- sorts.add(xContentBuilder.bytes());\n- }\n+ List<SortBuilder<?>> sorts = SortBuilder.fromXContent(context);\n factory.sorts(sorts);\n } else if (context.parseFieldMatcher().match(currentFieldName, SearchSourceBuilder._SOURCE_FIELD)) {\n factory.fetchSource(FetchSourceContext.parse(parser, context));", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsParser.java", "status": "modified" }, { "diff": "@@ -52,6 +52,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.rescore.RescoreBuilder;\n import org.elasticsearch.search.searchafter.SearchAfterBuilder;\n+import org.elasticsearch.search.sort.ScoreSortBuilder;\n import org.elasticsearch.search.sort.SortBuilder;\n import org.elasticsearch.search.sort.SortBuilders;\n import org.elasticsearch.search.sort.SortOrder;\n@@ -139,7 +140,7 @@ public static HighlightBuilder 
highlight() {\n \n private Boolean version;\n \n- private List<BytesReference> sorts;\n+ private List<SortBuilder<?>> sorts;\n \n private boolean trackScores = false;\n \n@@ -336,6 +337,9 @@ public int terminateAfter() {\n * The sort ordering\n */\n public SearchSourceBuilder sort(String name, SortOrder order) {\n+ if (name.equals(ScoreSortBuilder.NAME)) {\n+ return sort(SortBuilders.scoreSort().order(order));\n+ }\n return sort(SortBuilders.fieldSort(name).order(order));\n }\n \n@@ -346,32 +350,27 @@ public SearchSourceBuilder sort(String name, SortOrder order) {\n * The name of the field to sort by\n */\n public SearchSourceBuilder sort(String name) {\n+ if (name.equals(ScoreSortBuilder.NAME)) {\n+ return sort(SortBuilders.scoreSort());\n+ }\n return sort(SortBuilders.fieldSort(name));\n }\n \n /**\n * Adds a sort builder.\n */\n- public SearchSourceBuilder sort(SortBuilder sort) {\n- try {\n+ public SearchSourceBuilder sort(SortBuilder<?> sort) {\n if (sorts == null) {\n sorts = new ArrayList<>();\n }\n- XContentBuilder builder = XContentFactory.jsonBuilder();\n- builder.startObject();\n- sort.toXContent(builder, EMPTY_PARAMS);\n- builder.endObject();\n- sorts.add(builder.bytes());\n+ sorts.add(sort);\n return this;\n- } catch (IOException e) {\n- throw new RuntimeException(e);\n- }\n }\n \n /**\n * Gets the bytes representing the sort builders for this request.\n */\n- public List<BytesReference> sorts() {\n+ public List<SortBuilder<?>> sorts() {\n return sorts;\n }\n \n@@ -907,9 +906,7 @@ public void parseXContent(XContentParser parser, QueryParseContext context, Aggr\n } else if (context.parseFieldMatcher().match(currentFieldName, SUGGEST_FIELD)) {\n suggestBuilder = SuggestBuilder.fromXContent(context, suggesters);\n } else if (context.parseFieldMatcher().match(currentFieldName, SORT_FIELD)) {\n- sorts = new ArrayList<>();\n- XContentBuilder xContentBuilder = XContentFactory.jsonBuilder().copyCurrentStructure(parser);\n- sorts.add(xContentBuilder.bytes());\n+ sorts = new ArrayList<>(SortBuilder.fromXContent(context));\n } else if (context.parseFieldMatcher().match(currentFieldName, EXT_FIELD)) {\n XContentBuilder xContentBuilder = XContentFactory.jsonBuilder().copyCurrentStructure(parser);\n ext = xContentBuilder.bytes();\n@@ -940,11 +937,7 @@ public void parseXContent(XContentParser parser, QueryParseContext context, Aggr\n }\n }\n } else if (context.parseFieldMatcher().match(currentFieldName, SORT_FIELD)) {\n- sorts = new ArrayList<>();\n- while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n- XContentBuilder xContentBuilder = XContentFactory.jsonBuilder().copyCurrentStructure(parser);\n- sorts.add(xContentBuilder.bytes());\n- }\n+ sorts = new ArrayList<>(SortBuilder.fromXContent(context));\n } else if (context.parseFieldMatcher().match(currentFieldName, RESCORE_FIELD)) {\n rescoreBuilders = new ArrayList<>();\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n@@ -1057,10 +1050,8 @@ public void innerToXContent(XContentBuilder builder, Params params) throws IOExc\n \n if (sorts != null) {\n builder.startArray(SORT_FIELD.getPreferredName());\n- for (BytesReference sort : sorts) {\n- XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(sort);\n- parser.nextToken();\n- builder.copyCurrentStructure(parser);\n+ for (SortBuilder<?> sort : sorts) {\n+ sort.toXContent(builder, params);\n }\n builder.endArray();\n }\n@@ -1266,9 +1257,9 @@ public SearchSourceBuilder readFrom(StreamInput in) throws IOException {\n 
builder.size = in.readVInt();\n if (in.readBoolean()) {\n int size = in.readVInt();\n- List<BytesReference> sorts = new ArrayList<>();\n+ List<SortBuilder<?>> sorts = new ArrayList<>();\n for (int i = 0; i < size; i++) {\n- sorts.add(in.readBytesReference());\n+ sorts.add(in.readSortBuilder());\n }\n builder.sorts = sorts;\n }\n@@ -1382,8 +1373,8 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeBoolean(hasSorts);\n if (hasSorts) {\n out.writeVInt(sorts.size());\n- for (BytesReference sort : sorts) {\n- out.writeBytesReference(sort);\n+ for (SortBuilder<?> sort : sorts) {\n+ out.writeSortBuilder(sort);\n }\n }\n boolean hasStats = stats != null;", "filename": "core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java", "status": "modified" }, { "diff": "@@ -35,7 +35,6 @@\n import org.elasticsearch.search.internal.InternalSearchHit;\n import org.elasticsearch.search.internal.InternalSearchHits;\n import org.elasticsearch.search.internal.SearchContext;\n-import org.elasticsearch.search.sort.SortParseElement;\n \n import java.io.IOException;\n import java.util.HashMap;\n@@ -51,8 +50,8 @@ public class InnerHitsFetchSubPhase implements FetchSubPhase {\n private FetchPhase fetchPhase;\n \n @Inject\n- public InnerHitsFetchSubPhase(SortParseElement sortParseElement, FetchSourceParseElement sourceParseElement, HighlighterParseElement highlighterParseElement, FieldDataFieldsParseElement fieldDataFieldsParseElement, ScriptFieldsParseElement scriptFieldsParseElement) {\n- parseElements = singletonMap(\"inner_hits\", new InnerHitsParseElement(sortParseElement, sourceParseElement, highlighterParseElement,\n+ public InnerHitsFetchSubPhase(FetchSourceParseElement sourceParseElement, HighlighterParseElement highlighterParseElement, FieldDataFieldsParseElement fieldDataFieldsParseElement, ScriptFieldsParseElement scriptFieldsParseElement) {\n+ parseElements = singletonMap(\"inner_hits\", new InnerHitsParseElement(sourceParseElement, highlighterParseElement,\n fieldDataFieldsParseElement, scriptFieldsParseElement));\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/fetch/innerhits/InnerHitsFetchSubPhase.java", "status": "modified" }, { "diff": "@@ -32,7 +32,6 @@\n import org.elasticsearch.search.highlight.HighlighterParseElement;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.internal.SubSearchContext;\n-import org.elasticsearch.search.sort.SortParseElement;\n \n import java.util.HashMap;\n import java.util.Map;\n@@ -43,14 +42,12 @@\n */\n public class InnerHitsParseElement implements SearchParseElement {\n \n- private final SortParseElement sortParseElement;\n private final FetchSourceParseElement sourceParseElement;\n private final HighlighterParseElement highlighterParseElement;\n private final FieldDataFieldsParseElement fieldDataFieldsParseElement;\n private final ScriptFieldsParseElement scriptFieldsParseElement;\n \n- public InnerHitsParseElement(SortParseElement sortParseElement, FetchSourceParseElement sourceParseElement, HighlighterParseElement highlighterParseElement, FieldDataFieldsParseElement fieldDataFieldsParseElement, ScriptFieldsParseElement scriptFieldsParseElement) {\n- this.sortParseElement = sortParseElement;\n+ public InnerHitsParseElement(FetchSourceParseElement sourceParseElement, HighlighterParseElement highlighterParseElement, FieldDataFieldsParseElement fieldDataFieldsParseElement, ScriptFieldsParseElement scriptFieldsParseElement) {\n this.sourceParseElement = 
sourceParseElement;\n this.highlighterParseElement = highlighterParseElement;\n this.fieldDataFieldsParseElement = fieldDataFieldsParseElement;\n@@ -184,10 +181,10 @@ private ParseResult parseSubSearchContext(SearchContext searchContext, QueryShar\n } else if (\"inner_hits\".equals(fieldName)) {\n childInnerHits = parseInnerHits(parser, context, searchContext);\n } else {\n- parseCommonInnerHitOptions(parser, token, fieldName, subSearchContext, sortParseElement, sourceParseElement, highlighterParseElement, scriptFieldsParseElement, fieldDataFieldsParseElement);\n+ parseCommonInnerHitOptions(parser, token, fieldName, subSearchContext, sourceParseElement, highlighterParseElement, scriptFieldsParseElement, fieldDataFieldsParseElement);\n }\n } else {\n- parseCommonInnerHitOptions(parser, token, fieldName, subSearchContext, sortParseElement, sourceParseElement, highlighterParseElement, scriptFieldsParseElement, fieldDataFieldsParseElement);\n+ parseCommonInnerHitOptions(parser, token, fieldName, subSearchContext, sourceParseElement, highlighterParseElement, scriptFieldsParseElement, fieldDataFieldsParseElement);\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/fetch/innerhits/InnerHitsParseElement.java", "status": "modified" }, { "diff": "@@ -58,7 +58,6 @@\n import org.elasticsearch.search.profile.Profiler;\n import org.elasticsearch.search.rescore.RescorePhase;\n import org.elasticsearch.search.rescore.RescoreSearchContext;\n-import org.elasticsearch.search.sort.SortParseElement;\n import org.elasticsearch.search.sort.TrackScoresParseElement;\n import org.elasticsearch.search.suggest.SuggestPhase;\n \n@@ -98,7 +97,6 @@ public QueryPhase(AggregationPhase aggregationPhase, SuggestPhase suggestPhase,\n parseElements.put(\"query\", new QueryParseElement());\n parseElements.put(\"post_filter\", new PostFilterParseElement());\n parseElements.put(\"postFilter\", new PostFilterParseElement());\n- parseElements.put(\"sort\", new SortParseElement());\n parseElements.put(\"trackScores\", new TrackScoresParseElement());\n parseElements.put(\"track_scores\", new TrackScoresParseElement());\n parseElements.put(\"min_score\", new MinScoreParseElement());", "filename": "core/src/main/java/org/elasticsearch/search/query/QueryPhase.java", "status": "modified" }, { "diff": "@@ -20,12 +20,10 @@\n package org.elasticsearch.search.sort;\n \n import org.apache.lucene.search.SortField;\n-import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n-import org.elasticsearch.common.lucene.BytesRefs;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.fielddata.IndexFieldData;\n@@ -44,7 +42,7 @@\n * A sort builder to sort based on a document field.\n */\n public class FieldSortBuilder extends SortBuilder<FieldSortBuilder> {\n- static final FieldSortBuilder PROTOTYPE = new FieldSortBuilder(\"\");\n+ public static final FieldSortBuilder PROTOTYPE = new FieldSortBuilder(\"_na_\");\n public static final String NAME = \"field_sort\";\n public static final ParseField NESTED_PATH = new ParseField(\"nested_path\");\n public static final ParseField NESTED_FILTER = new ParseField(\"nested_filter\");\n@@ -109,19 +107,12 @@ public String getFieldName() {\n * <tt>_first</tt> to sort missing last or first respectively.\n 
*/\n public FieldSortBuilder missing(Object missing) {\n- if (missing instanceof String) {\n- this.missing = BytesRefs.toBytesRef(missing);\n- } else {\n- this.missing = missing;\n- }\n+ this.missing = missing;\n return this;\n }\n \n /** Returns the value used when a field is missing in a doc. */\n public Object missing() {\n- if (missing instanceof BytesRef) {\n- return ((BytesRef) missing).utf8ToString();\n- }\n return missing;\n }\n \n@@ -208,14 +199,11 @@ public String getNestedPath() {\n \n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n+ builder.startObject();\n builder.startObject(fieldName);\n builder.field(ORDER_FIELD.getPreferredName(), order);\n if (missing != null) {\n- if (missing instanceof BytesRef) {\n- builder.field(MISSING.getPreferredName(), ((BytesRef) missing).utf8ToString());\n- } else {\n- builder.field(MISSING.getPreferredName(), missing);\n- }\n+ builder.field(MISSING.getPreferredName(), missing);\n }\n if (unmappedType != null) {\n builder.field(UNMAPPED_TYPE.getPreferredName(), unmappedType);\n@@ -230,6 +218,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.field(NESTED_PATH.getPreferredName(), nestedPath);\n }\n builder.endObject();\n+ builder.endObject();\n return builder;\n }\n \n@@ -376,7 +365,7 @@ public FieldSortBuilder fromXContent(QueryParseContext context, String fieldName\n if (context.parseFieldMatcher().match(currentFieldName, NESTED_PATH)) {\n nestedPath = parser.text();\n } else if (context.parseFieldMatcher().match(currentFieldName, MISSING)) {\n- missing = parser.objectBytes();\n+ missing = parser.objectText();\n } else if (context.parseFieldMatcher().match(currentFieldName, REVERSE)) {\n if (parser.booleanValue()) {\n order = SortOrder.DESC;", "filename": "core/src/main/java/org/elasticsearch/search/sort/FieldSortBuilder.java", "status": "modified" }, { "diff": "@@ -74,7 +74,7 @@ public class GeoDistanceSortBuilder extends SortBuilder<GeoDistanceSortBuilder>\n public static final ParseField NESTED_PATH_FIELD = new ParseField(\"nested_path\");\n public static final ParseField NESTED_FILTER_FIELD = new ParseField(\"nested_filter\");\n \n- static final GeoDistanceSortBuilder PROTOTYPE = new GeoDistanceSortBuilder(\"\", -1, -1);\n+ public static final GeoDistanceSortBuilder PROTOTYPE = new GeoDistanceSortBuilder(\"_na_\", -1, -1);\n \n private final String fieldName;\n private final List<GeoPoint> points = new ArrayList<>();\n@@ -300,6 +300,7 @@ public boolean ignoreMalformed() {\n \n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n+ builder.startObject();\n builder.startObject(NAME);\n \n builder.startArray(fieldName);\n@@ -325,6 +326,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.field(COERCE_FIELD.getPreferredName(), coerce);\n builder.field(IGNORE_MALFORMED_FIELD.getPreferredName(), ignoreMalformed);\n \n+ builder.endObject();\n builder.endObject();\n return builder;\n }", "filename": "core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java", "status": "modified" }, { "diff": "@@ -39,7 +39,7 @@\n public class ScoreSortBuilder extends SortBuilder<ScoreSortBuilder> {\n \n public static final String NAME = \"_score\";\n- static final ScoreSortBuilder PROTOTYPE = new ScoreSortBuilder();\n+ public static final ScoreSortBuilder PROTOTYPE = new ScoreSortBuilder();\n public static final ParseField REVERSE_FIELD = new 
ParseField(\"reverse\");\n public static final ParseField ORDER_FIELD = new ParseField(\"order\");\n private static final SortField SORT_SCORE = new SortField(null, SortField.Type.SCORE);\n@@ -53,9 +53,11 @@ public ScoreSortBuilder() {\n \n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n+ builder.startObject();\n builder.startObject(NAME);\n builder.field(ORDER_FIELD.getPreferredName(), order);\n builder.endObject();\n+ builder.endObject();\n return builder;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/sort/ScoreSortBuilder.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@\n public class ScriptSortBuilder extends SortBuilder<ScriptSortBuilder> {\n \n public static final String NAME = \"_script\";\n- static final ScriptSortBuilder PROTOTYPE = new ScriptSortBuilder(new Script(\"_na_\"), ScriptSortType.STRING);\n+ public static final ScriptSortBuilder PROTOTYPE = new ScriptSortBuilder(new Script(\"_na_\"), ScriptSortType.STRING);\n public static final ParseField TYPE_FIELD = new ParseField(\"type\");\n public static final ParseField SCRIPT_FIELD = new ParseField(\"script\");\n public static final ParseField SORTMODE_FIELD = new ParseField(\"mode\");\n@@ -179,6 +179,7 @@ public String getNestedPath() {\n \n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params builderParams) throws IOException {\n+ builder.startObject();\n builder.startObject(NAME);\n builder.field(SCRIPT_FIELD.getPreferredName(), script);\n builder.field(TYPE_FIELD.getPreferredName(), type);\n@@ -193,6 +194,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params builderParams)\n builder.field(NESTED_FILTER_FIELD.getPreferredName(), nestedFilter, builderParams);\n }\n builder.endObject();\n+ builder.endObject();\n return builder;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/sort/ScriptSortBuilder.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.search.sort;\n \n import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.Sort;\n import org.apache.lucene.search.SortField;\n import org.apache.lucene.search.join.BitSetProducer;\n import org.elasticsearch.action.support.ToXContentToBytes;\n@@ -34,13 +35,15 @@\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.query.QueryShardException;\n+import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n import java.util.Objects;\n+import java.util.Optional;\n \n import static java.util.Collections.unmodifiableMap;\n \n@@ -157,6 +160,41 @@ private static void parseCompoundSortField(XContentParser parser, QueryParseCont\n }\n }\n \n+ public static void parseSort(XContentParser parser, SearchContext context) throws IOException {\n+ QueryParseContext parseContext = context.getQueryShardContext().parseContext();\n+ parseContext.reset(parser);\n+ Optional<Sort> sortOptional = buildSort(SortBuilder.fromXContent(parseContext), context.getQueryShardContext());\n+ if (sortOptional.isPresent()) {\n+ context.sort(sortOptional.get());\n+ }\n+ }\n+\n+ public static Optional<Sort> buildSort(List<SortBuilder<?>> sortBuilders, QueryShardContext context) throws IOException {\n+ List<SortField> sortFields = new ArrayList<>(sortBuilders.size());\n+ for (SortBuilder<?> builder 
: sortBuilders) {\n+ sortFields.add(builder.build(context));\n+ }\n+ if (!sortFields.isEmpty()) {\n+ // optimize if we just sort on score non reversed, we don't really\n+ // need sorting\n+ boolean sort;\n+ if (sortFields.size() > 1) {\n+ sort = true;\n+ } else {\n+ SortField sortField = sortFields.get(0);\n+ if (sortField.getType() == SortField.Type.SCORE && !sortField.getReverse()) {\n+ sort = false;\n+ } else {\n+ sort = true;\n+ }\n+ }\n+ if (sort) {\n+ return Optional.of(new Sort(sortFields.toArray(new SortField[sortFields.size()])));\n+ }\n+ }\n+ return Optional.empty();\n+ }\n+\n protected static Nested resolveNested(QueryShardContext context, String nestedPath, QueryBuilder<?> nestedFilter) throws IOException {\n Nested nested = null;\n if (nestedPath != null) {", "filename": "core/src/main/java/org/elasticsearch/search/sort/SortBuilder.java", "status": "modified" }, { "diff": "@@ -74,6 +74,8 @@\n import org.elasticsearch.search.highlight.HighlightBuilderTests;\n import org.elasticsearch.search.rescore.QueryRescoreBuilderTests;\n import org.elasticsearch.search.searchafter.SearchAfterBuilder;\n+import org.elasticsearch.search.sort.FieldSortBuilder;\n+import org.elasticsearch.search.sort.ScoreSortBuilder;\n import org.elasticsearch.search.sort.ScriptSortBuilder.ScriptSortType;\n import org.elasticsearch.search.sort.SortBuilders;\n import org.elasticsearch.search.sort.SortOrder;\n@@ -551,7 +553,7 @@ public void testParseSort() throws IOException {\n SearchSourceBuilder searchSourceBuilder = SearchSourceBuilder.parseSearchSource(parser, createParseContext(parser),\n aggParsers, suggesters);\n assertEquals(1, searchSourceBuilder.sorts().size());\n- assertEquals(\"{\\\"foo\\\":{\\\"order\\\":\\\"asc\\\"}}\", searchSourceBuilder.sorts().get(0).toUtf8());\n+ assertEquals(new FieldSortBuilder(\"foo\"), searchSourceBuilder.sorts().get(0));\n }\n }\n \n@@ -567,11 +569,11 @@ public void testParseSort() throws IOException {\n SearchSourceBuilder searchSourceBuilder = SearchSourceBuilder.parseSearchSource(parser, createParseContext(parser),\n aggParsers, suggesters);\n assertEquals(5, searchSourceBuilder.sorts().size());\n- assertEquals(\"{\\\"post_date\\\":{\\\"order\\\":\\\"asc\\\"}}\", searchSourceBuilder.sorts().get(0).toUtf8());\n- assertEquals(\"\\\"user\\\"\", searchSourceBuilder.sorts().get(1).toUtf8());\n- assertEquals(\"{\\\"name\\\":\\\"desc\\\"}\", searchSourceBuilder.sorts().get(2).toUtf8());\n- assertEquals(\"{\\\"age\\\":\\\"desc\\\"}\", searchSourceBuilder.sorts().get(3).toUtf8());\n- assertEquals(\"\\\"_score\\\"\", searchSourceBuilder.sorts().get(4).toUtf8());\n+ assertEquals(new FieldSortBuilder(\"post_date\"), searchSourceBuilder.sorts().get(0));\n+ assertEquals(new FieldSortBuilder(\"user\"), searchSourceBuilder.sorts().get(1));\n+ assertEquals(new FieldSortBuilder(\"name\").order(SortOrder.DESC), searchSourceBuilder.sorts().get(2));\n+ assertEquals(new FieldSortBuilder(\"age\").order(SortOrder.DESC), searchSourceBuilder.sorts().get(3));\n+ assertEquals(new ScoreSortBuilder(), searchSourceBuilder.sorts().get(4));\n }\n }\n }", "filename": "core/src/test/java/org/elasticsearch/search/builder/SearchSourceBuilderTests.java", "status": "modified" }, { "diff": "@@ -66,7 +66,6 @@\n import java.io.IOException;\n import java.nio.file.Path;\n import java.util.Collections;\n-import java.util.List;\n import java.util.Map;\n \n import static org.hamcrest.Matchers.equalTo;\n@@ -78,7 +77,6 @@ public abstract class AbstractSortTestCase<T extends SortBuilder<T>> extends EST\n \n private 
static final int NUMBER_OF_TESTBUILDERS = 20;\n static IndicesQueriesRegistry indicesQueriesRegistry;\n- private static SortParseElement parseElement = new SortParseElement();\n private static ScriptService scriptService;\n \n @BeforeClass\n@@ -131,10 +129,7 @@ public void testFromXContent() throws IOException {\n if (randomBoolean()) {\n builder.prettyPrint();\n }\n- builder.startObject();\n testItem.toXContent(builder, ToXContent.EMPTY_PARAMS);\n- builder.endObject();\n-\n XContentParser itemParser = XContentHelper.createParser(builder.bytes());\n itemParser.nextToken();\n \n@@ -163,24 +158,12 @@ public void testBuildSortField() throws IOException {\n for (int runs = 0; runs < NUMBER_OF_TESTBUILDERS; runs++) {\n T sortBuilder = createTestItem();\n SortField sortField = sortBuilder.build(mockShardContext);\n- XContentBuilder builder = XContentFactory.contentBuilder(randomFrom(XContentType.values()));\n- if (randomBoolean()) {\n- builder.prettyPrint();\n- }\n- builder.startObject();\n- sortBuilder.toXContent(builder, ToXContent.EMPTY_PARAMS);\n- builder.endObject();\n- XContentParser parser = XContentHelper.createParser(builder.bytes());\n- parser.nextToken();\n- List<SortField> sortFields = parseElement.parse(parser, mockShardContext);\n- assertEquals(1, sortFields.size());\n- SortField sortFieldOldStyle = sortFields.get(0);\n- assertEquals(sortFieldOldStyle.getField(), sortField.getField());\n- assertEquals(sortFieldOldStyle.getReverse(), sortField.getReverse());\n- assertEquals(sortFieldOldStyle.getType(), sortField.getType());\n+ sortFieldAssertions(sortBuilder, sortField);\n }\n }\n \n+ protected abstract void sortFieldAssertions(T builder, SortField sortField) throws IOException;\n+\n /**\n * Test serialization and deserialization of the test sort.\n */", "filename": "core/src/test/java/org/elasticsearch/search/sort/AbstractSortTestCase.java", "status": "modified" }, { "diff": "@@ -19,6 +19,8 @@\n \n package org.elasticsearch.search.sort;\n \n+import org.apache.lucene.search.SortField;\n+\n import java.io.IOException;\n \n public class FieldSortBuilderTests extends AbstractSortTestCase<FieldSortBuilder> {\n@@ -29,7 +31,7 @@ protected FieldSortBuilder createTestItem() {\n }\n \n public static FieldSortBuilder randomFieldSortBuilder() {\n- String fieldName = rarely() ? SortParseElement.DOC_FIELD_NAME : randomAsciiOfLengthBetween(1, 10);\n+ String fieldName = rarely() ? FieldSortBuilder.DOC_FIELD_NAME : randomAsciiOfLengthBetween(1, 10);\n FieldSortBuilder builder = new FieldSortBuilder(fieldName);\n if (randomBoolean()) {\n builder.order(RandomSortDataGenerator.order(null));\n@@ -86,4 +88,19 @@ protected FieldSortBuilder mutate(FieldSortBuilder original) throws IOException\n }\n return mutated;\n }\n+\n+ @Override\n+ protected void sortFieldAssertions(FieldSortBuilder builder, SortField sortField) throws IOException {\n+ SortField.Type expectedType;\n+ if (builder.getFieldName().equals(FieldSortBuilder.DOC_FIELD_NAME)) {\n+ expectedType = SortField.Type.DOC;\n+ } else {\n+ expectedType = SortField.Type.CUSTOM;\n+ }\n+ assertEquals(expectedType, sortField.getType());\n+ assertEquals(builder.order() == SortOrder.ASC ? 
false : true, sortField.getReverse());\n+ if (expectedType == SortField.Type.CUSTOM) {\n+ assertEquals(builder.getFieldName(), sortField.getField());\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/sort/FieldSortBuilderTests.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.search.sort;\n \n \n+import org.apache.lucene.search.SortField;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.geo.GeoDistance;\n import org.elasticsearch.common.geo.GeoPoint;\n@@ -182,6 +183,13 @@ protected GeoDistanceSortBuilder mutate(GeoDistanceSortBuilder original) throws\n return result;\n }\n \n+ @Override\n+ protected void sortFieldAssertions(GeoDistanceSortBuilder builder, SortField sortField) throws IOException {\n+ assertEquals(SortField.Type.CUSTOM, sortField.getType());\n+ assertEquals(builder.order() == SortOrder.ASC ? false : true, sortField.getReverse());\n+ assertEquals(builder.fieldName(), sortField.getField());\n+ }\n+\n public void testSortModeSumIsRejectedInSetter() {\n GeoDistanceSortBuilder builder = new GeoDistanceSortBuilder(\"testname\", -1, -1);\n GeoPoint point = RandomGeoGenerator.randomPoint(getRandom());", "filename": "core/src/test/java/org/elasticsearch/search/sort/GeoDistanceSortBuilderTests.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.search.sort;\n \n \n+import org.apache.lucene.search.SortField;\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -79,4 +80,10 @@ public void testParseOrder() throws IOException {\n ScoreSortBuilder scoreSort = ScoreSortBuilder.PROTOTYPE.fromXContent(context, \"_score\");\n assertEquals(order, scoreSort.order());\n }\n+\n+ @Override\n+ protected void sortFieldAssertions(ScoreSortBuilder builder, SortField sortField) {\n+ assertEquals(SortField.Type.SCORE, sortField.getType());\n+ assertEquals(builder.order() == SortOrder.DESC ? false : true, sortField.getReverse());\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/sort/ScoreSortBuilderTests.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.search.sort;\n \n \n+import org.apache.lucene.search.SortField;\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.settings.Settings;\n@@ -121,8 +122,11 @@ protected ScriptSortBuilder mutate(ScriptSortBuilder original) throws IOExceptio\n return result;\n }\n \n- @Rule\n- public ExpectedException exceptionRule = ExpectedException.none();\n+ @Override\n+ protected void sortFieldAssertions(ScriptSortBuilder builder, SortField sortField) throws IOException {\n+ assertEquals(SortField.Type.CUSTOM, sortField.getType());\n+ assertEquals(builder.order() == SortOrder.ASC ? 
false : true, sortField.getReverse());\n+ }\n \n public void testScriptSortType() {\n // we rely on these ordinals in serialization, so changing them breaks bwc.\n@@ -140,6 +144,9 @@ public void testScriptSortType() {\n assertEquals(ScriptSortType.NUMBER, ScriptSortType.fromString(\"NUMBER\"));\n }\n \n+ @Rule\n+ public ExpectedException exceptionRule = ExpectedException.none();\n+\n public void testScriptSortTypeNull() {\n exceptionRule.expect(NullPointerException.class);\n exceptionRule.expectMessage(\"input string is null\");", "filename": "core/src/test/java/org/elasticsearch/search/sort/ScriptSortBuilderTests.java", "status": "modified" } ] }
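The `buildSort` helper added to `SortBuilder` in the diff above keeps the old optimization of skipping an explicit sort when the only criterion is `_score` in its natural order. A minimal standalone sketch of that rule using plain Lucene types (the class and method names here are illustrative, not the Elasticsearch code):

``` java
import java.util.List;
import java.util.Optional;

import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;

// Sketch of the rule in SortBuilder#buildSort above: a single non-reversed
// "_score" sort is the default Lucene behaviour, so no explicit Sort object
// needs to be built for it.
class ScoreSortOptimizationSketch {

    static Optional<Sort> buildSort(List<SortField> sortFields) {
        if (sortFields.isEmpty()) {
            return Optional.empty();
        }
        if (sortFields.size() == 1) {
            SortField only = sortFields.get(0);
            if (only.getType() == SortField.Type.SCORE && only.getReverse() == false) {
                // scoring order is what Lucene returns anyway
                return Optional.empty();
            }
        }
        return Optional.of(new Sort(sortFields.toArray(new SortField[sortFields.size()])));
    }
}
```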
{ "body": "After upgrading from 1.7 to 2.0 I have got a problem with my more_like_this query, which stopped returning results. \n\nPlaying around with the query I was surprised to know that if I set a \"like\" document \"_index\" to an alias, no results are returned. When set to the real index name matches are found.\n\n```\nGET /_cat/aliases?v&h=alias,index\n\nalias index \ntraits traits_1123 \nanalysis analysis_1123 \ncatalog catalog_1123 \n```\n\n```\nPOST /traits/library/_search\n{\n \"filter\": {\n \"query\": {\n \"more_like_this\": {\n \"min_doc_freq\": 1, \n \"fields\": [\"data.packages\"], \n \"min_term_freq\": 1, \n \"docs\": [\n {\n \"_type\": \"packages\", \n \"_id\": \"1_401f94\", \n \"_index\": \"analysis\"\n }\n ] \n } \n }\n }\n}\n\n{\n \"took\": 12,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 5,\n \"successful\": 5,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 0,\n \"max_score\": null,\n \"hits\": []\n }\n}\n```\n\n```\nPOST /traits_1123/library/_search\n{\n \"filter\": {\n \"query\": {\n \"more_like_this\": {\n \"min_doc_freq\": 1, \n \"fields\": [\"data.packages\"], \n \"min_term_freq\": 1, \n \"docs\": [\n {\n \"_type\": \"packages\", \n \"_id\": \"1_401f94\", \n \"_index\": \"analysis_1123\"\n }\n ] \n } \n }\n }\n}\n\n{\n \"took\": 23,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 5,\n \"successful\": 5,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 12,\n \"max_score\": 1,\n \"hits\": [\n {\n \"_index\": \"traits_1123\",\n \"_type\": \"library\",\n...\n```\n", "comments": [ { "body": "Same here with v2.1.0. The \"more_like_this\" doesn't return values if an alias is provided.\n", "created_at": "2015-11-26T14:45:56Z" }, { "body": "I even looked at the latest code, but found nothing suspicious.. I am far from knowing the code well, but it seems that the semantics of getting the termvectors from the document is the same as in the `_mtermvectors` action, which does work with the same format of document specification, both when using alias and concrete name. We definitely need some attention from the elasticsearch source guru\n", "created_at": "2015-11-26T14:54:45Z" }, { "body": "Rolled back to 1.7 eventually :/\nHopefully this will get fixed soon\n", "created_at": "2015-12-01T09:28:05Z" }, { "body": "I might found the problem at hand, but I need some help to fix it. \n\nAs @leonid-s-usov pointed out, the problem occurs after the termvectors were fetched. \n[The responses from the **MultiTermVectorsRequest** are filtered for the given indices of the client call.](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryBuilder.java#L931)\nAs the responses contain the actual index and _not_ the alias name, every response gets filtered out. \n\nI think this check should also consider the alias of the indices. \nCan someone give me a hint, how to resolve this issue? I'm not sure, where to best fetch the information about the alias -> index mappings. \n\nThanks in advance. \n", "created_at": "2016-02-28T15:23:44Z" }, { "body": "Had a brief discussion with @jpountz who agrees that the decrease in performance is probably negligible. Backported to 2.x 2.3 and 2.2 (e4c20ab, 884ad25 and 4dcdd95). Thanks again @mstockerl !\n", "created_at": "2016-04-20T12:46:25Z" } ], "number": 14944, "title": "more_like_this doesn't understand index alias in doc reference" }
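A simplified sketch of the root cause identified in the comments above: the pre-fix code keyed the requested items by the name the client supplied (an alias), while the multi-termvectors response reports the concrete index name, so a plain set lookup never matches. The string keys and class below are illustrative only, not the actual Elasticsearch `Item` handling:

``` java
import java.util.Collections;
import java.util.Set;

// Keys are plain "index/type/id" strings here: the request is keyed by the
// alias the client used, while the response carries the concrete index, so
// the lookup fails and the "like" document is silently dropped.
public class AliasMismatchSketch {

    public static void main(String[] args) {
        Set<String> requestedItems = Collections.singleton("analysis/packages/1_401f94"); // alias
        String responseItem = "analysis_1123/packages/1_401f94";                          // concrete index

        // Prints "false": the response never matches the requested item,
        // leaving the more_like_this query with no "like" text and zero hits.
        System.out.println(requestedItems.contains(responseItem));
    }
}
```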
{ "body": "Get TermVectorResponses for like and unlike items in separate requests, so we don't have to validate responses afterwards.\n\nRelates to #14944\n", "number": 17204, "review_comments": [ { "body": "You can remove the `hasResponseFromRequest` method - it is not needed anymore. \n", "created_at": "2016-04-19T14:13:57Z" } ], "title": "Alias items are not ignored anymore" }
{ "commits": [ { "message": "Get TermVectorResponses for like and unlike items in separate requests, so we don't have to validate responses afterwards.\n\nRelates to #14944" }, { "message": "#14944 Remove obsolete hasResponseFromRequest method" } ], "files": [ { "diff": "@@ -861,14 +861,14 @@ private Query handleItems(QueryShardContext context, MoreLikeThisQuery mltQuery,\n }\n \n // fetching the items with multi-termvectors API\n- MultiTermVectorsResponse responses = fetchResponse(context.getClient(), likeItems, unlikeItems, SearchContext.current());\n-\n+ MultiTermVectorsResponse likeItemsResponse = fetchResponse(context.getClient(), likeItems);\n // getting the Fields for liked items\n- mltQuery.setLikeText(getFieldsFor(responses, likeItems));\n+ mltQuery.setLikeText(getFieldsFor(likeItemsResponse));\n \n // getting the Fields for unliked items\n if (unlikeItems.length > 0) {\n- org.apache.lucene.index.Fields[] unlikeFields = getFieldsFor(responses, unlikeItems);\n+ MultiTermVectorsResponse unlikeItemsResponse = fetchResponse(context.getClient(), unlikeItems);\n+ org.apache.lucene.index.Fields[] unlikeFields = getFieldsFor(unlikeItemsResponse);\n if (unlikeFields.length > 0) {\n mltQuery.setUnlikeText(unlikeFields);\n }\n@@ -907,30 +907,19 @@ private static void setDefaultIndexTypeFields(QueryShardContext context, Item it\n }\n }\n \n- private MultiTermVectorsResponse fetchResponse(Client client, Item[] likeItems, @Nullable Item[] unlikeItems,\n- SearchContext searchContext) throws IOException {\n+ private MultiTermVectorsResponse fetchResponse(Client client, Item[] items) throws IOException {\n MultiTermVectorsRequest request = new MultiTermVectorsRequest();\n- for (Item item : likeItems) {\n- request.add(item.toTermVectorsRequest());\n- }\n- for (Item item : unlikeItems) {\n+ for (Item item : items) {\n request.add(item.toTermVectorsRequest());\n }\n+\n return client.multiTermVectors(request).actionGet();\n }\n \n- private static Fields[] getFieldsFor(MultiTermVectorsResponse responses, Item[] items) throws IOException {\n+ private static Fields[] getFieldsFor(MultiTermVectorsResponse responses) throws IOException {\n List<Fields> likeFields = new ArrayList<>();\n \n- Set<Item> selectedItems = new HashSet<>();\n- for (Item request : items) {\n- selectedItems.add(new Item(request.index(), request.type(), request.id()));\n- }\n-\n for (MultiTermVectorsItemResponse response : responses) {\n- if (!hasResponseFromRequest(response, selectedItems)) {\n- continue;\n- }\n if (response.isFailed()) {\n continue;\n }\n@@ -943,10 +932,6 @@ private static Fields[] getFieldsFor(MultiTermVectorsResponse responses, Item[]\n return likeFields.toArray(Fields.EMPTY_ARRAY);\n }\n \n- private static boolean hasResponseFromRequest(MultiTermVectorsItemResponse response, Set<Item> selectedItems) {\n- return selectedItems.contains(new Item(response.getIndex(), response.getType(), response.getId()));\n- }\n-\n private static void handleExclude(BooleanQuery.Builder boolQuery, Item[] likeItems) {\n // artificial docs get assigned a random id and should be disregarded\n List<BytesRef> uids = new ArrayList<>();", "filename": "core/src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryBuilder.java", "status": "modified" }, { "diff": "@@ -145,6 +145,32 @@ public void testMoreLikeThisWithAliases() throws Exception {\n assertThat(response.getHits().getAt(0).id(), equalTo(\"3\"));\n }\n \n+ // Issue #14944\n+ public void testMoreLikeThisWithAliasesInLikeDocuments() throws Exception {\n+ String indexName = 
\"foo\";\n+ String aliasName = \"foo_name\";\n+ String typeName = \"bar\";\n+\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"bar\")\n+ .startObject(\"properties\")\n+ .endObject()\n+ .endObject().endObject().string();\n+ client().admin().indices().prepareCreate(indexName).addMapping(typeName, mapping).execute().actionGet();\n+ client().admin().indices().aliases(indexAliasesRequest().addAlias(aliasName, indexName)).actionGet();\n+\n+ assertThat(ensureGreen(), equalTo(ClusterHealthStatus.GREEN));\n+\n+ client().index(indexRequest(indexName).type(typeName).id(\"1\").source(jsonBuilder().startObject().field(\"text\", \"elasticsearch index\").endObject())).actionGet();\n+ client().index(indexRequest(indexName).type(typeName).id(\"2\").source(jsonBuilder().startObject().field(\"text\", \"lucene index\").endObject())).actionGet();\n+ client().index(indexRequest(indexName).type(typeName).id(\"3\").source(jsonBuilder().startObject().field(\"text\", \"elasticsearch index\").endObject())).actionGet();\n+ refresh(indexName);\n+\n+ SearchResponse response = client().prepareSearch().setQuery(\n+ new MoreLikeThisQueryBuilder(null, new Item[] {new Item(aliasName, typeName, \"1\")}).minTermFreq(1).minDocFreq(1)).get();\n+ assertHitCount(response, 2L);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"3\"));\n+ }\n+\n public void testMoreLikeThisIssue2197() throws Exception {\n Client client = client();\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"bar\")\n@@ -564,6 +590,7 @@ public void testMoreLikeThisUnlike() throws ExecutionException, InterruptedExcep\n .maxQueryTerms(100)\n .include(true)\n .minimumShouldMatch(\"0%\");\n+\n response = client().prepareSearch(\"test\").setTypes(\"type1\").setQuery(mltQuery).get();\n assertSearchResponse(response);\n assertHitCount(response, numFields - (i + 1));", "filename": "core/src/test/java/org/elasticsearch/search/morelikethis/MoreLikeThisIT.java", "status": "modified" } ] }
{ "body": "In `_cat/indices` API on 2.2.0:\n\n```\nquery_cache.memory_size | fcm,queryCacheMemory | used query cache \npri.query_cache.memory_size | | used query cache \nquery_cache.evictions | fce,queryCacheEvictions | query cache evictions \npri.query_cache.evictions | | query cache evictions \nrequest_cache.memory_size | qcm,queryCacheMemory | used request cache \npri.request_cache.memory_size | | used request cache\n```\n", "comments": [], "number": 17101, "title": "queryCacheMemory is a shorthand for 2 different values" }
{ "body": "Fix #17101\n", "number": 17145, "review_comments": [], "title": "Fix column aliases in _cat/indices, _cat/nodes and _cat/shards APIs" }
{ "commits": [ { "message": "Fix column aliases in _cat/indices, _cat/nodes and _cat/shards APIs #17101" } ], "files": [ { "diff": "@@ -135,22 +135,22 @@ protected Table getTableWithHeader(final RestRequest request) {\n table.addCell(\"fielddata.evictions\", \"sibling:pri;alias:fe,fielddataEvictions;default:false;text-align:right;desc:fielddata evictions\");\n table.addCell(\"pri.fielddata.evictions\", \"default:false;text-align:right;desc:fielddata evictions\");\n \n- table.addCell(\"query_cache.memory_size\", \"sibling:pri;alias:fcm,queryCacheMemory;default:false;text-align:right;desc:used query cache\");\n+ table.addCell(\"query_cache.memory_size\", \"sibling:pri;alias:qcm,queryCacheMemory;default:false;text-align:right;desc:used query cache\");\n table.addCell(\"pri.query_cache.memory_size\", \"default:false;text-align:right;desc:used query cache\");\n \n- table.addCell(\"query_cache.evictions\", \"sibling:pri;alias:fce,queryCacheEvictions;default:false;text-align:right;desc:query cache evictions\");\n+ table.addCell(\"query_cache.evictions\", \"sibling:pri;alias:qce,queryCacheEvictions;default:false;text-align:right;desc:query cache evictions\");\n table.addCell(\"pri.query_cache.evictions\", \"default:false;text-align:right;desc:query cache evictions\");\n \n- table.addCell(\"request_cache.memory_size\", \"sibling:pri;alias:qcm,queryCacheMemory;default:false;text-align:right;desc:used request cache\");\n+ table.addCell(\"request_cache.memory_size\", \"sibling:pri;alias:rcm,requestCacheMemory;default:false;text-align:right;desc:used request cache\");\n table.addCell(\"pri.request_cache.memory_size\", \"default:false;text-align:right;desc:used request cache\");\n \n- table.addCell(\"request_cache.evictions\", \"sibling:pri;alias:qce,queryCacheEvictions;default:false;text-align:right;desc:request cache evictions\");\n+ table.addCell(\"request_cache.evictions\", \"sibling:pri;alias:rce,requestCacheEvictions;default:false;text-align:right;desc:request cache evictions\");\n table.addCell(\"pri.request_cache.evictions\", \"default:false;text-align:right;desc:request cache evictions\");\n \n- table.addCell(\"request_cache.hit_count\", \"sibling:pri;alias:qchc,queryCacheHitCount;default:false;text-align:right;desc:request cache hit count\");\n+ table.addCell(\"request_cache.hit_count\", \"sibling:pri;alias:rchc,requestCacheHitCount;default:false;text-align:right;desc:request cache hit count\");\n table.addCell(\"pri.request_cache.hit_count\", \"default:false;text-align:right;desc:request cache hit count\");\n \n- table.addCell(\"request_cache.miss_count\", \"sibling:pri;alias:qcmc,queryCacheMissCount;default:false;text-align:right;desc:request cache miss count\");\n+ table.addCell(\"request_cache.miss_count\", \"sibling:pri;alias:rcmc,requestCacheMissCount;default:false;text-align:right;desc:request cache miss count\");\n table.addCell(\"pri.request_cache.miss_count\", \"default:false;text-align:right;desc:request cache miss count\");\n \n table.addCell(\"flush.total\", \"sibling:pri;alias:ft,flushTotal;default:false;text-align:right;desc:number of flushes\");", "filename": "core/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java", "status": "modified" }, { "diff": "@@ -151,13 +151,13 @@ protected Table getTableWithHeader(final RestRequest request) {\n table.addCell(\"fielddata.memory_size\", \"alias:fm,fielddataMemory;default:false;text-align:right;desc:used fielddata cache\");\n table.addCell(\"fielddata.evictions\", 
\"alias:fe,fielddataEvictions;default:false;text-align:right;desc:fielddata evictions\");\n \n- table.addCell(\"query_cache.memory_size\", \"alias:fcm,queryCacheMemory;default:false;text-align:right;desc:used query cache\");\n- table.addCell(\"query_cache.evictions\", \"alias:fce,queryCacheEvictions;default:false;text-align:right;desc:query cache evictions\");\n+ table.addCell(\"query_cache.memory_size\", \"alias:qcm,queryCacheMemory;default:false;text-align:right;desc:used query cache\");\n+ table.addCell(\"query_cache.evictions\", \"alias:qce,queryCacheEvictions;default:false;text-align:right;desc:query cache evictions\");\n \n- table.addCell(\"request_cache.memory_size\", \"alias:qcm,requestCacheMemory;default:false;text-align:right;desc:used request cache\");\n- table.addCell(\"request_cache.evictions\", \"alias:qce,requestCacheEvictions;default:false;text-align:right;desc:request cache evictions\");\n- table.addCell(\"request_cache.hit_count\", \"alias:qchc,requestCacheHitCount;default:false;text-align:right;desc:request cache hit counts\");\n- table.addCell(\"request_cache.miss_count\", \"alias:qcmc,requestCacheMissCount;default:false;text-align:right;desc:request cache miss counts\");\n+ table.addCell(\"request_cache.memory_size\", \"alias:rcm,requestCacheMemory;default:false;text-align:right;desc:used request cache\");\n+ table.addCell(\"request_cache.evictions\", \"alias:rce,requestCacheEvictions;default:false;text-align:right;desc:request cache evictions\");\n+ table.addCell(\"request_cache.hit_count\", \"alias:rchc,requestCacheHitCount;default:false;text-align:right;desc:request cache hit counts\");\n+ table.addCell(\"request_cache.miss_count\", \"alias:rcmc,requestCacheMissCount;default:false;text-align:right;desc:request cache miss counts\");\n \n table.addCell(\"flush.total\", \"alias:ft,flushTotal;default:false;text-align:right;desc:number of flushes\");\n table.addCell(\"flush.total_time\", \"alias:ftt,flushTotalTime;default:false;text-align:right;desc:time spent in flush\");", "filename": "core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java", "status": "modified" }, { "diff": "@@ -109,8 +109,8 @@ protected Table getTableWithHeader(final RestRequest request) {\n table.addCell(\"fielddata.memory_size\", \"alias:fm,fielddataMemory;default:false;text-align:right;desc:used fielddata cache\");\n table.addCell(\"fielddata.evictions\", \"alias:fe,fielddataEvictions;default:false;text-align:right;desc:fielddata evictions\");\n \n- table.addCell(\"query_cache.memory_size\", \"alias:fcm,queryCacheMemory;default:false;text-align:right;desc:used query cache\");\n- table.addCell(\"query_cache.evictions\", \"alias:fce,queryCacheEvictions;default:false;text-align:right;desc:query cache evictions\");\n+ table.addCell(\"query_cache.memory_size\", \"alias:qcm,queryCacheMemory;default:false;text-align:right;desc:used query cache\");\n+ table.addCell(\"query_cache.evictions\", \"alias:qce,queryCacheEvictions;default:false;text-align:right;desc:query cache evictions\");\n \n table.addCell(\"flush.total\", \"alias:ft,flushTotal;default:false;text-align:right;desc:number of flushes\");\n table.addCell(\"flush.total_time\", \"alias:ftt,flushTotalTime;default:false;text-align:right;desc:time spent in flush\");", "filename": "core/src/main/java/org/elasticsearch/rest/action/cat/RestShardsAction.java", "status": "modified" }, { "diff": "@@ -114,10 +114,18 @@ node (c) |d\n cache memory |0b\n |`fielddata.evictions` |`fe`, `fielddataEvictions` |No |Fielddata cache\n evictions 
|0\n-|`filter_cache.memory_size` |`fcm`, `filterCacheMemory` |No |Used filter\n+|`query_cache.memory_size` |`qcm`, `queryCacheMemory` |No |Used query\n cache memory |0b\n-|`filter_cache.evictions` |`fce`, `filterCacheEvictions` |No |Filter\n+|`query_cache.evictions` |`qce`, `queryCacheEvictions` |No |Query\n cache evictions |0\n+|`request_cache.memory_size` |`rcm`, `requestCacheMemory` |No | Used request\n+cache memory |0b\n+|`request_cache.evictions` |`rce`, `requestCacheEvictions` |No |Request\n+cache evictions |0\n+|`request_cache.hit_count` |`rchc`, `requestCacheHitCount` |No | Request\n+cache hit count |0\n+|`request_cache.miss_count` |`rcmc`, `requestCacheMissCount` |No | Request\n+cache miss count |0\n |`flush.total` |`ft`, `flushTotal` |No |Number of flushes |1\n |`flush.total_time` |`ftt`, `flushTotalTime` |No |Time spent in flush |1\n |`get.current` |`gc`, `getCurrent` |No |Number of current get", "filename": "docs/reference/cat/nodes.asciidoc", "status": "modified" } ] }
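As background for the fix above, the second argument to `table.addCell(...)` is a packed attribute string whose `alias` entry is what `_cat` column selection (`?h=...`) resolves against; two columns declaring the same short alias are therefore ambiguous. A rough sketch of how such a string decomposes (this is not the actual `RestTable` parsing code):

``` java
import java.util.LinkedHashMap;
import java.util.Map;

// Splits a cell attribute string like the ones passed to table.addCell(...)
// into key/value pairs; the "alias" entry lists the short and long column
// names, which is where the duplicate "qcm"/"fcm" aliases collided in #17101.
public class CellAttributeSketch {

    public static void main(String[] args) {
        String attrs = "alias:qcm,queryCacheMemory;default:false;text-align:right;desc:used query cache";
        Map<String, String> parsed = new LinkedHashMap<>();
        for (String pair : attrs.split(";")) {
            int sep = pair.indexOf(':');
            parsed.put(pair.substring(0, sep), pair.substring(sep + 1));
        }
        System.out.println(parsed.get("alias")); // qcm,queryCacheMemory
    }
}
```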
{ "body": "Otherwise, if that node is shut down and restarted, it might have lost all operations\nthat were in the translog.\n\nThis might be responsible for build failures in the bwc tests like here: http://build-us-00.elastic.co/job/es_core_2x_window-2008/437/consoleText\n", "comments": [ { "body": "LGTM\n", "created_at": "2016-01-07T15:35:12Z" } ], "number": 15832, "title": "sync translog to disk after recovery from primary" }
{ "body": "Otherwise we might run into #15832.\n\nCloses #17130\n", "number": 17142, "review_comments": [], "title": "flush in case we test against version <=2.0.2 before upgrading" }
{ "commits": [ { "message": "[test] flush in case we test against version <=2.0.2 before upgrading\n\notherwise we run into #15832\n\ncloses #17130" }, { "message": "add comment why we need that" } ], "files": [ { "diff": "@@ -353,6 +353,12 @@ public void testIndexRollingUpgrade() throws Exception {\n CountResponse countResponse = client().prepareCount().get();\n assertHitCount(countResponse, numDocs);\n assertSimpleSort(\"num_double\", \"num_int\");\n+ /* There is a bug where we do not fsync the translog after recovery from primary which causes dataloss. It is fixed in all\n+ versions >=2.1.2. But there was no 2.0.3 release which would have the fix for 2.0. Instead, we have to flush here so we do not\n+ run into this bug in the test. See https://github.com/elastic/elasticsearch/pull/15832 */\n+ if (compatibilityVersion().onOrBefore(Version.V_2_0_2)) {\n+ flush();\n+ }\n upgraded = backwardsCluster().upgradeOneNode();\n ensureYellow();\n countResponse = client().prepareCount().get();", "filename": "qa/backwards/shared/src/test/java/org/elasticsearch/bwcompat/BasicBackwardsCompatibilityIT.java", "status": "modified" } ] }
{ "body": "The `constant_score` query will accept multiple clauses in its `filter` clause, but it appears to only run the last query. I believe the intention is for it to only accept a single clause, so multiple query objects should raise a parse exception.\n\nE.g.\n\n``` js\nPOST /test/test/\n{\n \"foo\": \"a\"\n}\n\nPOST /test/test/\n{\n \"foo\": \"b\"\n}\n\nPOST /test/test/\n{\n \"foo\": \"a b\"\n}\n\nGET /test/_search\n{\n \"query\": {\n \"constant_score\": {\n \"filter\": [\n { \"term\": { \"foo\": \"x\" } },\n { \"term\": { \"foo\": \"a\" } }\n ]\n }\n }\n}\n```\n\nWhich yields both documents which have `\"a\"`:\n\n``` json\n{\n \"hits\": {\n \"total\": 2,\n \"max_score\": 1,\n \"hits\": [\n {\n \"_index\": \"test\",\n \"_type\": \"test\",\n \"_id\": \"AVN8hy3XgPMxNnYnA0TX\",\n \"_score\": 1,\n \"_source\": {\n \"foo\": \"a\"\n }\n },\n {\n \"_index\": \"test\",\n \"_type\": \"test\",\n \"_id\": \"AVN8h2TygPMxNnYnA0Tr\",\n \"_score\": 1,\n \"_source\": {\n \"foo\": \"a b\"\n }\n }\n ]\n }\n}\n```\n\nSwapping the order returns zero documents, because nothing matches `\"x\"`:\n\n``` js\nGET /test/_search\n{\n \"query\": {\n \"constant_score\": {\n \"filter\": [\n { \"term\": { \"foo\": \"a\" } },\n { \"term\": { \"foo\": \"x\" } }\n ]\n }\n }\n}\n```\n\n``` json\n{\n \"hits\": {\n \"total\": 0,\n \"max_score\": null,\n \"hits\": []\n }\n}\n```\n\nThis is especially confusing because the `bool` filter clause _does_ support multiple clauses, and by default AND's them together.\n", "comments": [ { "body": "I agree, unfortunately there are many places in the query parsers where specifying the same parameter twice won't throw an error but silently drops the first one. I'll add a check for this. btw, the array notation only works because those tokens get silently ignored as well.\n", "created_at": "2016-03-16T09:58:31Z" }, { "body": "Thanks @cbuescher! \n", "created_at": "2016-03-16T13:05:32Z" } ], "number": 17126, "title": "constant_score should throw a parse exception if more than one filter is supplied" }
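The "last filter wins" behaviour described above is the usual outcome of parsing a JSON object without a duplicate-key check. The snippet below demonstrates the same effect with plain Jackson, outside Elasticsearch's own pull-parsing loop, purely as an analogy:

``` java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Without an explicit duplicate-key check, the later "filter" value simply
// replaces the earlier one, which is why only the last clause was executed.
public class DuplicateKeySketch {

    public static void main(String[] args) throws Exception {
        String json = "{\"filter\": {\"term\": {\"foo\": \"x\"}}, \"filter\": {\"term\": {\"foo\": \"a\"}}}";
        JsonNode root = new ObjectMapper().readTree(json);
        // Prints {"term":{"foo":"a"}} -- the first filter is gone.
        System.out.println(root.get("filter"));
    }
}
```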
{ "body": "When more than one `filter` is specified in a `constant_score` query, only the last one is executed, silently overwriting the previous filters. The parser should instead raise a ParsingException to notify the user that only one filter query is accepted.\n\nCloses #17126\n", "number": 17135, "review_comments": [], "title": "`constant_score` query should throw error on more than one filter" }
{ "commits": [ { "message": "Query DSL: `constant_score` should throw error on more than one filter\n\nWhen specifying more than one `filter` in a `constant_score`\nquery, the last one will be the only one that will be\nexecuted, overwriting previous filters. It should rather\nraise a ParseException to notify the user that only one\nfilter query is accepted.\n\nCloses #17126" } ], "files": [ { "diff": "@@ -42,7 +42,7 @@ public String[] names() {\n public ConstantScoreQueryBuilder fromXContent(QueryParseContext parseContext) throws IOException {\n XContentParser parser = parseContext.parser();\n \n- QueryBuilder query = null;\n+ QueryBuilder<?> query = null;\n boolean queryFound = false;\n String queryName = null;\n float boost = AbstractQueryBuilder.DEFAULT_BOOST;\n@@ -56,6 +56,10 @@ public ConstantScoreQueryBuilder fromXContent(QueryParseContext parseContext) th\n // skip\n } else if (token == XContentParser.Token.START_OBJECT) {\n if (parseContext.parseFieldMatcher().match(currentFieldName, INNER_QUERY_FIELD)) {\n+ if (queryFound) {\n+ throw new ParsingException(parser.getTokenLocation(), \"[\" + ConstantScoreQueryBuilder.NAME + \"]\"\n+ + \" accepts only one 'filter' element.\");\n+ }\n query = parseContext.parseInnerQueryBuilder();\n queryFound = true;\n } else {\n@@ -69,6 +73,8 @@ public ConstantScoreQueryBuilder fromXContent(QueryParseContext parseContext) th\n } else {\n throw new ParsingException(parser.getTokenLocation(), \"[constant_score] query does not support [\" + currentFieldName + \"]\");\n }\n+ } else {\n+ throw new ParsingException(parser.getTokenLocation(), \"unexpected token [\" + token + \"]\");\n }\n }\n if (!queryFound) {", "filename": "core/src/main/java/org/elasticsearch/index/query/ConstantScoreQueryParser.java", "status": "modified" }, { "diff": "@@ -54,7 +54,7 @@ protected void doAssertLuceneQuery(ConstantScoreQueryBuilder queryBuilder, Query\n * test that missing \"filter\" element causes {@link ParsingException}\n */\n public void testFilterElement() throws IOException {\n- String queryString = \"{ \\\"\" + ConstantScoreQueryBuilder.NAME + \"\\\" : {}\";\n+ String queryString = \"{ \\\"\" + ConstantScoreQueryBuilder.NAME + \"\\\" : {} }\";\n try {\n parseQuery(queryString);\n fail(\"Expected ParsingException\");\n@@ -63,6 +63,38 @@ public void testFilterElement() throws IOException {\n }\n }\n \n+ /**\n+ * test that multiple \"filter\" elements causes {@link ParsingException}\n+ */\n+ public void testMultipleFilterElements() throws IOException {\n+ String queryString = \"{ \\\"\" + ConstantScoreQueryBuilder.NAME + \"\\\" : {\\n\" +\n+ \"\\\"filter\\\" : { \\\"term\\\": { \\\"foo\\\": \\\"a\\\" } },\\n\" +\n+ \"\\\"filter\\\" : { \\\"term\\\": { \\\"foo\\\": \\\"x\\\" } },\\n\" +\n+ \"} }\";\n+ try {\n+ parseQuery(queryString);\n+ fail(\"Expected ParsingException\");\n+ } catch (ParsingException e) {\n+ assertThat(e.getMessage(), containsString(\"accepts only one 'filter' element\"));\n+ }\n+ }\n+\n+ /**\n+ * test that \"filter\" does not accept an array of queries, throws {@link ParsingException}\n+ */\n+ public void testNoArrayAsFilterElements() throws IOException {\n+ String queryString = \"{ \\\"\" + ConstantScoreQueryBuilder.NAME + \"\\\" : {\\n\" +\n+ \"\\\"filter\\\" : [ { \\\"term\\\": { \\\"foo\\\": \\\"a\\\" } },\\n\" +\n+ \"{ \\\"term\\\": { \\\"foo\\\": \\\"x\\\" } } ]\\n\" +\n+ \"} }\";\n+ try {\n+ parseQuery(queryString);\n+ fail(\"Expected ParsingException\");\n+ } catch (ParsingException e) {\n+ assertThat(e.getMessage(), 
containsString(\"unexpected token [START_ARRAY]\"));\n+ }\n+ }\n+\n public void testIllegalArguments() {\n try {\n new ConstantScoreQueryBuilder(null);\n@@ -79,16 +111,16 @@ public void testUnknownField() throws IOException {\n \n public void testFromJson() throws IOException {\n String json =\n- \"{\\n\" + \n- \" \\\"constant_score\\\" : {\\n\" + \n- \" \\\"filter\\\" : {\\n\" + \n- \" \\\"terms\\\" : {\\n\" + \n- \" \\\"user\\\" : [ \\\"kimchy\\\", \\\"elasticsearch\\\" ],\\n\" + \n- \" \\\"boost\\\" : 42.0\\n\" + \n- \" }\\n\" + \n- \" },\\n\" + \n- \" \\\"boost\\\" : 23.0\\n\" + \n- \" }\\n\" + \n+ \"{\\n\" +\n+ \" \\\"constant_score\\\" : {\\n\" +\n+ \" \\\"filter\\\" : {\\n\" +\n+ \" \\\"terms\\\" : {\\n\" +\n+ \" \\\"user\\\" : [ \\\"kimchy\\\", \\\"elasticsearch\\\" ],\\n\" +\n+ \" \\\"boost\\\" : 42.0\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \" \\\"boost\\\" : 23.0\\n\" +\n+ \" }\\n\" +\n \"}\";\n \n ConstantScoreQueryBuilder parsed = (ConstantScoreQueryBuilder) parseQuery(json);", "filename": "core/src/test/java/org/elasticsearch/index/query/ConstantScoreQueryBuilderTests.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 2.2.0\n\n**JVM version**: 1.8.0_72\n\n**OS version**: Linux 851a91a0d5ad 4.0.7-boot2docker #1 SMP Wed Jul 15 00:01:41 UTC 2015 x86_64 GNU/Linux\n\nHi, we are currently running a cluster of 2.1.2 and looking to upgrade to 2.2.0, but we are having some issues when it comes to testing our code around percolators. With 2.1.2 everything works fine, but when testing against 2.2.0 we get an internal server error back, with the reason being a null pointer exception. I have tested this with two clean Docker images from the official Elasticsearch images to ensure there are no configuration issues.\n\nThe only thing I could find about this issue is an unanswered posting on Stack Overflow here: http://stackoverflow.com/questions/35451052/elastic-search-percolation-with-bounding-box-geolocation-throws-nullpointerexcep\n\n**Steps to reproduce**:\n 1 Create Percolator Index\n\n``` bash\ncurl -XPOST http://192.168.99.101:32770/test -d '{\"mappings\": {\".percolator\": {\"dynamic\": true,\"properties\": {\"id\": {\"type\" : \"integer\"}}},\"test\": {\"properties\": {\"location\": {\"type\": \"geo_point\",\"lat_lon\": true }}}}}'\n```\n\n 2 Create Percolator Query\n\n``` bash\ncurl -XPUT http://192.168.99.101:32770/test/.percolator/alert-1 -d '{\"query\":{\"filtered\":{\"filter\":{\"bool\":{\"must\":[{\"geo_bounding_box\":{\"location\":{\"top_left\":[-71.09,42.36],\"bottom_right\":[-71.085,42.355]},\"type\":\"indexed\"}}]}}}}}'\n```\n\n 3 Percolate a Document\n\n``` bash\ncurl -XGET http://192.168.99.101:32772/test/test/_percolate?percolate_format=ids&percolate_index=test -d '{\"doc\":{\"location\":{\"lon\":-71.0875,\"lat\":42.3575}}}'\n```\n\n**Output from 2.1.2**\n\n``` json\n{\"took\":3,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"total\":1,\"matches\":[\"alert-1\"]}\n```\n\n**Output from 2.2.0**\n\n``` json\n{\"took\":6,\"_shards\":{\"total\":5,\"successful\":4,\"failed\":1,\"failures\":[{\"shard\":3,\"index\":\"test\",\"status\":\"INTERNAL_SERVER_ERROR\",\"reason\":{\"type\":\"null_pointer_exception\",\"reason\":null}}]},\"total\":0,\"matches\":[]}\n```\n\n**Provide logs (if relevant)**:\n\n```\nRemoteTransportException[[Anelle][127.0.0.1:9301][indices:data/read/percolate[s]]]; nested: PercolateException[failed to percolate]; nested: PercolateException[failed to execute]; nested: NullPointerException;\nCaused by: PercolateException[failed to percolate]; nested: PercolateException[failed to execute]; nested: NullPointerException;\n at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:180)\n at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:55)\n at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:268)\n at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:264)\n at 
org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: PercolateException[failed to execute]; nested: NullPointerException;\n at org.elasticsearch.percolator.PercolatorService$4.doPercolate(PercolatorService.java:583)\n at org.elasticsearch.percolator.PercolatorService.percolate(PercolatorService.java:254)\n at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:177)\n ... 8 more\nCaused by: java.lang.NullPointerException\n at org.apache.lucene.search.GeoPointTermQueryConstantScoreWrapper$1.getDocIDs(GeoPointTermQueryConstantScoreWrapper.java:86)\n at org.apache.lucene.search.GeoPointTermQueryConstantScoreWrapper$1.scorer(GeoPointTermQueryConstantScoreWrapper.java:126)\n at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:628)\n at org.apache.lucene.search.BooleanWeight.scorer(BooleanWeight.java:280)\n at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:628)\n at org.apache.lucene.search.BooleanWeight.scorer(BooleanWeight.java:280)\n at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:628)\n at org.apache.lucene.search.BooleanWeight.scorer(BooleanWeight.java:280)\n at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:628)\n at org.elasticsearch.common.lucene.Lucene.exists(Lucene.java:248)\n at org.elasticsearch.percolator.PercolatorService$4.doPercolate(PercolatorService.java:571)\n ... 10 more\n```\n", "comments": [ { "body": "Thanks for the report @amscotti. A couple follow up questions:\n1. did you reindex your data?\n2. can you post your `geo_point` mapping?\n", "created_at": "2016-02-27T04:18:22Z" }, { "body": "@nknize Thanks for the reply! This is a fully new Elasticsearch setup using Docker to isolate this issue. Following the steps to reproduce we'll create a new index and attempt to percolate a document. Using 2.1.2, the percolation will succeed where in version 2.2.0 you will get a NullPointerException error from the server. Also, the issue isn't just in the Docker image, it also happens on my MacBook running OS X 10.11.3. 
So I don't think the issue is related to the set up.\n\nThe mapping is included in the steps to reproduce as part of the curl command, but here it is pulled out.\n\n``` json\n{\n \"mappings\":{\n \".percolator\":{\n \"dynamic\":true,\n \"properties\":{\n \"id\":{\n \"type\":\"integer\"\n }\n }\n },\n \"test\":{\n \"properties\":{\n \"location\":{\n \"type\":\"geo_point\",\n \"lat_lon\":true\n }\n }\n }\n }\n}\n```\n", "created_at": "2016-02-27T21:47:24Z" }, { "body": "Confirm same issue on Windows\n**Elasticsearch version**: 2.2.0\n**OS version**: Windows Server 2012\nMappings:\n\n``` javascript\n{\n \"mappings\":{\n \".percolator\":{\n \"properties\": {\n \"query\": {\n \"type\": \"object\",\n \"enabled\": false\n }\n }\n },\n \"test\":{\n \"properties\":{\n \"location\":{\n \"type\":\"geo_point\",\n }\n }\n }\n }\n}\n```\n\nPercolation Query:\n\n``` javascript\n{\n \"_index\": \"test\",\n \"_type\": \".percolator\",\n \"_id\": \"550858\",\n \"_version\": 1,\n \"found\": true,\n \"_source\": {\n \"query\": {\n \"bool\": {\n \"must\": [ \n {\n \"geo_polygon\": {\n \"location\": {\n \"points\": [\n {\n \"lat\": 55.8118391906254,\n \"lon\": 37.526867226289\n },\n {\n \"lat\": 55.8081657829831,\n \"lon\": 37.5251506260451\n },\n {\n \"lat\": 55.8081326519671,\n \"lon\": 37.5251464834297\n },\n {\n \"lat\": 55.8023318307023,\n \"lon\": 37.5263481363022\n },\n {\n \"lat\": 55.8022898138136,\n \"lon\": 37.5263769973219\n },\n {\n \"lat\": 55.7981320290787,\n \"lon\": 37.5316985066514\n },\n {\n \"lat\": 55.7981018643375,\n \"lon\": 37.5317651671351\n },\n {\n \"lat\": 55.7929765434144,\n \"lon\": 37.5530511410252\n },\n {\n \"lat\": 55.792969394719,\n \"lon\": 37.5531051836136\n },\n {\n \"lat\": 55.7924858389984,\n \"lon\": 37.568554672078\n },\n {\n \"lat\": 55.792516839275,\n \"lon\": 37.5686842702771\n },\n {\n \"lat\": 55.7982298947502,\n \"lon\": 37.5774733801441\n },\n {\n \"lat\": 55.7968776363261,\n \"lon\": 37.5790737397055\n },\n {\n \"lat\": 55.7969071958654,\n \"lon\": 37.5793619663257\n },\n {\n \"lat\": 55.7994213421139,\n \"lon\": 37.5803919440669\n },\n {\n \"lat\": 55.7994746916239,\n \"lon\": 37.580384677184\n },\n {\n \"lat\": 55.8009250884466,\n \"lon\": 37.5793547135494\n },\n {\n \"lat\": 55.8009637661386,\n \"lon\": 37.5793021078072\n },\n {\n \"lat\": 55.8080215965125,\n \"lon\": 37.5624792790697\n },\n {\n \"lat\": 55.8080344294469,\n \"lon\": 37.5624365225948\n },\n {\n \"lat\": 55.8138344258045,\n \"lon\": 37.5329107092988\n },\n {\n \"lat\": 55.8138269869868,\n \"lon\": 37.5327768801379\n },\n {\n \"lat\": 55.8118937516748,\n \"lon\": 37.5269403857461\n },\n {\n \"lat\": 55.8118391906254,\n \"lon\": 37.526867226289\n }\n ]\n }\n }\n }\n ]\n }\n }\n }\n}\n```\n\nDocument percolation\n\n``` javascript\nGET test/test/_percolate\n{\n \"doc\":{\n \"location\":{\n \"lon\": -71.0875,\n \"lat\": 42.3575\n }\n }\n}\n```\n\nI've tested same query on Elastic 2.1.2 and it's works correctly\n", "created_at": "2016-02-28T11:01:46Z" }, { "body": "So the problem is Lucene's `MemoryIndex` does not support DocValues, and the new GeoPointField requires DocValues for post filtering. 
@martijnvg will open a separate issue to add DocValue support and we will reference it on this issue.\n", "created_at": "2016-03-04T16:46:40Z" }, { "body": "This is the issue that adds doc values support to the MemoryIndex: https://issues.apache.org/jira/browse/LUCENE-7091\n", "created_at": "2016-03-15T08:22:34Z" }, { "body": "Doc values support for the MemoryIndex has been added and will be included in the Lucene 6.0 release (which hasn't been released yet). Elasticsearch 5.0 will depend on this, so from 5.0 onwards this issue is fixed.\n\nI opened #17105 to backport / fork the fix in 2.x, but there are concerns with forking the MemoryIndex and we should not do that.\n\nLuckily there is a workaround that can help anyone running into this issue. The problems mentioned in this issue only happen if the new geo query implementations are used. We still use the old geo query implementations for indices created before 2.2.0. There is an index setting (`index.version.created`) that we can use to let ES think an index was created on a cluster that ran with an older ES version. If we set it to a version < 2.2.0 then ES uses the older geo query implementations (only for the index we set this created version for).\n\nNote1: You shouldn't set this setting on already created indices before the upgrade to > 2.2. Only use it for new percolator indices (after the upgrade to > 2.2).\nNote2: Only set this setting on indices that hold your percolator queries. Using this setting on all indices would mean you can't benefit from geo improvements when running regular searches. If your percolator queries co-exist in the same index holding your data then you should move your percolator queries into a dedicated index.\n\nIn this case we can set `index.version.created` to `2010299` (the version id of version 2.1.2) and `geo_bounding_box`, `geo_polygon` and other geo queries will work as they did before ES 2.2 in the percolator. Example based on the initial post:\n\n```\ncurl -XPUT \"http://localhost:9200/test\" -d'\n{\n \"settings\": {\n \"index.version.created\" : 2010299\n }, \n \"mappings\": {\n \".percolator\": {\n \"dynamic\": true,\n \"properties\": {\n \"id\": {\n \"type\": \"integer\"\n }\n }\n },\n \"test\": {\n \"properties\": {\n \"location\": {\n \"type\": \"geo_point\",\n \"lat_lon\": true\n }\n }\n }\n }\n}'\n```\n", "created_at": "2016-03-16T13:29:00Z" }, { "body": "Closing this issue as it has been fixed in 5.0 and beyond.", "created_at": "2017-06-28T12:32:28Z" } ], "number": 16832, "title": "Percolation with Bounding Box GeoLocation throws NullPointerException in Lucene" }
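To make the failure mode discussed above concrete: percolation indexes the candidate document into a single-document in-memory Lucene index and runs each stored query against it, and before Lucene 6 that index carried no doc values, which is exactly what the 2.2+ geo query implementations post-filter on. A minimal sketch of the in-memory matching flow using only the long-standing `MemoryIndex` API (a simple term query stands in for the geo query):

``` java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// Index one document in memory and check whether a stored query matches it.
// Queries that rely on doc values have nothing to read here pre-Lucene 6,
// which is the source of the NullPointerException in this issue.
public class MemoryIndexPercolationSketch {

    public static void main(String[] args) {
        MemoryIndex index = new MemoryIndex();
        index.addField("text", "elasticsearch percolator document", new StandardAnalyzer());

        Query storedQuery = new TermQuery(new Term("text", "percolator"));
        float score = index.search(storedQuery);
        System.out.println(score > 0.0f); // true -> the stored query matches the document
    }
}
```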
{ "body": "This forks the changes that add doc values support to the MemoryIndex [1] into 2.x. The PR is just this branch; it fixes issues like #16832 so that geo queries that depend on doc values no longer fail or silently return no hits.\n\nOnce master upgrades to the next Lucene 6 snapshot, #16832 is fixed there as well.\n\nBackporting the MemoryIndex is something I don't like. But it does fix a big shortcoming that exists in version 2.2 and higher. (Classifying data based on geo queries is a common use for the percolator.) I'm not sure whether we should push this change.\n\nCloses #16832\n\n1: https://issues.apache.org/jira/browse/LUCENE-7091\n", "number": 17105, "review_comments": [], "title": "Forked memory index to add support for doc values." }
{ "commits": [ { "message": "percolator: Forked memory index to add support for doc values.\n\nThis fixes issues like #16832 where geo queries that depend on doc values don't fail or silently return no hits.\n\nCloses #16832" } ], "files": [ { "diff": "@@ -0,0 +1,1517 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.apache.lucene.index.memory;\n+\n+import java.io.IOException;\n+import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.Collections;\n+import java.util.Iterator;\n+import java.util.Map;\n+import java.util.SortedMap;\n+import java.util.TreeMap;\n+\n+import org.apache.lucene.analysis.Analyzer;\n+import org.apache.lucene.analysis.TokenStream;\n+import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;\n+import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;\n+import org.apache.lucene.analysis.tokenattributes.PayloadAttribute;\n+import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;\n+import org.apache.lucene.analysis.tokenattributes.TermToBytesRefAttribute;\n+import org.apache.lucene.document.Document;\n+import org.apache.lucene.index.*;\n+import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.Scorer;\n+import org.apache.lucene.search.SimpleCollector;\n+import org.apache.lucene.search.similarities.Similarity;\n+import org.apache.lucene.store.RAMDirectory;\n+import org.apache.lucene.util.*;\n+import org.apache.lucene.util.BytesRefHash.DirectBytesStartArray;\n+import org.apache.lucene.util.IntBlockPool.SliceReader;\n+import org.apache.lucene.util.IntBlockPool.SliceWriter;\n+\n+// FORK from Lucene's MemoryIndex from master branch\n+// Changes:\n+// 1) Renamed to ForkedMemoryIndex (to avoid JAR HELL)\n+// 2) Removed any reference to point values, because doesn't exist in Lucene 5.x\n+// 3) Integer.BYTES -> RamUsageEstimator.NUM_BYTES_INT\n+// 4) MemoryIndexReader#fields() to use pre Java 8 of filtering out fields that are only doc values\n+// 5) Made info variable in MemoryIndexReader#getSortedSetValues() final\n+// 6) Changed license header\n+// 7) Made ForkedMemoryIndex(boolean, boolean, long) constructor public\n+// 8) Pass BytesRef.getUTF8SortedAsUnicodeComparator() to BytesRefHash#sort(...)\n+// 9) Use explicit generics in MemoryIndex#addField(...) attributes empty map\n+\n+/**\n+ * High-performance single-document main memory Apache Lucene fulltext search index.\n+ * <p>\n+ * <b>Overview</b>\n+ * <p>\n+ * This class is a replacement/substitute for a large subset of\n+ * {@link RAMDirectory} functionality. 
It is designed to\n+ * enable maximum efficiency for on-the-fly matchmaking combining structured and\n+ * fuzzy fulltext search in realtime streaming applications such as Nux XQuery based XML\n+ * message queues, publish-subscribe systems for Blogs/newsfeeds, text chat, data acquisition and\n+ * distribution systems, application level routers, firewalls, classifiers, etc.\n+ * Rather than targeting fulltext search of infrequent queries over huge persistent\n+ * data archives (historic search), this class targets fulltext search of huge\n+ * numbers of queries over comparatively small transient realtime data (prospective\n+ * search).\n+ * For example as in\n+ * <pre class=\"prettyprint\">\n+ * float score = search(String text, Query query)\n+ * </pre>\n+ * <p>\n+ * Each instance can hold at most one Lucene \"document\", with a document containing\n+ * zero or more \"fields\", each field having a name and a fulltext value. The\n+ * fulltext value is tokenized (split and transformed) into zero or more index terms\n+ * (aka words) on <code>addField()</code>, according to the policy implemented by an\n+ * Analyzer. For example, Lucene analyzers can split on whitespace, normalize to lower case\n+ * for case insensitivity, ignore common terms with little discriminatory value such as \"he\", \"in\", \"and\" (stop\n+ * words), reduce the terms to their natural linguistic root form such as \"fishing\"\n+ * being reduced to \"fish\" (stemming), resolve synonyms/inflexions/thesauri\n+ * (upon indexing and/or querying), etc. For details, see\n+ * <a target=\"_blank\" href=\"http://today.java.net/pub/a/today/2003/07/30/LuceneIntro.html\">Lucene Analyzer Intro</a>.\n+ * <p>\n+ * Arbitrary Lucene queries can be run against this class - see <a target=\"_blank\"\n+ * href=\"{@docRoot}/../queryparser/org/apache/lucene/queryparser/classic/package-summary.html#package_description\">\n+ * Lucene Query Syntax</a>\n+ * as well as <a target=\"_blank\"\n+ * href=\"http://today.java.net/pub/a/today/2003/11/07/QueryParserRules.html\">Query Parser Rules</a>.\n+ * Note that a Lucene query selects on the field names and associated (indexed)\n+ * tokenized terms, not on the original fulltext(s) - the latter are not stored\n+ * but rather thrown away immediately after tokenization.\n+ * <p>\n+ * For some interesting background information on search technology, see Bob Wyman's\n+ * <a target=\"_blank\"\n+ * href=\"http://bobwyman.pubsub.com/main/2005/05/mary_hodder_poi.html\">Prospective Search</a>,\n+ * Jim Gray's\n+ * <a target=\"_blank\" href=\"http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=293&page=4\">\n+ * A Call to Arms - Custom subscriptions</a>, and Tim Bray's\n+ * <a target=\"_blank\"\n+ * href=\"http://www.tbray.org/ongoing/When/200x/2003/07/30/OnSearchTOC\">On Search, the Series</a>.\n+ *\n+ * <p>\n+ * <b>Example Usage</b>\n+ * <br>\n+ * <pre class=\"prettyprint\">\n+ * Analyzer analyzer = new SimpleAnalyzer(version);\n+ * MemoryIndex index = new MemoryIndex();\n+ * index.addField(\"content\", \"Readings about Salmons and other select Alaska fishing Manuals\", analyzer);\n+ * index.addField(\"author\", \"Tales of James\", analyzer);\n+ * QueryParser parser = new QueryParser(version, \"content\", analyzer);\n+ * float score = index.search(parser.parse(\"+author:james +salmon~ +fish* manual~\"));\n+ * if (score &gt; 0.0f) {\n+ * System.out.println(\"it's a match\");\n+ * } else {\n+ * System.out.println(\"no match found\");\n+ * }\n+ * System.out.println(\"indexData=\" + index.toString());\n+ * 
</pre>\n+ *\n+ * <p>\n+ * <b>Example XQuery Usage</b>\n+ *\n+ * <pre class=\"prettyprint\">\n+ * (: An XQuery that finds all books authored by James that have something to do with \"salmon fishing manuals\", sorted by relevance :)\n+ * declare namespace lucene = \"java:nux.xom.pool.FullTextUtil\";\n+ * declare variable $query := \"+salmon~ +fish* manual~\"; (: any arbitrary Lucene query can go here :)\n+ *\n+ * for $book in /books/book[author=\"James\" and lucene:match(abstract, $query) &gt; 0.0]\n+ * let $score := lucene:match($book/abstract, $query)\n+ * order by $score descending\n+ * return $book\n+ * </pre>\n+ *\n+ * <p>\n+ * <b>Thread safety guarantees</b>\n+ * <p>\n+ * MemoryIndex is not normally thread-safe for adds or queries. However, queries\n+ * are thread-safe after {@code freeze()} has been called.\n+ *\n+ * <p>\n+ * <b>Performance Notes</b>\n+ * <p>\n+ * Internally there's a new data structure geared towards efficient indexing\n+ * and searching, plus the necessary support code to seamlessly plug into the Lucene\n+ * framework.\n+ * <p>\n+ * This class performs very well for very small texts (e.g. 10 chars)\n+ * as well as for large texts (e.g. 10 MB) and everything in between.\n+ * Typically, it is about 10-100 times faster than <code>RAMDirectory</code>.\n+ * Note that <code>RAMDirectory</code> has particularly\n+ * large efficiency overheads for small to medium sized texts, both in time and space.\n+ * Indexing a field with N tokens takes O(N) in the best case, and O(N logN) in the worst\n+ * case. Memory consumption is probably larger than for <code>RAMDirectory</code>.\n+ * <p>\n+ * Example throughput of many simple term queries over a single MemoryIndex:\n+ * ~500000 queries/sec on a MacBook Pro, jdk 1.5.0_06, server VM.\n+ * As always, your mileage may vary.\n+ * <p>\n+ * If you're curious about\n+ * the whereabouts of bottlenecks, run java 1.5 with the non-perturbing '-server\n+ * -agentlib:hprof=cpu=samples,depth=10' flags, then study the trace log and\n+ * correlate its hotspot trailer with its call stack headers (see <a\n+ * target=\"_blank\"\n+ * href=\"http://java.sun.com/developer/technicalArticles/Programming/HPROF.html\">\n+ * hprof tracing </a>).\n+ *\n+ */\n+public class ForkedMemoryIndex {\n+\n+ private static final boolean DEBUG = false;\n+\n+ /** info for each field: Map&lt;String fieldName, Info field&gt; */\n+ private final SortedMap<String,Info> fields = new TreeMap<>();\n+\n+ private final boolean storeOffsets;\n+ private final boolean storePayloads;\n+\n+ private final ByteBlockPool byteBlockPool;\n+ private final IntBlockPool intBlockPool;\n+ // private final IntBlockPool.SliceReader postingsReader;\n+ private final IntBlockPool.SliceWriter postingsWriter;\n+ private final BytesRefArray payloadsBytesRefs;//non null only when storePayloads\n+\n+ private Counter bytesUsed;\n+\n+ private boolean frozen = false;\n+\n+ private Similarity normSimilarity = IndexSearcher.getDefaultSimilarity();\n+\n+ /**\n+ * Constructs an empty instance that will not store offsets or payloads.\n+ */\n+ public ForkedMemoryIndex() {\n+ this(false);\n+ }\n+\n+ /**\n+ * Constructs an empty instance that can optionally store the start and end\n+ * character offset of each token term in the text. This can be useful for\n+ * highlighting of hit locations with the Lucene highlighter package. 
But\n+ * it will not store payloads; use another constructor for that.\n+ *\n+ * @param storeOffsets\n+ * whether or not to store the start and end character offset of\n+ * each token term in the text\n+ */\n+ public ForkedMemoryIndex(boolean storeOffsets) {\n+ this(storeOffsets, false);\n+ }\n+\n+ /**\n+ * Constructs an empty instance with the option of storing offsets and payloads.\n+ *\n+ * @param storeOffsets store term offsets at each position\n+ * @param storePayloads store term payloads at each position\n+ */\n+ public ForkedMemoryIndex(boolean storeOffsets, boolean storePayloads) {\n+ this(storeOffsets, storePayloads, 0);\n+ }\n+\n+ /**\n+ * Expert: This constructor accepts an upper limit for the number of bytes that should be reused if this instance is {@link #reset()}.\n+ * The payload storage, if used, is unaffected by maxReusuedBytes, however.\n+ * @param storeOffsets <code>true</code> if offsets should be stored\n+ * @param storePayloads <code>true</code> if payloads should be stored\n+ * @param maxReusedBytes the number of bytes that should remain in the internal memory pools after {@link #reset()} is called\n+ */\n+ public ForkedMemoryIndex(boolean storeOffsets, boolean storePayloads, long maxReusedBytes) {\n+ this.storeOffsets = storeOffsets;\n+ this.storePayloads = storePayloads;\n+ this.bytesUsed = Counter.newCounter();\n+ final int maxBufferedByteBlocks = (int)((maxReusedBytes/2) / ByteBlockPool.BYTE_BLOCK_SIZE );\n+ final int maxBufferedIntBlocks = (int) ((maxReusedBytes - (maxBufferedByteBlocks*ByteBlockPool.BYTE_BLOCK_SIZE))/(IntBlockPool.INT_BLOCK_SIZE * RamUsageEstimator.NUM_BYTES_INT));\n+ assert (maxBufferedByteBlocks * ByteBlockPool.BYTE_BLOCK_SIZE) + (maxBufferedIntBlocks * IntBlockPool.INT_BLOCK_SIZE * RamUsageEstimator.NUM_BYTES_INT) <= maxReusedBytes;\n+ byteBlockPool = new ByteBlockPool(new RecyclingByteBlockAllocator(ByteBlockPool.BYTE_BLOCK_SIZE, maxBufferedByteBlocks, bytesUsed));\n+ intBlockPool = new IntBlockPool(new RecyclingIntBlockAllocator(IntBlockPool.INT_BLOCK_SIZE, maxBufferedIntBlocks, bytesUsed));\n+ postingsWriter = new SliceWriter(intBlockPool);\n+ //TODO refactor BytesRefArray to allow us to apply maxReusedBytes option\n+ payloadsBytesRefs = storePayloads ? 
new BytesRefArray(bytesUsed) : null;\n+ }\n+\n+ /**\n+ * Convenience method; Tokenizes the given field text and adds the resulting\n+ * terms to the index; Equivalent to adding an indexed non-keyword Lucene\n+ * {@link org.apache.lucene.document.Field} that is tokenized, not stored,\n+ * termVectorStored with positions (or termVectorStored with positions and offsets),\n+ *\n+ * @param fieldName\n+ * a name to be associated with the text\n+ * @param text\n+ * the text to tokenize and index.\n+ * @param analyzer\n+ * the analyzer to use for tokenization\n+ */\n+ public void addField(String fieldName, String text, Analyzer analyzer) {\n+ if (fieldName == null)\n+ throw new IllegalArgumentException(\"fieldName must not be null\");\n+ if (text == null)\n+ throw new IllegalArgumentException(\"text must not be null\");\n+ if (analyzer == null)\n+ throw new IllegalArgumentException(\"analyzer must not be null\");\n+\n+ TokenStream stream = analyzer.tokenStream(fieldName, text);\n+ addField(fieldName, stream, 1.0f, analyzer.getPositionIncrementGap(fieldName), analyzer.getOffsetGap(fieldName), DocValuesType.NONE, null);\n+ }\n+\n+ /**\n+ * Builds a MemoryIndex from a lucene {@link Document} using an analyzer\n+ *\n+ * @param document the document to index\n+ * @param analyzer the analyzer to use\n+ * @return a MemoryIndex\n+ */\n+ public static MemoryIndex fromDocument(Iterable<? extends IndexableField> document, Analyzer analyzer) {\n+ return fromDocument(document, analyzer, false, false, 0);\n+ }\n+\n+ /**\n+ * Builds a MemoryIndex from a lucene {@link Document} using an analyzer\n+ * @param document the document to index\n+ * @param analyzer the analyzer to use\n+ * @param storeOffsets <code>true</code> if offsets should be stored\n+ * @param storePayloads <code>true</code> if payloads should be stored\n+ * @return a MemoryIndex\n+ */\n+ public static MemoryIndex fromDocument(Iterable<? extends IndexableField> document, Analyzer analyzer, boolean storeOffsets, boolean storePayloads) {\n+ return fromDocument(document, analyzer, storeOffsets, storePayloads, 0);\n+ }\n+\n+ /**\n+ * Builds a MemoryIndex from a lucene {@link Document} using an analyzer\n+ * @param document the document to index\n+ * @param analyzer the analyzer to use\n+ * @param storeOffsets <code>true</code> if offsets should be stored\n+ * @param storePayloads <code>true</code> if payloads should be stored\n+ * @param maxReusedBytes the number of bytes that should remain in the internal memory pools after {@link #reset()} is called\n+ * @return a MemoryIndex\n+ */\n+ public static MemoryIndex fromDocument(Iterable<? extends IndexableField> document, Analyzer analyzer, boolean storeOffsets, boolean storePayloads, long maxReusedBytes) {\n+ MemoryIndex mi = new MemoryIndex(storeOffsets, storePayloads, maxReusedBytes);\n+ for (IndexableField field : document) {\n+ mi.addField(field, analyzer);\n+ }\n+ return mi;\n+ }\n+\n+ /**\n+ * Convenience method; Creates and returns a token stream that generates a\n+ * token for each keyword in the given collection, \"as is\", without any\n+ * transforming text analysis. 
The resulting token stream can be fed into\n+ * {@link #addField(String, TokenStream)}, perhaps wrapped into another\n+ * {@link org.apache.lucene.analysis.TokenFilter}, as desired.\n+ *\n+ * @param keywords\n+ * the keywords to generate tokens for\n+ * @return the corresponding token stream\n+ */\n+ public <T> TokenStream keywordTokenStream(final Collection<T> keywords) {\n+ // TODO: deprecate & move this method into AnalyzerUtil?\n+ if (keywords == null)\n+ throw new IllegalArgumentException(\"keywords must not be null\");\n+\n+ return new TokenStream() {\n+ private Iterator<T> iter = keywords.iterator();\n+ private int start = 0;\n+ private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);\n+ private final OffsetAttribute offsetAtt = addAttribute(OffsetAttribute.class);\n+\n+ @Override\n+ public boolean incrementToken() {\n+ if (!iter.hasNext()) return false;\n+\n+ T obj = iter.next();\n+ if (obj == null)\n+ throw new IllegalArgumentException(\"keyword must not be null\");\n+\n+ String term = obj.toString();\n+ clearAttributes();\n+ termAtt.setEmpty().append(term);\n+ offsetAtt.setOffset(start, start+termAtt.length());\n+ start += term.length() + 1; // separate words by 1 (blank) character\n+ return true;\n+ }\n+ };\n+ }\n+\n+ /**\n+ * Equivalent to <code>addField(fieldName, stream, 1.0f)</code>.\n+ *\n+ * @param fieldName\n+ * a name to be associated with the text\n+ * @param stream\n+ * the token stream to retrieve tokens from\n+ */\n+ public void addField(String fieldName, TokenStream stream) {\n+ addField(fieldName, stream, 1.0f);\n+ }\n+\n+ /**\n+ * Adds a lucene {@link IndexableField} to the MemoryIndex using the provided analyzer.\n+ * Also stores doc values based on {@link IndexableFieldType#docValuesType()} if set.\n+ *\n+ * @param field the field to add\n+ * @param analyzer the analyzer to use for term analysis\n+ * @throws IllegalArgumentException if the field is a DocValues or Point field, as these\n+ * structures are not supported by MemoryIndex\n+ */\n+ public void addField(IndexableField field, Analyzer analyzer) {\n+ addField(field, analyzer, 1.0f);\n+ }\n+\n+ /**\n+ * Adds a lucene {@link IndexableField} to the MemoryIndex using the provided analyzer.\n+ * Also stores doc values based on {@link IndexableFieldType#docValuesType()} if set.\n+ *\n+ * @param field the field to add\n+ * @param analyzer the analyzer to use for term analysis\n+ * @param boost a field boost\n+ * @throws IllegalArgumentException if the field is a DocValues or Point field, as these\n+ * structures are not supported by MemoryIndex\n+ */\n+ public void addField(IndexableField field, Analyzer analyzer, float boost) {\n+ int offsetGap;\n+ TokenStream tokenStream;\n+ int positionIncrementGap;\n+ if (analyzer != null) {\n+ offsetGap = analyzer.getOffsetGap(field.name());\n+ tokenStream = field.tokenStream(analyzer, null);\n+ positionIncrementGap = analyzer.getPositionIncrementGap(field.name());\n+ } else {\n+ offsetGap = 1;\n+ tokenStream = field.tokenStream(null, null);\n+ positionIncrementGap = 0;\n+ }\n+\n+ DocValuesType docValuesType = field.fieldType().docValuesType();\n+ Object docValuesValue;\n+ switch (docValuesType) {\n+ case NONE:\n+ docValuesValue = null;\n+ break;\n+ case BINARY:\n+ case SORTED:\n+ case SORTED_SET:\n+ docValuesValue = field.binaryValue();\n+ break;\n+ case NUMERIC:\n+ case SORTED_NUMERIC:\n+ docValuesValue = field.numericValue();\n+ break;\n+ default:\n+ throw new UnsupportedOperationException(\"unknown doc values type [\" + docValuesType + \"]\");\n+ 
}\n+ addField(field.name(), tokenStream, boost, positionIncrementGap, offsetGap, docValuesType, docValuesValue);\n+ }\n+\n+ /**\n+ * Iterates over the given token stream and adds the resulting terms to the index;\n+ * Equivalent to adding a tokenized, indexed, termVectorStored, unstored,\n+ * Lucene {@link org.apache.lucene.document.Field}.\n+ * Finally closes the token stream. Note that untokenized keywords can be added with this method via\n+ * {@link #keywordTokenStream(Collection)}, the Lucene <code>KeywordTokenizer</code> or similar utilities.\n+ *\n+ * @param fieldName\n+ * a name to be associated with the text\n+ * @param stream\n+ * the token stream to retrieve tokens from.\n+ * @param boost\n+ * the boost factor for hits for this field\n+ *\n+ * @see org.apache.lucene.document.Field#setBoost(float)\n+ */\n+ public void addField(String fieldName, TokenStream stream, float boost) {\n+ addField(fieldName, stream, boost, 0);\n+ }\n+\n+\n+ /**\n+ * Iterates over the given token stream and adds the resulting terms to the index;\n+ * Equivalent to adding a tokenized, indexed, termVectorStored, unstored,\n+ * Lucene {@link org.apache.lucene.document.Field}.\n+ * Finally closes the token stream. Note that untokenized keywords can be added with this method via\n+ * {@link #keywordTokenStream(Collection)}, the Lucene <code>KeywordTokenizer</code> or similar utilities.\n+ *\n+ * @param fieldName\n+ * a name to be associated with the text\n+ * @param stream\n+ * the token stream to retrieve tokens from.\n+ * @param boost\n+ * the boost factor for hits for this field\n+ *\n+ * @param positionIncrementGap\n+ * the position increment gap if fields with the same name are added more than once\n+ *\n+ *\n+ * @see org.apache.lucene.document.Field#setBoost(float)\n+ */\n+ public void addField(String fieldName, TokenStream stream, float boost, int positionIncrementGap) {\n+ addField(fieldName, stream, boost, positionIncrementGap, 1);\n+ }\n+\n+ /**\n+ * Iterates over the given token stream and adds the resulting terms to the index;\n+ * Equivalent to adding a tokenized, indexed, termVectorStored, unstored,\n+ * Lucene {@link org.apache.lucene.document.Field}.\n+ * Finally closes the token stream. Note that untokenized keywords can be added with this method via\n+ * {@link #keywordTokenStream(Collection)}, the Lucene <code>KeywordTokenizer</code> or similar utilities.\n+ *\n+ *\n+ * @param fieldName\n+ * a name to be associated with the text\n+ * @param tokenStream\n+ * the token stream to retrieve tokens from. 
It's guaranteed to be closed no matter what.\n+ * @param boost\n+ * the boost factor for hits for this field\n+ * @param positionIncrementGap\n+ * the position increment gap if fields with the same name are added more than once\n+ * @param offsetGap\n+ * the offset gap if fields with the same name are added more than once\n+ * @see org.apache.lucene.document.Field#setBoost(float)\n+ */\n+ public void addField(String fieldName, TokenStream tokenStream, float boost, int positionIncrementGap, int offsetGap) {\n+ addField(fieldName, tokenStream, boost, positionIncrementGap, offsetGap, DocValuesType.NONE, null);\n+ }\n+\n+ private void addField(String fieldName, TokenStream tokenStream, float boost, int positionIncrementGap, int offsetGap,\n+ DocValuesType docValuesType, Object docValuesValue) {\n+\n+ if (frozen) {\n+ throw new IllegalArgumentException(\"Cannot call addField() when MemoryIndex is frozen\");\n+ }\n+ if (fieldName == null) {\n+ throw new IllegalArgumentException(\"fieldName must not be null\");\n+ }\n+ if (boost <= 0.0f) {\n+ throw new IllegalArgumentException(\"boost factor must be greater than 0.0\");\n+ }\n+\n+ Info info = fields.get(fieldName);\n+ if (info == null) {\n+ IndexOptions indexOptions = storeOffsets ? IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS : IndexOptions.DOCS_AND_FREQS_AND_POSITIONS;\n+ FieldInfo fieldInfo = new FieldInfo(fieldName, fields.size(), true, false, storePayloads, indexOptions, docValuesType, -1, Collections.<String, String>emptyMap());\n+ fields.put(fieldName, info = new Info(fieldInfo, byteBlockPool));\n+ }\n+\n+ if (docValuesType != DocValuesType.NONE) {\n+ storeDocValues(info, docValuesType, docValuesValue);\n+ }\n+ if (tokenStream != null) {\n+ storeTerms(info, tokenStream, boost, positionIncrementGap, offsetGap);\n+ }\n+ }\n+\n+ private void storeDocValues(Info info, DocValuesType docValuesType, Object docValuesValue) {\n+ String fieldName = info.fieldInfo.name;\n+ DocValuesType existingDocValuesType = info.fieldInfo.getDocValuesType();\n+ if (existingDocValuesType == DocValuesType.NONE) {\n+ // first time we add doc values for this field:\n+ info.fieldInfo = new FieldInfo(\n+ info.fieldInfo.name, info.fieldInfo.number, info.fieldInfo.hasVectors(), info.fieldInfo.hasPayloads(),\n+ info.fieldInfo.hasPayloads(), info.fieldInfo.getIndexOptions(), docValuesType, -1, info.fieldInfo.attributes()\n+ );\n+ } else if (existingDocValuesType != docValuesType) {\n+ throw new IllegalArgumentException(\"Can't add [\" + docValuesType + \"] doc values field [\" + fieldName + \"], because [\" + existingDocValuesType + \"] doc values field already exists\");\n+ }\n+ switch (docValuesType) {\n+ case NUMERIC:\n+ if (info.numericProducer.dvLongValues != null) {\n+ throw new IllegalArgumentException(\"Only one value per field allowed for [\" + docValuesType + \"] doc values field [\" + fieldName + \"]\");\n+ }\n+ info.numericProducer.dvLongValues = new long[]{(long) docValuesValue};\n+ info.numericProducer.count++;\n+ break;\n+ case SORTED_NUMERIC:\n+ if (info.numericProducer.dvLongValues == null) {\n+ info.numericProducer.dvLongValues = new long[4];\n+ }\n+ info.numericProducer.dvLongValues = ArrayUtil.grow(info.numericProducer.dvLongValues, info.numericProducer.count + 1);\n+ info.numericProducer.dvLongValues[info.numericProducer.count++] = (long) docValuesValue;\n+ break;\n+ case BINARY:\n+ if (info.binaryProducer.dvBytesValuesSet != null) {\n+ throw new IllegalArgumentException(\"Only one value per field allowed for [\" + docValuesType + \"] doc 
values field [\" + fieldName + \"]\");\n+ }\n+ info.binaryProducer.dvBytesValuesSet = new BytesRefHash(byteBlockPool);\n+ info.binaryProducer.dvBytesValuesSet.add((BytesRef) docValuesValue);\n+ break;\n+ case SORTED:\n+ if (info.binaryProducer.dvBytesValuesSet != null) {\n+ throw new IllegalArgumentException(\"Only one value per field allowed for [\" + docValuesType + \"] doc values field [\" + fieldName + \"]\");\n+ }\n+ info.binaryProducer.dvBytesValuesSet = new BytesRefHash(byteBlockPool);\n+ info.binaryProducer.dvBytesValuesSet.add((BytesRef) docValuesValue);\n+ break;\n+ case SORTED_SET:\n+ if (info.binaryProducer.dvBytesValuesSet == null) {\n+ info.binaryProducer.dvBytesValuesSet = new BytesRefHash(byteBlockPool);\n+ }\n+ info.binaryProducer.dvBytesValuesSet.add((BytesRef) docValuesValue);\n+ break;\n+ default:\n+ throw new UnsupportedOperationException(\"unknown doc values type [\" + docValuesType + \"]\");\n+ }\n+ }\n+\n+ private void storeTerms(Info info, TokenStream tokenStream, float boost, int positionIncrementGap, int offsetGap) {\n+ int pos = -1;\n+ int offset = 0;\n+ if (info.numTokens == 0) {\n+ info.boost = boost;\n+ } else if (info.numTokens > 0) {\n+ pos = info.lastPosition + positionIncrementGap;\n+ offset = info.lastOffset + offsetGap;\n+ info.boost *= boost;\n+ }\n+\n+ try (TokenStream stream = tokenStream) {\n+ TermToBytesRefAttribute termAtt = stream.getAttribute(TermToBytesRefAttribute.class);\n+ PositionIncrementAttribute posIncrAttribute = stream.addAttribute(PositionIncrementAttribute.class);\n+ OffsetAttribute offsetAtt = stream.addAttribute(OffsetAttribute.class);\n+ PayloadAttribute payloadAtt = storePayloads ? stream.addAttribute(PayloadAttribute.class) : null;\n+ stream.reset();\n+\n+ while (stream.incrementToken()) {\n+// if (DEBUG) System.err.println(\"token='\" + term + \"'\");\n+ info.numTokens++;\n+ final int posIncr = posIncrAttribute.getPositionIncrement();\n+ if (posIncr == 0) {\n+ info.numOverlapTokens++;\n+ }\n+ pos += posIncr;\n+ int ord = info.terms.add(termAtt.getBytesRef());\n+ if (ord < 0) {\n+ ord = (-ord) - 1;\n+ postingsWriter.reset(info.sliceArray.end[ord]);\n+ } else {\n+ info.sliceArray.start[ord] = postingsWriter.startNewSlice();\n+ }\n+ info.sliceArray.freq[ord]++;\n+ info.sumTotalTermFreq++;\n+ postingsWriter.writeInt(pos);\n+ if (storeOffsets) {\n+ postingsWriter.writeInt(offsetAtt.startOffset() + offset);\n+ postingsWriter.writeInt(offsetAtt.endOffset() + offset);\n+ }\n+ if (storePayloads) {\n+ final BytesRef payload = payloadAtt.getPayload();\n+ final int pIndex;\n+ if (payload == null || payload.length == 0) {\n+ pIndex = -1;\n+ } else {\n+ pIndex = payloadsBytesRefs.append(payload);\n+ }\n+ postingsWriter.writeInt(pIndex);\n+ }\n+ info.sliceArray.end[ord] = postingsWriter.getCurrentOffset();\n+ }\n+ stream.end();\n+ if (info.numTokens > 0) {\n+ info.lastPosition = pos;\n+ info.lastOffset = offsetAtt.endOffset() + offset;\n+ }\n+ } catch (IOException e) {\n+ throw new RuntimeException(e);\n+ }\n+ }\n+\n+ /**\n+ * Set the Similarity to be used for calculating field norms\n+ */\n+ public void setSimilarity(Similarity similarity) {\n+ if (frozen)\n+ throw new IllegalArgumentException(\"Cannot set Similarity when MemoryIndex is frozen\");\n+ if (this.normSimilarity == similarity)\n+ return;\n+ this.normSimilarity = similarity;\n+ //invalidate any cached norms that may exist\n+ for (Info info : fields.values()) {\n+ info.norms = null;\n+ }\n+ }\n+\n+ /**\n+ * Creates and returns a searcher that can be used to execute arbitrary\n+ * 
Lucene queries and to collect the resulting query results as hits.\n+ *\n+ * @return a searcher\n+ */\n+ public IndexSearcher createSearcher() {\n+ MemoryIndexReader reader = new MemoryIndexReader();\n+ IndexSearcher searcher = new IndexSearcher(reader); // ensures no auto-close !!\n+ searcher.setSimilarity(normSimilarity);\n+ return searcher;\n+ }\n+\n+ /**\n+ * Prepares the MemoryIndex for querying in a non-lazy way.\n+ * <p>\n+ * After calling this you can query the MemoryIndex from multiple threads, but you\n+ * cannot subsequently add new data.\n+ */\n+ public void freeze() {\n+ this.frozen = true;\n+ for (Info info : fields.values()) {\n+ info.freeze();\n+ }\n+ }\n+\n+ /**\n+ * Convenience method that efficiently returns the relevance score by\n+ * matching this index against the given Lucene query expression.\n+ *\n+ * @param query\n+ * an arbitrary Lucene query to run against this index\n+ * @return the relevance score of the matchmaking; A number in the range\n+ * [0.0 .. 1.0], with 0.0 indicating no match. The higher the number\n+ * the better the match.\n+ *\n+ */\n+ public float search(Query query) {\n+ if (query == null)\n+ throw new IllegalArgumentException(\"query must not be null\");\n+\n+ IndexSearcher searcher = createSearcher();\n+ try {\n+ final float[] scores = new float[1]; // inits to 0.0f (no match)\n+ searcher.search(query, new SimpleCollector() {\n+ private Scorer scorer;\n+\n+ @Override\n+ public void collect(int doc) throws IOException {\n+ scores[0] = scorer.score();\n+ }\n+\n+ @Override\n+ public void setScorer(Scorer scorer) {\n+ this.scorer = scorer;\n+ }\n+\n+ @Override\n+ public boolean needsScores() {\n+ return true;\n+ }\n+ });\n+ float score = scores[0];\n+ return score;\n+ } catch (IOException e) { // can never happen (RAMDirectory)\n+ throw new RuntimeException(e);\n+ } finally {\n+ // searcher.close();\n+ /*\n+ * Note that it is harmless and important for good performance to\n+ * NOT close the index reader!!! This avoids all sorts of\n+ * unnecessary baggage and locking in the Lucene IndexReader\n+ * superclass, all of which is completely unnecessary for this main\n+ * memory index data structure.\n+ *\n+ * Wishing IndexReader would be an interface...\n+ *\n+ * Actually with the new tight createSearcher() API auto-closing is now\n+ * made impossible, hence searcher.close() would be harmless and also\n+ * would not degrade performance...\n+ */\n+ }\n+ }\n+\n+ /**\n+ * Returns a String representation of the index data for debugging purposes.\n+ *\n+ * @return the string representation\n+ */\n+ @Override\n+ public String toString() {\n+ StringBuilder result = new StringBuilder(256);\n+ int sumPositions = 0;\n+ int sumTerms = 0;\n+ final BytesRef spare = new BytesRef();\n+ for (Map.Entry<String, Info> entry : fields.entrySet()) {\n+ String fieldName = entry.getKey();\n+ Info info = entry.getValue();\n+ info.sortTerms();\n+ result.append(fieldName + \":\\n\");\n+ SliceByteStartArray sliceArray = info.sliceArray;\n+ int numPositions = 0;\n+ SliceReader postingsReader = new SliceReader(intBlockPool);\n+ for (int j = 0; j < info.terms.size(); j++) {\n+ int ord = info.sortedTerms[j];\n+ info.terms.get(ord, spare);\n+ int freq = sliceArray.freq[ord];\n+ result.append(\"\\t'\" + spare + \"':\" + freq + \":\");\n+ postingsReader.reset(sliceArray.start[ord], sliceArray.end[ord]);\n+ result.append(\" [\");\n+ final int iters = storeOffsets ? 
3 : 1;\n+ while (!postingsReader.endOfSlice()) {\n+ result.append(\"(\");\n+\n+ for (int k = 0; k < iters; k++) {\n+ result.append(postingsReader.readInt());\n+ if (k < iters - 1) {\n+ result.append(\", \");\n+ }\n+ }\n+ result.append(\")\");\n+ if (!postingsReader.endOfSlice()) {\n+ result.append(\",\");\n+ }\n+\n+ }\n+ result.append(\"]\");\n+ result.append(\"\\n\");\n+ numPositions += freq;\n+ }\n+\n+ result.append(\"\\tterms=\" + info.terms.size());\n+ result.append(\", positions=\" + numPositions);\n+ result.append(\"\\n\");\n+ sumPositions += numPositions;\n+ sumTerms += info.terms.size();\n+ }\n+\n+ result.append(\"\\nfields=\" + fields.size());\n+ result.append(\", terms=\" + sumTerms);\n+ result.append(\", positions=\" + sumPositions);\n+ return result.toString();\n+ }\n+\n+ /**\n+ * Index data structure for a field; contains the tokenized term texts and\n+ * their positions.\n+ */\n+ private final class Info {\n+\n+ private FieldInfo fieldInfo;\n+\n+ /** The norms for this field; computed on demand. */\n+ private transient NumericDocValues norms;\n+\n+ /**\n+ * Term strings and their positions for this field: Map &lt;String\n+ * termText, ArrayIntList positions&gt;\n+ */\n+ private BytesRefHash terms; // note unfortunate variable name class with Terms type\n+\n+ private SliceByteStartArray sliceArray;\n+\n+ /** Terms sorted ascending by term text; computed on demand */\n+ private transient int[] sortedTerms;\n+\n+ /** Number of added tokens for this field */\n+ private int numTokens;\n+\n+ /** Number of overlapping tokens for this field */\n+ private int numOverlapTokens;\n+\n+ /** Boost factor for hits for this field */\n+ private float boost;\n+\n+ private long sumTotalTermFreq;\n+\n+ /** the last position encountered in this field for multi field support*/\n+ private int lastPosition;\n+\n+ /** the last offset encountered in this field for multi field support*/\n+ private int lastOffset;\n+\n+ private BinaryDocValuesProducer binaryProducer;\n+\n+ private NumericDocValuesProducer numericProducer;\n+\n+ private boolean preparedDocValues;\n+\n+ private Info(FieldInfo fieldInfo, ByteBlockPool byteBlockPool) {\n+ this.fieldInfo = fieldInfo;\n+ this.sliceArray = new SliceByteStartArray(BytesRefHash.DEFAULT_CAPACITY);\n+ this.terms = new BytesRefHash(byteBlockPool, BytesRefHash.DEFAULT_CAPACITY, sliceArray);;\n+ this.binaryProducer = new BinaryDocValuesProducer();\n+ this.numericProducer = new NumericDocValuesProducer();\n+ }\n+\n+ void freeze() {\n+ sortTerms();\n+ prepareDocValues();\n+ getNormDocValues();\n+ }\n+\n+ /**\n+ * Sorts hashed terms into ascending order, reusing memory along the\n+ * way. Note that sorting is lazily delayed until required (often it's\n+ * not required at all). 
If a sorted view is required then hashing +\n+ * sort + binary search is still faster and smaller than TreeMap usage\n+ * (which would be an alternative and somewhat more elegant approach,\n+ * apart from more sophisticated Tries / prefix trees).\n+ */\n+ void sortTerms() {\n+ if (sortedTerms == null) {\n+ sortedTerms = terms.sort(BytesRef.getUTF8SortedAsUnicodeComparator());\n+ }\n+ }\n+\n+ void prepareDocValues() {\n+ if (preparedDocValues == false) {\n+ DocValuesType dvType = fieldInfo.getDocValuesType();\n+ if (dvType == DocValuesType.NUMERIC || dvType == DocValuesType.SORTED_NUMERIC) {\n+ numericProducer.prepareForUsage();\n+ }\n+ if (dvType == DocValuesType.BINARY || dvType == DocValuesType.SORTED || dvType == DocValuesType.SORTED_SET) {\n+ binaryProducer.prepareForUsage();\n+ }\n+ preparedDocValues = true;\n+ }\n+ }\n+\n+ NumericDocValues getNormDocValues() {\n+ if (norms == null) {\n+ FieldInvertState invertState = new FieldInvertState(fieldInfo.name, fieldInfo.number,\n+ numTokens, numOverlapTokens, 0, boost);\n+ final long value = normSimilarity.computeNorm(invertState);\n+ if (DEBUG) System.err.println(\"MemoryIndexReader.norms: \" + fieldInfo.name + \":\" + value + \":\" + numTokens);\n+ norms = new NumericDocValues() {\n+\n+ @Override\n+ public long get(int docID) {\n+ if (docID != 0)\n+ throw new IndexOutOfBoundsException();\n+ else\n+ return value;\n+ }\n+\n+ };\n+ }\n+ return norms;\n+ }\n+ }\n+\n+ ///////////////////////////////////////////////////////////////////////////////\n+ // Nested classes:\n+ ///////////////////////////////////////////////////////////////////////////////\n+\n+ private static final class BinaryDocValuesProducer {\n+\n+ BytesRefHash dvBytesValuesSet;\n+ final SortedDocValues sortedDocValues;\n+ final BytesRef spare = new BytesRef();\n+\n+ int[] bytesIds;\n+\n+ private BinaryDocValuesProducer() {\n+ sortedDocValues = new SortedDocValues() {\n+ @Override\n+ public int getOrd(int docID) {\n+ return 0;\n+ }\n+\n+ @Override\n+ public BytesRef lookupOrd(int ord) {\n+ return getValue(ord);\n+ }\n+\n+ @Override\n+ public int getValueCount() {\n+ return 1;\n+ }\n+ };\n+ }\n+\n+ private void prepareForUsage() {\n+ bytesIds = dvBytesValuesSet.sort(BytesRef.getUTF8SortedAsUnicodeComparator());\n+ }\n+\n+ private BytesRef getValue(int index) {\n+ return dvBytesValuesSet.get(bytesIds[index], spare);\n+ }\n+\n+ }\n+\n+ private static final class NumericDocValuesProducer {\n+\n+ long[] dvLongValues;\n+ int count;\n+\n+ final NumericDocValues numericDocValues;\n+ final SortedNumericDocValues sortedNumericDocValues;\n+\n+ private NumericDocValuesProducer() {\n+ this.numericDocValues = new NumericDocValues() {\n+ @Override\n+ public long get(int docID) {\n+ return dvLongValues[0];\n+ }\n+ };\n+ this.sortedNumericDocValues = new SortedNumericDocValues() {\n+ @Override\n+ public void setDocument(int doc) {\n+ }\n+\n+ @Override\n+ public long valueAt(int index) {\n+ return dvLongValues[index];\n+ }\n+\n+ @Override\n+ public int count() {\n+ return count;\n+ }\n+ };\n+ }\n+\n+ private void prepareForUsage() {\n+ Arrays.sort(dvLongValues, 0, count);\n+ }\n+ }\n+\n+ /**\n+ * Search support for Lucene framework integration; implements all methods\n+ * required by the Lucene IndexReader contracts.\n+ */\n+ private final class MemoryIndexReader extends LeafReader {\n+\n+ private MemoryIndexReader() {\n+ super(); // avoid as much superclass baggage as possible\n+ for (Info info : fields.values()) {\n+ info.prepareDocValues();\n+ }\n+ }\n+\n+ @Override\n+ public void 
addCoreClosedListener(CoreClosedListener listener) {\n+ addCoreClosedListenerAsReaderClosedListener(this, listener);\n+ }\n+\n+ @Override\n+ public void removeCoreClosedListener(CoreClosedListener listener) {\n+ removeCoreClosedListenerAsReaderClosedListener(this, listener);\n+ }\n+\n+ private Info getInfoForExpectedDocValuesType(String fieldName, DocValuesType expectedType) {\n+ if (expectedType == DocValuesType.NONE) {\n+ return null;\n+ }\n+ Info info = fields.get(fieldName);\n+ if (info == null) {\n+ return null;\n+ }\n+ if (info.fieldInfo.getDocValuesType() != expectedType) {\n+ return null;\n+ }\n+ return info;\n+ }\n+\n+ @Override\n+ public Bits getLiveDocs() {\n+ return null;\n+ }\n+\n+ @Override\n+ public FieldInfos getFieldInfos() {\n+ FieldInfo[] fieldInfos = new FieldInfo[fields.size()];\n+ int i = 0;\n+ for (Info info : fields.values()) {\n+ fieldInfos[i++] = info.fieldInfo;\n+ }\n+ return new FieldInfos(fieldInfos);\n+ }\n+\n+ @Override\n+ public NumericDocValues getNumericDocValues(String field) {\n+ Info info = getInfoForExpectedDocValuesType(field, DocValuesType.NUMERIC);\n+ if (info != null) {\n+ return info.numericProducer.numericDocValues;\n+ } else {\n+ return null;\n+ }\n+ }\n+\n+ @Override\n+ public BinaryDocValues getBinaryDocValues(String field) {\n+ return getSortedDocValues(field, DocValuesType.BINARY);\n+ }\n+\n+ @Override\n+ public SortedDocValues getSortedDocValues(String field) {\n+ return getSortedDocValues(field, DocValuesType.SORTED);\n+ }\n+\n+ private SortedDocValues getSortedDocValues(String field, DocValuesType docValuesType) {\n+ Info info = getInfoForExpectedDocValuesType(field, docValuesType);\n+ if (info != null) {\n+ return info.binaryProducer.sortedDocValues;\n+ } else {\n+ return null;\n+ }\n+ }\n+\n+ @Override\n+ public SortedNumericDocValues getSortedNumericDocValues(String field) {\n+ Info info = getInfoForExpectedDocValuesType(field, DocValuesType.SORTED_NUMERIC);\n+ if (info != null) {\n+ return info.numericProducer.sortedNumericDocValues;\n+ } else {\n+ return null;\n+ }\n+ }\n+\n+ @Override\n+ public SortedSetDocValues getSortedSetDocValues(String field) {\n+ final Info info = getInfoForExpectedDocValuesType(field, DocValuesType.SORTED_SET);\n+ if (info != null) {\n+ return new SortedSetDocValues() {\n+\n+ int index = 0;\n+\n+ @Override\n+ public long nextOrd() {\n+ if (index >= info.binaryProducer.dvBytesValuesSet.size()) {\n+ return NO_MORE_ORDS;\n+ }\n+ return index++;\n+ }\n+\n+ @Override\n+ public void setDocument(int docID) {\n+ index = 0;\n+ }\n+\n+ @Override\n+ public BytesRef lookupOrd(long ord) {\n+ return info.binaryProducer.getValue((int) ord);\n+ }\n+\n+ @Override\n+ public long getValueCount() {\n+ return info.binaryProducer.dvBytesValuesSet.size();\n+ }\n+ };\n+ } else {\n+ return null;\n+ }\n+ }\n+\n+ @Override\n+ public Bits getDocsWithField(String field) throws IOException {\n+ Info info = fields.get(field);\n+ if (info != null && info.fieldInfo.getDocValuesType() != DocValuesType.NONE) {\n+ return new Bits.MatchAllBits(1);\n+ } else {\n+ return null;\n+ }\n+ }\n+\n+ @Override\n+ public void checkIntegrity() throws IOException {\n+ // no-op\n+ }\n+\n+ @Override\n+ public Fields fields() {\n+ Map<String, Info> filteredFields = new TreeMap<String, Info>();\n+ for (Map.Entry<String, Info> entry : fields.entrySet()) {\n+ if (entry.getValue().numTokens > 0) {\n+ filteredFields.put(entry.getKey(), entry.getValue());\n+ }\n+ }\n+ return new MemoryFields(filteredFields );\n+ }\n+\n+ private class MemoryFields extends Fields 
{\n+\n+ private final Map<String, Info> fields;\n+\n+ public MemoryFields(Map<String, Info> fields) {\n+ this.fields = fields;\n+ }\n+\n+ @Override\n+ public Iterator<String> iterator() {\n+ return fields.keySet().iterator();\n+ }\n+\n+ @Override\n+ public Terms terms(final String field) {\n+ final Info info = fields.get(field);\n+ if (info == null) {\n+ return null;\n+ }\n+\n+ return new Terms() {\n+ @Override\n+ public TermsEnum iterator() {\n+ return new MemoryTermsEnum(info);\n+ }\n+\n+ @Override\n+ public long size() {\n+ return info.terms.size();\n+ }\n+\n+ @Override\n+ public long getSumTotalTermFreq() {\n+ return info.sumTotalTermFreq;\n+ }\n+\n+ @Override\n+ public long getSumDocFreq() {\n+ // each term has df=1\n+ return info.terms.size();\n+ }\n+\n+ @Override\n+ public int getDocCount() {\n+ return size() > 0 ? 1 : 0;\n+ }\n+\n+ @Override\n+ public boolean hasFreqs() {\n+ return true;\n+ }\n+\n+ @Override\n+ public boolean hasOffsets() {\n+ return storeOffsets;\n+ }\n+\n+ @Override\n+ public boolean hasPositions() {\n+ return true;\n+ }\n+\n+ @Override\n+ public boolean hasPayloads() {\n+ return storePayloads;\n+ }\n+ };\n+ }\n+\n+ @Override\n+ public int size() {\n+ return fields.size();\n+ }\n+ }\n+\n+ private class MemoryTermsEnum extends TermsEnum {\n+ private final Info info;\n+ private final BytesRef br = new BytesRef();\n+ int termUpto = -1;\n+\n+ public MemoryTermsEnum(Info info) {\n+ this.info = info;\n+ info.sortTerms();\n+ }\n+\n+ private final int binarySearch(BytesRef b, BytesRef bytesRef, int low,\n+ int high, BytesRefHash hash, int[] ords) {\n+ int mid = 0;\n+ while (low <= high) {\n+ mid = (low + high) >>> 1;\n+ hash.get(ords[mid], bytesRef);\n+ final int cmp = bytesRef.compareTo(b);\n+ if (cmp < 0) {\n+ low = mid + 1;\n+ } else if (cmp > 0) {\n+ high = mid - 1;\n+ } else {\n+ return mid;\n+ }\n+ }\n+ assert bytesRef.compareTo(b) != 0;\n+ return -(low + 1);\n+ }\n+\n+\n+ @Override\n+ public boolean seekExact(BytesRef text) {\n+ termUpto = binarySearch(text, br, 0, info.terms.size()-1, info.terms, info.sortedTerms);\n+ return termUpto >= 0;\n+ }\n+\n+ @Override\n+ public SeekStatus seekCeil(BytesRef text) {\n+ termUpto = binarySearch(text, br, 0, info.terms.size()-1, info.terms, info.sortedTerms);\n+ if (termUpto < 0) { // not found; choose successor\n+ termUpto = -termUpto-1;\n+ if (termUpto >= info.terms.size()) {\n+ return SeekStatus.END;\n+ } else {\n+ info.terms.get(info.sortedTerms[termUpto], br);\n+ return SeekStatus.NOT_FOUND;\n+ }\n+ } else {\n+ return SeekStatus.FOUND;\n+ }\n+ }\n+\n+ @Override\n+ public void seekExact(long ord) {\n+ assert ord < info.terms.size();\n+ termUpto = (int) ord;\n+ info.terms.get(info.sortedTerms[termUpto], br);\n+ }\n+\n+ @Override\n+ public BytesRef next() {\n+ termUpto++;\n+ if (termUpto >= info.terms.size()) {\n+ return null;\n+ } else {\n+ info.terms.get(info.sortedTerms[termUpto], br);\n+ return br;\n+ }\n+ }\n+\n+ @Override\n+ public BytesRef term() {\n+ return br;\n+ }\n+\n+ @Override\n+ public long ord() {\n+ return termUpto;\n+ }\n+\n+ @Override\n+ public int docFreq() {\n+ return 1;\n+ }\n+\n+ @Override\n+ public long totalTermFreq() {\n+ return info.sliceArray.freq[info.sortedTerms[termUpto]];\n+ }\n+\n+ @Override\n+ public PostingsEnum postings(PostingsEnum reuse, int flags) {\n+ if (reuse == null || !(reuse instanceof MemoryPostingsEnum)) {\n+ reuse = new MemoryPostingsEnum();\n+ }\n+ final int ord = info.sortedTerms[termUpto];\n+ return ((MemoryPostingsEnum) reuse).reset(info.sliceArray.start[ord], 
info.sliceArray.end[ord], info.sliceArray.freq[ord]);\n+ }\n+\n+ @Override\n+ public void seekExact(BytesRef term, TermState state) throws IOException {\n+ assert state != null;\n+ this.seekExact(((OrdTermState)state).ord);\n+ }\n+\n+ @Override\n+ public TermState termState() throws IOException {\n+ OrdTermState ts = new OrdTermState();\n+ ts.ord = termUpto;\n+ return ts;\n+ }\n+ }\n+\n+ private class MemoryPostingsEnum extends PostingsEnum {\n+\n+ private final SliceReader sliceReader;\n+ private int posUpto; // for assert\n+ private boolean hasNext;\n+ private int doc = -1;\n+ private int freq;\n+ private int pos;\n+ private int startOffset;\n+ private int endOffset;\n+ private int payloadIndex;\n+ private final BytesRefBuilder payloadBuilder;//only non-null when storePayloads\n+\n+ public MemoryPostingsEnum() {\n+ this.sliceReader = new SliceReader(intBlockPool);\n+ this.payloadBuilder = storePayloads ? new BytesRefBuilder() : null;\n+ }\n+\n+ public PostingsEnum reset(int start, int end, int freq) {\n+ this.sliceReader.reset(start, end);\n+ posUpto = 0; // for assert\n+ hasNext = true;\n+ doc = -1;\n+ this.freq = freq;\n+ return this;\n+ }\n+\n+\n+ @Override\n+ public int docID() {\n+ return doc;\n+ }\n+\n+ @Override\n+ public int nextDoc() {\n+ pos = -1;\n+ if (hasNext) {\n+ hasNext = false;\n+ return doc = 0;\n+ } else {\n+ return doc = NO_MORE_DOCS;\n+ }\n+ }\n+\n+ @Override\n+ public int advance(int target) throws IOException {\n+ return slowAdvance(target);\n+ }\n+\n+ @Override\n+ public int freq() throws IOException {\n+ return freq;\n+ }\n+\n+ @Override\n+ public int nextPosition() {\n+ posUpto++;\n+ assert posUpto <= freq;\n+ assert !sliceReader.endOfSlice() : \" stores offsets : \" + startOffset;\n+ int pos = sliceReader.readInt();\n+ if (storeOffsets) {\n+ //pos = sliceReader.readInt();\n+ startOffset = sliceReader.readInt();\n+ endOffset = sliceReader.readInt();\n+ }\n+ if (storePayloads) {\n+ payloadIndex = sliceReader.readInt();\n+ }\n+ return pos;\n+ }\n+\n+ @Override\n+ public int startOffset() {\n+ return startOffset;\n+ }\n+\n+ @Override\n+ public int endOffset() {\n+ return endOffset;\n+ }\n+\n+ @Override\n+ public BytesRef getPayload() {\n+ if (payloadBuilder == null || payloadIndex == -1) {\n+ return null;\n+ }\n+ return payloadsBytesRefs.get(payloadBuilder, payloadIndex);\n+ }\n+\n+ @Override\n+ public long cost() {\n+ return 1;\n+ }\n+ }\n+\n+ @Override\n+ public Fields getTermVectors(int docID) {\n+ if (docID == 0) {\n+ return fields();\n+ } else {\n+ return null;\n+ }\n+ }\n+\n+ @Override\n+ public int numDocs() {\n+ if (DEBUG) System.err.println(\"MemoryIndexReader.numDocs\");\n+ return 1;\n+ }\n+\n+ @Override\n+ public int maxDoc() {\n+ if (DEBUG) System.err.println(\"MemoryIndexReader.maxDoc\");\n+ return 1;\n+ }\n+\n+ @Override\n+ public void document(int docID, StoredFieldVisitor visitor) {\n+ if (DEBUG) System.err.println(\"MemoryIndexReader.document\");\n+ // no-op: there are no stored fields\n+ }\n+\n+ @Override\n+ protected void doClose() {\n+ if (DEBUG) System.err.println(\"MemoryIndexReader.doClose\");\n+ }\n+\n+ @Override\n+ public NumericDocValues getNormValues(String field) {\n+ Info info = fields.get(field);\n+ if (info == null) {\n+ return null;\n+ }\n+ return info.getNormDocValues();\n+ }\n+\n+ }\n+\n+ /**\n+ * Resets the {@link MemoryIndex} to its initial state and recycles all internal buffers.\n+ */\n+ public void reset() {\n+ fields.clear();\n+ this.normSimilarity = IndexSearcher.getDefaultSimilarity();\n+ byteBlockPool.reset(false, false); 
// no need to 0-fill the buffers\n+ intBlockPool.reset(true, false); // here must must 0-fill since we use slices\n+ if (payloadsBytesRefs != null) {\n+ payloadsBytesRefs.clear();\n+ }\n+ this.frozen = false;\n+ }\n+\n+ private static final class SliceByteStartArray extends DirectBytesStartArray {\n+ int[] start; // the start offset in the IntBlockPool per term\n+ int[] end; // the end pointer in the IntBlockPool for the postings slice per term\n+ int[] freq; // the term frequency\n+\n+ public SliceByteStartArray(int initSize) {\n+ super(initSize);\n+ }\n+\n+ @Override\n+ public int[] init() {\n+ final int[] ord = super.init();\n+ start = new int[ArrayUtil.oversize(ord.length, RamUsageEstimator.NUM_BYTES_INT)];\n+ end = new int[ArrayUtil.oversize(ord.length, RamUsageEstimator.NUM_BYTES_INT)];\n+ freq = new int[ArrayUtil.oversize(ord.length, RamUsageEstimator.NUM_BYTES_INT)];\n+ assert start.length >= ord.length;\n+ assert end.length >= ord.length;\n+ assert freq.length >= ord.length;\n+ return ord;\n+ }\n+\n+ @Override\n+ public int[] grow() {\n+ final int[] ord = super.grow();\n+ if (start.length < ord.length) {\n+ start = ArrayUtil.grow(start, ord.length);\n+ end = ArrayUtil.grow(end, ord.length);\n+ freq = ArrayUtil.grow(freq, ord.length);\n+ }\n+ assert start.length >= ord.length;\n+ assert end.length >= ord.length;\n+ assert freq.length >= ord.length;\n+ return ord;\n+ }\n+\n+ @Override\n+ public int[] clear() {\n+ start = end = null;\n+ return super.clear();\n+ }\n+\n+ }\n+}", "filename": "core/src/main/java/org/apache/lucene/index/memory/ForkedMemoryIndex.java", "status": "added" }, { "diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.analysis.Analyzer;\n import org.apache.lucene.analysis.TokenStream;\n import org.apache.lucene.index.*;\n+import org.apache.lucene.index.memory.ForkedMemoryIndex;\n import org.apache.lucene.index.memory.MemoryIndex;\n import org.apache.lucene.search.IndexSearcher;\n import org.apache.lucene.util.CloseableThreadLocal;\n@@ -42,9 +43,9 @@\n */\n class MultiDocumentPercolatorIndex implements PercolatorIndex {\n \n- private final CloseableThreadLocal<MemoryIndex> cache;\n+ private final CloseableThreadLocal<ForkedMemoryIndex> cache;\n \n- MultiDocumentPercolatorIndex(CloseableThreadLocal<MemoryIndex> cache) {\n+ MultiDocumentPercolatorIndex(CloseableThreadLocal<ForkedMemoryIndex> cache) {\n this.cache = cache;\n }\n \n@@ -54,16 +55,16 @@ public void prepare(PercolateContext context, ParsedDocument parsedDocument) {\n List<ParseContext.Document> docs = parsedDocument.docs();\n int rootDocIndex = docs.size() - 1;\n assert rootDocIndex > 0;\n- MemoryIndex rootDocMemoryIndex = null;\n+ ForkedMemoryIndex rootDocMemoryIndex = null;\n for (int i = 0; i < docs.size(); i++) {\n ParseContext.Document d = docs.get(i);\n- MemoryIndex memoryIndex;\n+ ForkedMemoryIndex memoryIndex;\n if (rootDocIndex == i) {\n // the last doc is always the rootDoc, since that is usually the biggest document it make sense\n // to reuse the MemoryIndex it uses\n memoryIndex = rootDocMemoryIndex = cache.get();\n } else {\n- memoryIndex = new MemoryIndex(true);\n+ memoryIndex = new ForkedMemoryIndex(true);\n }\n Analyzer analyzer = context.mapperService().documentMapper(parsedDocument.type()).mappers().indexAnalyzer();\n memoryIndices[i] = indexDoc(d, analyzer, memoryIndex).createSearcher().getIndexReader();\n@@ -80,31 +81,21 @@ public void prepare(PercolateContext context, ParsedDocument parsedDocument) {\n }\n }\n \n- MemoryIndex indexDoc(ParseContext.Document d, Analyzer analyzer, 
MemoryIndex memoryIndex) {\n+ ForkedMemoryIndex indexDoc(ParseContext.Document d, Analyzer analyzer, ForkedMemoryIndex memoryIndex) {\n for (IndexableField field : d.getFields()) {\n- if (field.fieldType().indexOptions() == IndexOptions.NONE && field.name().equals(UidFieldMapper.NAME)) {\n+ if (field.name().equals(UidFieldMapper.NAME)) {\n continue;\n }\n- try {\n- // TODO: instead of passing null here, we can have a CTL<Map<String,TokenStream>> and pass previous,\n- // like the indexer does\n- try (TokenStream tokenStream = field.tokenStream(analyzer, null)) {\n- if (tokenStream != null) {\n- memoryIndex.addField(field.name(), tokenStream, field.boost());\n- }\n- }\n- } catch (IOException e) {\n- throw new ElasticsearchException(\"Failed to create token stream\", e);\n- }\n+ memoryIndex.addField(field, analyzer);\n }\n return memoryIndex;\n }\n \n private class DocSearcher extends Engine.Searcher {\n \n- private final MemoryIndex rootDocMemoryIndex;\n+ private final ForkedMemoryIndex rootDocMemoryIndex;\n \n- private DocSearcher(IndexSearcher searcher, MemoryIndex rootDocMemoryIndex) {\n+ private DocSearcher(IndexSearcher searcher, ForkedMemoryIndex rootDocMemoryIndex) {\n super(\"percolate\", searcher);\n this.rootDocMemoryIndex = rootDocMemoryIndex;\n }", "filename": "core/src/main/java/org/elasticsearch/percolator/MultiDocumentPercolatorIndex.java", "status": "modified" }, { "diff": "@@ -22,8 +22,7 @@\n \n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.ReaderUtil;\n-import org.apache.lucene.index.memory.ExtendedMemoryIndex;\n-import org.apache.lucene.index.memory.MemoryIndex;\n+import org.apache.lucene.index.memory.ForkedMemoryIndex;\n import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.BooleanClause.Occur;\n import org.apache.lucene.search.BooleanQuery;\n@@ -126,7 +125,7 @@ public class PercolatorService extends AbstractComponent {\n private final ScriptService scriptService;\n private final MappingUpdatedAction mappingUpdatedAction;\n \n- private final CloseableThreadLocal<MemoryIndex> cache;\n+ private final CloseableThreadLocal<ForkedMemoryIndex> cache;\n \n private final ParseFieldMatcher parseFieldMatcher;\n \n@@ -150,11 +149,11 @@ public PercolatorService(Settings settings, IndexNameExpressionResolver indexNam\n this.sortParseElement = new SortParseElement();\n \n final long maxReuseBytes = settings.getAsBytesSize(\"indices.memory.memory_index.size_per_thread\", new ByteSizeValue(1, ByteSizeUnit.MB)).bytes();\n- cache = new CloseableThreadLocal<MemoryIndex>() {\n+ cache = new CloseableThreadLocal<ForkedMemoryIndex>() {\n @Override\n- protected MemoryIndex initialValue() {\n+ protected ForkedMemoryIndex initialValue() {\n // TODO: should we expose payloads as an option? 
should offsets be turned on always?\n- return new ExtendedMemoryIndex(true, false, maxReuseBytes);\n+ return new ForkedMemoryIndex(true, false, maxReuseBytes);\n }\n };\n single = new SingleDocumentPercolatorIndex(cache);", "filename": "core/src/main/java/org/elasticsearch/percolator/PercolatorService.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.apache.lucene.analysis.TokenStream;\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.IndexableField;\n+import org.apache.lucene.index.memory.ForkedMemoryIndex;\n import org.apache.lucene.index.memory.MemoryIndex;\n import org.apache.lucene.util.CloseableThreadLocal;\n import org.elasticsearch.ElasticsearchException;\n@@ -39,28 +40,22 @@\n */\n class SingleDocumentPercolatorIndex implements PercolatorIndex {\n \n- private final CloseableThreadLocal<MemoryIndex> cache;\n+ private final CloseableThreadLocal<ForkedMemoryIndex> cache;\n \n- SingleDocumentPercolatorIndex(CloseableThreadLocal<MemoryIndex> cache) {\n+ SingleDocumentPercolatorIndex(CloseableThreadLocal<ForkedMemoryIndex> cache) {\n this.cache = cache;\n }\n \n @Override\n public void prepare(PercolateContext context, ParsedDocument parsedDocument) {\n- MemoryIndex memoryIndex = cache.get();\n+ ForkedMemoryIndex memoryIndex = cache.get();\n for (IndexableField field : parsedDocument.rootDoc().getFields()) {\n- if (field.fieldType().indexOptions() == IndexOptions.NONE && field.name().equals(UidFieldMapper.NAME)) {\n+ if (field.name().equals(UidFieldMapper.NAME)) {\n continue;\n }\n try {\n Analyzer analyzer = context.mapperService().documentMapper(parsedDocument.type()).mappers().indexAnalyzer();\n- // TODO: instead of passing null here, we can have a CTL<Map<String,TokenStream>> and pass previous,\n- // like the indexer does\n- try (TokenStream tokenStream = field.tokenStream(analyzer, null)) {\n- if (tokenStream != null) {\n- memoryIndex.addField(field.name(), tokenStream, field.boost());\n- }\n- }\n+ memoryIndex.addField(field, analyzer);\n } catch (Exception e) {\n throw new ElasticsearchException(\"Failed to create token stream for [\" + field.name() + \"]\", e);\n }\n@@ -70,9 +65,9 @@ public void prepare(PercolateContext context, ParsedDocument parsedDocument) {\n \n private class DocEngineSearcher extends Engine.Searcher {\n \n- private final MemoryIndex memoryIndex;\n+ private final ForkedMemoryIndex memoryIndex;\n \n- public DocEngineSearcher(MemoryIndex memoryIndex) {\n+ public DocEngineSearcher(ForkedMemoryIndex memoryIndex) {\n super(\"percolate\", memoryIndex.createSearcher());\n this.memoryIndex = memoryIndex;\n }", "filename": "core/src/main/java/org/elasticsearch/percolator/SingleDocumentPercolatorIndex.java", "status": "modified" }, { "diff": "@@ -2067,5 +2067,30 @@ public void testGeoShapeWithMapUnmappedFieldAsString() throws Exception {\n assertThat(response1.getMatches()[0].getId().string(), equalTo(\"1\"));\n }\n \n+ @Test\n+ public void testGeoPoint() throws Exception {\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"type\", \"location\", \"type=geo_point\"));\n+ client().prepareIndex(\"test\", PercolatorService.TYPE_NAME, \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"query\", geoBoundingBoxQuery(\"location\")\n+ .topLeft(42.36, -71.09)\n+ .bottomRight(42.355, -71.085)\n+ ).endObject())\n+ .get();\n+ refresh();\n+\n+ PercolateResponse response1 = client().preparePercolate()\n+ .setIndices(\"test\").setDocumentType(\"type\")\n+ .setPercolateDoc(docBuilder().setDoc(jsonBuilder().startObject()\n+ 
.startObject(\"location\")\n+ .field(\"lon\", -71.0875)\n+ .field(\"lat\", 42.3575)\n+ .endObject()))\n+ .execute().actionGet();\n+ assertMatchCount(response1, 1L);\n+ assertThat(response1.getMatches().length, equalTo(1));\n+ assertThat(response1.getMatches()[0].getId().string(), equalTo(\"1\"));\n+ }\n+\n }\n ", "filename": "core/src/test/java/org/elasticsearch/percolator/PercolatorIT.java", "status": "modified" } ] }
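A minimal, self-contained sketch of the forked in-memory index API introduced in the diff above. The field name, sample text, and query below are illustrative assumptions, not taken from the PR; only the `ForkedMemoryIndex` calls (`addField`, `freeze`, `search`, `reset`) come from the class itself.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.memory.ForkedMemoryIndex;
import org.apache.lucene.search.TermQuery;

public class ForkedMemoryIndexSketch {
    public static void main(String[] args) {
        // true = store offsets, as the percolator code above does when creating its instances.
        ForkedMemoryIndex index = new ForkedMemoryIndex(true);
        try (StandardAnalyzer analyzer = new StandardAnalyzer()) {
            // Hypothetical field and text; any analyzed field is added the same way.
            index.addField("abstract", "salmon fishing manual", analyzer);
        }
        // After freeze() the single-document index may be queried from multiple threads.
        index.freeze();
        float score = index.search(new TermQuery(new Term("abstract", "salmon")));
        System.out.println("score=" + score);
        // reset() recycles the internal block pools so the instance can be reused.
        index.reset();
    }
}
```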
{ "body": "Using a script in the percentile aggregation in Kibana 4, this results in an Elasticsearch NullPointerException. [Example](https://www.dropbox.com/s/op3kzha6qwjlbpe/Screenshot%202015-10-23%2014.08.17.png?dl=0). This seems to be caused by Kibana including both the field and script property. It is not possible to exclude the field property from Kibana.\n\nFor the sake of simplicity I narrowed this down to a plain simple ES query. Elasticsearch 1.7.3 on a bare CentOS 7.1 box installed using the yum repo. No changes have been made to the ES config.\n\nBe sure to have one document in an index with a numerical field.\n\n```\n$ curl -XDELETE localhost:9200/test\n\n$ curl -XPUT localhost:9200/test/test/1?pretty=true -d '{\n \"metric\": 100\n}'\n\n{\n \"_index\" : \"test\",\n \"_type\" : \"test\",\n \"_id\" : \"1\",\n \"_version\" : 1,\n \"created\" : true\n}\n```\n\nMapping has become a long value, which is fine for percentile agg\n\n```\n$ curl -XGET localhost:9200/test/_mapping?pretty=true\n\n{\n \"test\" : {\n \"mappings\" : {\n \"test\" : {\n \"properties\" : {\n \"metric\" : {\n \"type\" : \"long\"\n }\n }\n }\n }\n }\n}\n```\n\nRunning a query with both the field and script properties supplied. Results in a NullPointerException. I expected that the script would have precedence.\n\n```\n$ curl -XPOST localhost:9200/test/test/_search?pretty=true -d '{\n \"size\": 0,\n \"query\": {\n \"match_all\": {}\n },\n \"aggs\": {\n \"some-agg\": {\n \"percentiles\": {\n \"field\": \"metric\",\n \"lang\": \"expression\",\n \"script\": \"1\"\n }\n }\n }\n}'\n\n{\n \"took\" : 4,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 4,\n \"failed\" : 1,\n \"failures\" : [ {\n \"index\" : \"test\",\n \"shard\" : 2,\n \"status\" : 500,\n \"reason\" : \"QueryPhaseExecutionException[[test][2]: query[ConstantScore(cache(_type:test))],from[0],size[0]: Query Failed [Failed to execute main query]]; nested: NullPointerException; \"\n } ]\n },\n \"hits\" : {\n \"total\" : 0,\n \"max_score\" : 0.0,\n \"hits\" : [ ]\n },\n \"aggregations\" : {\n \"some-agg\" : {\n \"values\" : {\n \"1.0\" : \"NaN\",\n \"5.0\" : \"NaN\",\n \"25.0\" : \"NaN\",\n \"50.0\" : \"NaN\",\n \"75.0\" : \"NaN\",\n \"95.0\" : \"NaN\",\n \"99.0\" : \"NaN\"\n }\n }\n }\n}\n```\n", "comments": [ { "body": "PS; this is what is included in the Elasticsearch log file:\n\n```\n[2015-10-23 16:29:19,142][DEBUG][action.search.type ] [Pip the Troll] [test][2], node[YWWDb9UsQEGefx8R1f_Urw], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@35c01a59] lastShard [true]\norg.elasticsearch.search.query.QueryPhaseExecutionException: [test][2]: query[ConstantScore(cache(_type:test))],from[0],size[0]: Query Failed [Failed to execute main query]\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:163)\n at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:301)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:312)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at 
java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.script.expression.ExpressionScript.setNextVar(ExpressionScript.java:109)\n at org.elasticsearch.search.aggregations.support.ValuesSource$Numeric$WithScript$DoubleValues.setDocument(ValuesSource.java:513)\n at org.elasticsearch.search.aggregations.metrics.percentiles.AbstractPercentilesAggregator.collect(AbstractPercentilesAggregator.java:83)\n at org.elasticsearch.search.aggregations.AggregationPhase$AggregationsCollector.collect(AggregationPhase.java:161)\n at org.elasticsearch.common.lucene.MultiCollector.collect(MultiCollector.java:60)\n at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:193)\n at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)\n at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)\n at org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:191)\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:117)\n ... 8 more\n```\n", "created_at": "2015-10-23T14:30:15Z" }, { "body": "Note: this script works correctly with Groovy\n", "created_at": "2015-10-23T18:40:38Z" } ], "number": 14262, "title": "NullPointerException when combining field and script in aggregation" }
{ "body": "FIx #14262\n", "number": 17091, "review_comments": [], "title": "Check that _value is used in aggregations script before setting value to specialValue" }
{ "commits": [ { "message": "Check that _value is used in aggregations script before setting value to specialValue #14262" } ], "files": [ { "diff": "@@ -112,14 +112,16 @@ public void setSource(Map<String, Object> source) {\n \n @Override\n public void setNextVar(String name, Object value) {\n- assert(specialValue != null);\n // this should only be used for the special \"_value\" variable used in aggregations\n assert(name.equals(\"_value\"));\n \n- if (value instanceof Number) {\n- specialValue.setValue(((Number)value).doubleValue());\n- } else {\n- throw new ScriptException(\"Cannot use expression with text variable using \" + compiledScript);\n+ // _value isn't used in script if specialValue == null\n+ if (specialValue != null) {\n+ if (value instanceof Number) {\n+ specialValue.setValue(((Number)value).doubleValue());\n+ } else {\n+ throw new ScriptException(\"Cannot use expression with text variable using \" + compiledScript);\n+ }\n }\n }\n };", "filename": "modules/lang-expression/src/main/java/org/elasticsearch/script/expression/ExpressionSearchScript.java", "status": "modified" }, { "diff": "@@ -383,7 +383,11 @@ public void testSpecialValueVariable() throws Exception {\n .script(new Script(\"_value * 3\", ScriptType.INLINE, ExpressionScriptEngineService.NAME, null)))\n .addAggregation(\n AggregationBuilders.stats(\"double_agg\").field(\"y\")\n- .script(new Script(\"_value - 1.1\", ScriptType.INLINE, ExpressionScriptEngineService.NAME, null)));\n+ .script(new Script(\"_value - 1.1\", ScriptType.INLINE, ExpressionScriptEngineService.NAME, null)))\n+ .addAggregation(\n+ AggregationBuilders.stats(\"const_agg\").field(\"x\")\n+ .script(new Script(\"3.0\", ScriptType.INLINE, ExpressionScriptEngineService.NAME, null))\n+ );\n \n SearchResponse rsp = req.get();\n assertEquals(3, rsp.getHits().getTotalHits());\n@@ -395,6 +399,11 @@ public void testSpecialValueVariable() throws Exception {\n stats = rsp.getAggregations().get(\"double_agg\");\n assertEquals(0.7, stats.getMax(), 0.0001);\n assertEquals(0.1, stats.getMin(), 0.0001);\n+\n+ stats = rsp.getAggregations().get(\"const_agg\");\n+ assertThat(stats.getMax(), equalTo(3.0));\n+ assertThat(stats.getMin(), equalTo(3.0));\n+ assertThat(stats.getAvg(), equalTo(3.0));\n }\n \n public void testStringSpecialValueVariable() throws Exception {", "filename": "modules/lang-expression/src/test/java/org/elasticsearch/script/expression/MoreExpressionTests.java", "status": "modified" } ] }
{ "body": "I just tried to build the debian package and start it on ubuntu 12 LTS. This is the error I got, when running the start-stop-daemon command manually, otherwise the error is concealed.\n\n``` bash\nstart-stop-daemon -d /usr/share/elasticsearch --start --user elasticsearch -c elasticsearch --pidfile /var/run/elasticsearch/elasticsearch.pid --exec /usr/share/elasticsearch/bin/elasticsearch -- -d -p /var/run/elasticsearch/elasticsearch.pid --default.path.home=/usr/share/elasticsearch --default.path.logs=/var/log/elasticsearch --default.path.data=/var/lib/elasticsearch --default.path.conf=/etc/elasticsearch\nroot@vagrant-ubuntu-precise-64:~# Starts elasticsearch\n\nOption Description\n------ -----------\n-D Configures an Elasticsearch setting\n-V, --version Prints elasticsearch version\n information and exits\n-d, --daemonize Starts Elasticsearch in the background\n-h, --help show help\n-p, --pidfile Creates a pid file in the specified\n path on start\n-s, --silent show minimal output\n-v, --verbose show verbose output\nERROR: default.path.home is not a recognized option\n```\n", "comments": [ { "body": "looks similar for the RPM, but did not test it\n", "created_at": "2016-03-13T16:19:52Z" }, { "body": "This is a consequence of #17024 and I opened #17087 to address. Note that even with the fix for this particular issue, starting Elasticsearch as a service will still fail because of #16579.\n", "created_at": "2016-03-13T19:52:45Z" }, { "body": "Closed by #17087.\n", "created_at": "2016-03-14T00:18:31Z" } ], "number": 17084, "title": "Debian package does not start in master branch" }
{ "body": "This commit addresses an issue in the init scripts which are passing\ninvalid command line arguments to the startup script.\n\nCloses #17084\n", "number": 17087, "review_comments": [], "title": "Do not pass double-dash arguments on startup" }
{ "commits": [ { "message": "Do not pass double-dash arguments on startup\n\nThis commit addresses an issue in the init scripts which are passing\ninvalid command line arguments to the startup script." } ], "files": [ { "diff": "@@ -99,7 +99,7 @@ fi\n # Define other required variables\n PID_FILE=\"$PID_DIR/$NAME.pid\"\n DAEMON=$ES_HOME/bin/elasticsearch\n-DAEMON_OPTS=\"-d -p $PID_FILE --default.path.home=$ES_HOME --default.path.logs=$LOG_DIR --default.path.data=$DATA_DIR --default.path.conf=$CONF_DIR\"\n+DAEMON_OPTS=\"-d -p $PID_FILE -D es.default.path.home=$ES_HOME -D es.default.path.logs=$LOG_DIR -D es.default.path.data=$DATA_DIR -D es.default.path.conf=$CONF_DIR\"\n \n export ES_HEAP_SIZE\n export ES_HEAP_NEWSIZE", "filename": "distribution/deb/src/main/packaging/init.d/elasticsearch", "status": "modified" }, { "diff": "@@ -117,7 +117,7 @@ start() {\n cd $ES_HOME\n echo -n $\"Starting $prog: \"\n # if not running, start it up here, usually something like \"daemon $exec\"\n- daemon --user $ES_USER --pidfile $pidfile $exec -p $pidfile -d -Des.default.path.home=$ES_HOME -Des.default.path.logs=$LOG_DIR -Des.default.path.data=$DATA_DIR -Des.default.path.conf=$CONF_DIR\n+ daemon --user $ES_USER --pidfile $pidfile $exec -p $pidfile -d -D es.default.path.home=$ES_HOME -D es.default.path.logs=$LOG_DIR -D es.default.path.data=$DATA_DIR -D es.default.path.conf=$CONF_DIR\n retval=$?\n echo\n [ $retval -eq 0 ] && touch $lockfile", "filename": "distribution/rpm/src/main/packaging/init.d/elasticsearch", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 2.x branch\n\n**OS version**: Fedora 23\n\n**Description of the problem including expected versus actual behavior**:\n\nCurrently our [Fedora-based build jobs for 2.x](https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+2.x+multijob-os-compatibility/os=fedora/) fail with the following message:\n\n```\n[INFO] spawn rpm --define _gpg_name 16E55242 --define _gpg_path /var/lib/jenkins/workspace/elastic+elasticsearch+2.x+multijob-os-compatibility/os/fedora/distribution/src/test/resources/dummyGpg --addsign elasticsearch-2.3.0-SNAPSHOT20160309223154.noarch.rpm\n[INFO] elasticsearch-2.3.0-SNAPSHOT20160309223154.noarch.rpm:\n[INFO] gpg: WARNING: unsafe permissions on homedir '/var/lib/jenkins/workspace/elastic+elasticsearch+2.x+multijob-os-compatibility/os/fedora/distribution/src/test/resources/dummyGpg'\n[INFO] gpg: starting migration from earlier GnuPG versions\n[INFO] gpg: can't connect to the agent: Invalid value passed to IPC\n[INFO] gpg: error: GnuPG agent unusable. Please check that a GnuPG agent can be started.\n[INFO] gpg: migration aborted\n[INFO] gpg: can't connect to the agent: Invalid value passed to IPC\n[INFO] gpg: skipped \"16E55242\": No secret key\n[INFO] gpg: signing failed: No secret key\nBuild was aborted\nFinished: ABORTED\n```\n\nThe root cause that gpg-agent (installed version 2.1.9) creates a Unix socket with the following path name which is too long (at most 108 bytes are allowed):\n\n```\n[jenkins@slave-2f48afe8 dummyGpg]$ gpg-agent --homedir . --daemon -vvv\ngpg-agent[28440]: socket name '/var/lib/jenkins/workspace/elastic+elasticsearch+2.x+multijob-os-compatibility/os/fedora/distribution/src/test/resources/dummyGpg/S.gpg-agent' is too long\n```\n\n**Steps to reproduce**:\n1. Run `mvn verify` on Fedora 23\n", "comments": [ { "body": "A potential workaround for this issue is to add a symlink to a shorter path (e.g. some subdirectory in the tmp folder) and use that instead of the original path. The [GnuPG team is already discussing on shortening the Unix socket path](http://comments.gmane.org/gmane.comp.encryption.gpg.devel/21073) in a future version.\n", "created_at": "2016-03-10T15:26:35Z" }, { "body": "Is there some argument we can give that'll stop trying to use the agent and just use the nasty expect hacks that every other OS uses?\n", "created_at": "2016-03-10T15:43:04Z" }, { "body": "I am not really familiar how the process works but the maven-rpm-plugin provides the [parameters](http://www.mojohaus.org/rpm-maven-plugin/rpm-mojo.html) `keyPassphrase`, `keyname` and `keypath` [which we already set](https://github.com/elastic/elasticsearch/blob/2.x/distribution/rpm/pom.xml#L109-L113). Also, the maven-rpm-plugin [creates an expect script](https://github.com/mojohaus/rpm-maven-plugin/blob/0efd27d3beb64c291b960731fe1bdbbb5dbd938e/src/main/java/org/codehaus/mojo/rpm/RPMSigner.java#L137).\n", "created_at": "2016-03-10T16:06:59Z" }, { "body": "@nik9000: Another \"solution\" to this problem is to deactivate signing on this specific platform with the `skipSign` property that you've introduced recently. I am not sure whether we really want to test dummy-signing on this platform. Especially considering that the symlink mentioned above is also just a workaround IMHO. Wdyt?\n", "created_at": "2016-03-11T08:27:41Z" }, { "body": "I've just learned that we cannot deactivate signing on a specific platform in the current build infrastructure, leaving us with the symlink workaround for the Maven build. 
:(\n", "created_at": "2016-03-11T10:00:57Z" }, { "body": "re: https://github.com/elastic/elasticsearch/issues/17053#issuecomment-195303351 there could be some refactoring in the jjb jobs but it would essentially break the logic of one job definition for all platforms which is one important part of the simplication/devops style management of the new CI system.\n\nHowever, cc'ing @elasticdog directly here in case he has some other idea.\n", "created_at": "2016-03-11T10:09:12Z" }, { "body": "I looked into bypassing what exactly rpm invokes when signing with gpg and found a rather quite informative answer in the fedora forums.\n\nIn [that answer](https://ask.fedoraproject.org/en/question/56107/can-gpg-agent-be-used-when-signing-rpm-packages/?answer=57111#post-id-57111) the actual gpg sign command is explicitly defined, so as suggested by @nik9000 then I looked at a way to skip the gpg-agent. Unfortunately with gpg2 this is not possible, as `man gpg2` says:\n\n```\n --use-agent\n\n --no-use-agent\n This is dummy option. gpg2 always requires the agent.\n```\n\nStill looking at other hacks we could put in the definition of `__gpg_sign_cmd`\n", "created_at": "2016-03-11T12:03:11Z" }, { "body": "I think this should be closed since #17070 and #17073 were integrated? I think that this was not closed automatically because they were not merged into master, but instead into 2.x. Please reopen if I'm closing this in error.\n", "created_at": "2016-03-12T13:20:37Z" }, { "body": "@jasontedor You were right; this was just an oversight. Thanks for closing.\n", "created_at": "2016-03-14T07:15:50Z" } ], "number": 17053, "title": "rpm signing fails with gpg-agent 2.1.9" }
{ "body": "With this commit we apply the same GPG path shortening logic for\nthe RPM and the DEB packaging modules.\n\nRelates #17053\n", "number": 17073, "review_comments": [ { "body": "I don't think we even try to sign these on windows. Its worth testing there, but I think we skip the whole thing.\n\nHonestly I'd be totally fine skipping deb and rpm entirely on windows.\n", "created_at": "2016-03-11T15:49:38Z" }, { "body": "remove the .dir from the end of this name I think.\n", "created_at": "2016-03-11T15:50:35Z" }, { "body": "We do sign on Windows (at least the deb plugin checks the path name). I can create a separate issue for skipping the package builds on Windows.\n", "created_at": "2016-03-11T15:53:50Z" }, { "body": "Makes sense. I'll correct that.\n", "created_at": "2016-03-11T15:54:07Z" } ], "title": "Shorten GPG keypath for deb packaging" }
{ "commits": [ { "message": "Shorten GPG keypath for deb packaging\n\nWith this commit we apply the same GPG path shortening logic for\nthe RPM and the DEB packaging modules.\n\nRelates #17053" }, { "message": "Allow signing of packages on Windows too" }, { "message": "Remove .dir from Ant script property" } ], "files": [ { "diff": "@@ -0,0 +1,46 @@\n+<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+<project name=\"correct-sign-path\" default=\"shorten-gpg-path\">\n+ <target name=\"check-keypath-override\">\n+ <condition property=\"gpg.keypath.overridden\" value=\"true\" else=\"false\">\n+ <not>\n+ <equals arg1=\"${gpg.keypath}\" arg2=\"${gpg.default.keypath}\"/>\n+ </not>\n+ </condition>\n+\n+ <condition property=\"shorten.gpg.path\" value=\"true\" else=\"false\">\n+ <and>\n+ <isfalse value=\"${gpg.keypath.overridden}\"/>\n+ <os family=\"unix\"/>\n+ </and>\n+ </condition>\n+\n+ <condition property=\"copy.gpg.path\" value=\"true\" else=\"false\">\n+ <and>\n+ <isfalse value=\"${gpg.keypath.overridden}\"/>\n+ <os family=\"windows\"/>\n+ </and>\n+ </condition>\n+\n+ </target>\n+\n+ <!--\n+ Either use a symlink (Unix) to shorten the GPG path or copy the directory (Windows).\n+\n+ In case the gpg.keypath has been overwritten externally we don't do any symlink magic\n+ -->\n+ <target name=\"symlink-gpg-path\" depends=\"check-keypath-override\" if=\"${shorten.gpg.path}\">\n+ <echo level=\"info\" message=\"Symlinking ${gpg.long.keypath} to ${gpg.keypath}\"/>\n+ <symlink link=\"${gpg.keypath}\" resource=\"${gpg.long.keypath}\" overwrite=\"true\"/>\n+ </target>\n+\n+\n+ <target name=\"copy-gpg-path\" depends=\"check-keypath-override\" if=\"${copy.gpg.path}\">\n+ <echo level=\"info\" message=\"Copying ${gpg.long.keypath} to ${gpg.keypath}\"/>\n+ <copy todir=\"${gpg.keypath}\">\n+ <fileset dir=\"${gpg.long.keypath}\"/>\n+ </copy>\n+ </target>\n+\n+ <target name=\"shorten-gpg-path\" depends=\"symlink-gpg-path, copy-gpg-path\"/>\n+\n+</project>", "filename": "distribution/correct-sign-path.xml", "status": "added" }, { "diff": "@@ -303,6 +303,18 @@\n <groupId>org.apache.maven.plugins</groupId>\n <artifactId>maven-antrun-plugin</artifactId>\n <executions>\n+ <execution>\n+ <id>shorten-gpg-key-path</id>\n+ <phase>prepare-package</phase>\n+ <goals>\n+ <goal>run</goal>\n+ </goals>\n+ <configuration>\n+ <target>\n+ <ant antfile=\"${packaging.gpg.shortening.ant.script}\"/>\n+ </target>\n+ </configuration>\n+ </execution>\n <!-- start up external cluster -->\n <execution>\n <id>integ-setup</id>", "filename": "distribution/deb/pom.xml", "status": "modified" }, { "diff": "@@ -45,6 +45,7 @@\n <!-- we expect packaging formats to have integration tests, but not unit tests -->\n <skip.unit.tests>true</skip.unit.tests>\n \n+ <packaging.gpg.shortening.ant.script>${project.basedir}/../correct-sign-path.xml</packaging.gpg.shortening.ant.script>\n <!-- By default we sign RPMs and DEBs using a dummy gpg directory that we check into\n source control. Releases will override this with the real information. -->\n <gpg.key>16E55242</gpg.key>", "filename": "distribution/pom.xml", "status": "modified" }, { "diff": "@@ -364,7 +364,7 @@\n </goals>\n <configuration>\n <target>\n- <ant antfile=\"correct-sign-path.xml\" target=\"shorten-gpg-path\"/>\n+ <ant antfile=\"${packaging.gpg.shortening.ant.script}\"/>\n </target>\n </configuration>\n </execution>", "filename": "distribution/rpm/pom.xml", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 2.x branch\n\n**OS version**: Fedora 23\n\n**Description of the problem including expected versus actual behavior**:\n\nCurrently our [Fedora-based build jobs for 2.x](https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+2.x+multijob-os-compatibility/os=fedora/) fail with the following message:\n\n```\n[INFO] spawn rpm --define _gpg_name 16E55242 --define _gpg_path /var/lib/jenkins/workspace/elastic+elasticsearch+2.x+multijob-os-compatibility/os/fedora/distribution/src/test/resources/dummyGpg --addsign elasticsearch-2.3.0-SNAPSHOT20160309223154.noarch.rpm\n[INFO] elasticsearch-2.3.0-SNAPSHOT20160309223154.noarch.rpm:\n[INFO] gpg: WARNING: unsafe permissions on homedir '/var/lib/jenkins/workspace/elastic+elasticsearch+2.x+multijob-os-compatibility/os/fedora/distribution/src/test/resources/dummyGpg'\n[INFO] gpg: starting migration from earlier GnuPG versions\n[INFO] gpg: can't connect to the agent: Invalid value passed to IPC\n[INFO] gpg: error: GnuPG agent unusable. Please check that a GnuPG agent can be started.\n[INFO] gpg: migration aborted\n[INFO] gpg: can't connect to the agent: Invalid value passed to IPC\n[INFO] gpg: skipped \"16E55242\": No secret key\n[INFO] gpg: signing failed: No secret key\nBuild was aborted\nFinished: ABORTED\n```\n\nThe root cause that gpg-agent (installed version 2.1.9) creates a Unix socket with the following path name which is too long (at most 108 bytes are allowed):\n\n```\n[jenkins@slave-2f48afe8 dummyGpg]$ gpg-agent --homedir . --daemon -vvv\ngpg-agent[28440]: socket name '/var/lib/jenkins/workspace/elastic+elasticsearch+2.x+multijob-os-compatibility/os/fedora/distribution/src/test/resources/dummyGpg/S.gpg-agent' is too long\n```\n\n**Steps to reproduce**:\n1. Run `mvn verify` on Fedora 23\n", "comments": [ { "body": "A potential workaround for this issue is to add a symlink to a shorter path (e.g. some subdirectory in the tmp folder) and use that instead of the original path. The [GnuPG team is already discussing on shortening the Unix socket path](http://comments.gmane.org/gmane.comp.encryption.gpg.devel/21073) in a future version.\n", "created_at": "2016-03-10T15:26:35Z" }, { "body": "Is there some argument we can give that'll stop trying to use the agent and just use the nasty expect hacks that every other OS uses?\n", "created_at": "2016-03-10T15:43:04Z" }, { "body": "I am not really familiar how the process works but the maven-rpm-plugin provides the [parameters](http://www.mojohaus.org/rpm-maven-plugin/rpm-mojo.html) `keyPassphrase`, `keyname` and `keypath` [which we already set](https://github.com/elastic/elasticsearch/blob/2.x/distribution/rpm/pom.xml#L109-L113). Also, the maven-rpm-plugin [creates an expect script](https://github.com/mojohaus/rpm-maven-plugin/blob/0efd27d3beb64c291b960731fe1bdbbb5dbd938e/src/main/java/org/codehaus/mojo/rpm/RPMSigner.java#L137).\n", "created_at": "2016-03-10T16:06:59Z" }, { "body": "@nik9000: Another \"solution\" to this problem is to deactivate signing on this specific platform with the `skipSign` property that you've introduced recently. I am not sure whether we really want to test dummy-signing on this platform. Especially considering that the symlink mentioned above is also just a workaround IMHO. Wdyt?\n", "created_at": "2016-03-11T08:27:41Z" }, { "body": "I've just learned that we cannot deactivate signing on a specific platform in the current build infrastructure, leaving us with the symlink workaround for the Maven build. 
:(\n", "created_at": "2016-03-11T10:00:57Z" }, { "body": "re: https://github.com/elastic/elasticsearch/issues/17053#issuecomment-195303351 there could be some refactoring in the jjb jobs but it would essentially break the logic of one job definition for all platforms which is one important part of the simplication/devops style management of the new CI system.\n\nHowever, cc'ing @elasticdog directly here in case he has some other idea.\n", "created_at": "2016-03-11T10:09:12Z" }, { "body": "I looked into bypassing what exactly rpm invokes when signing with gpg and found a rather quite informative answer in the fedora forums.\n\nIn [that answer](https://ask.fedoraproject.org/en/question/56107/can-gpg-agent-be-used-when-signing-rpm-packages/?answer=57111#post-id-57111) the actual gpg sign command is explicitly defined, so as suggested by @nik9000 then I looked at a way to skip the gpg-agent. Unfortunately with gpg2 this is not possible, as `man gpg2` says:\n\n```\n --use-agent\n\n --no-use-agent\n This is dummy option. gpg2 always requires the agent.\n```\n\nStill looking at other hacks we could put in the definition of `__gpg_sign_cmd`\n", "created_at": "2016-03-11T12:03:11Z" }, { "body": "I think this should be closed since #17070 and #17073 were integrated? I think that this was not closed automatically because they were not merged into master, but instead into 2.x. Please reopen if I'm closing this in error.\n", "created_at": "2016-03-12T13:20:37Z" }, { "body": "@jasontedor You were right; this was just an oversight. Thanks for closing.\n", "created_at": "2016-03-14T07:15:50Z" } ], "number": 17053, "title": "rpm signing fails with gpg-agent 2.1.9" }
{ "body": "With this commit we symlink to the tmp directory in order to avoid\na too long socket path for GPG agent when signing an RPM package.\n\nCloses #17053\n", "number": 17070, "review_comments": [], "title": "Shorten GPG keypath by symlinking to tmp" }
{ "commits": [ { "message": "Shorten GPG keypath by symlinking to tmp\n\nWith this commit we symlink to the tmp directory in order to avoid\na too long socket path for GPG agent when signing an RPM package.\n\nCloses #17053" } ], "files": [ { "diff": "@@ -48,7 +48,21 @@\n <!-- By default we sign RPMs and DEBs using a dummy gpg directory that we check into\n source control. Releases will override this with the real information. -->\n <gpg.key>16E55242</gpg.key>\n- <gpg.keypath>${project.parent.basedir}/src/test/resources/dummyGpg</gpg.keypath>\n+ <!-- We're not using project.parent.basedir here by intention because it causes trouble\n+ (i.e. the property does not get resolved; verified with\n+ mvn help:effective-pom -pl distribution/rpm -Prpm). -->\n+ <gpg.long.keypath>${project.basedir}/../src/test/resources/dummyGpg</gpg.long.keypath>\n+ <!-- This is the path that is used internally for signing. gpg-agent allows this path to\n+ be at most 108 characters on Linux (see struct sockaddr_un in <sys/un.h>) and even less\n+ on other systems. By symlinking to the tmp directory, we reduce the path name length.\n+\n+ We use an internal property \"gpg.default.keypath\" which is not intended to be overridden\n+ on the command line. Instead, when signing a package in the release process,\n+ \"gpg.keypath\" should be provided (as it has been before). The build script will detect\n+ this and will not use symlinking magic in this case.\n+ -->\n+ <gpg.default.keypath>${java.io.tmpdir}/shortGpg</gpg.default.keypath>\n+ <gpg.keypath>${gpg.default.keypath}</gpg.keypath>\n <gpg.keyring>${gpg.keypath}/secring.gpg</gpg.keyring>\n <gpg.passphrase>dummy</gpg.passphrase>\n <deb.sign>true</deb.sign>", "filename": "distribution/pom.xml", "status": "modified" }, { "diff": "@@ -0,0 +1,16 @@\n+<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+<project name=\"correct-sign-path\">\n+ <target name=\"check-keypath-override\">\n+ <condition property=\"gpg.keypath.overridden\" value=\"true\" else=\"false\">\n+ <not>\n+ <equals arg1=\"${gpg.keypath}\" arg2=\"${gpg.default.keypath}\"/>\n+ </not>\n+ </condition>\n+ </target>\n+\n+ <!--in case the gpg.keypath has been overwritten externally we don't do any symlink magic -->\n+ <target name=\"shorten-gpg-path\" depends=\"check-keypath-override\" unless=\"${gpg.keypath.overridden}\">\n+ <echo level=\"info\" message=\"Symlinking ${gpg.long.keypath} to ${gpg.keypath}\"/>\n+ <symlink link=\"${gpg.keypath}\" resource=\"${gpg.long.keypath}\" overwrite=\"true\"/>\n+ </target>\n+</project>", "filename": "distribution/rpm/correct-sign-path.xml", "status": "added" }, { "diff": "@@ -356,6 +356,18 @@\n <groupId>org.apache.maven.plugins</groupId>\n <artifactId>maven-antrun-plugin</artifactId>\n <executions>\n+ <execution>\n+ <id>shorten-gpg-key-path</id>\n+ <phase>prepare-package</phase>\n+ <goals>\n+ <goal>run</goal>\n+ </goals>\n+ <configuration>\n+ <target>\n+ <ant antfile=\"correct-sign-path.xml\" target=\"shorten-gpg-path\"/>\n+ </target>\n+ </configuration>\n+ </execution>\n <!-- start up external cluster -->\n <execution>\n <id>integ-setup</id>", "filename": "distribution/rpm/pom.xml", "status": "modified" } ] }
{ "body": "This ticket is meant to capture an issue which was discovered as part of the work done in #7493 , which contains a [failing reproduction test](https://github.com/elasticsearch/elasticsearch/blob/596a4a073584c4262d574828c9caea35b5ed1de5/src/test/java/org/elasticsearch/discovery/DiscoveryWithServiceDisruptions.java#L375) with @awaitFix.\n\nIf a network partition separates a node from the master, there is some window of time before the node detects it. The length of the window is dependent on the type of the partition. This window is extremely small if a socket is broken. More adversarial partitions, for example, silently dropping requests without breaking the socket can take longer (up to 3x30s using current defaults).\n\nIf the node hosts a _primary_ shard at the moment of partition, and ends up being isolated from the cluster (which could have resulted in Split Brain before), some documents that are being indexed into the primary _may_ be lost if they fail to reach one of the allocated replicas (due to the partition) and that replica is later promoted to primary by the master.\n", "comments": [ { "body": "I am curious to learn what your current thinking on fixing the issue is. I believe so long as we are ensuring the write is acknowledged by `WriteConsistencyLevel.QUORUM` or `WriteConsistencyLevel.ALL`, the problem should not theoretically happen. This seems to be what `TransportShardReplicationOperationAction` is aiming at, but may be buggy?\n\nAs an aside, can you point me at the primary-selection logic used by Elasticsearch?\n", "created_at": "2014-10-21T06:54:44Z" }, { "body": "@shikhar the write consistency check works at the moment based of the cluster state of the node that hosts the primary. That means that it can take some time (again, when the network is just dropping requests, socket disconnects are quick) before the master detects a node does not respond to pings and removes it from the cluster states (or that a node detects it's not connected to a master). The first step is improving transparency w.r.t replica shards indexing errors (see #7994). That will help expose when a document was not successfully indexed to all replicas. After that we plan to continue with improving primary shard promotion. Current code is here: https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java#L271\n", "created_at": "2014-10-21T08:15:04Z" }, { "body": "Ah I see, my thinking was that the WCL check be verified _both_ before and after the write has been sent. The after is what really matters. So it seems you are suggesting that the responsibility of verifying how many replicas a write was acknowledged by, will be borne by the requestor? I think the terminology around \"write consistency level\" check may have to be re-considered then!\n\nFrom the primary selection logic I can't spot anywhere where it's trying to pick the most \"recent\" replica of the candidates. Does ES currently exercise any such preference?\n", "created_at": "2014-10-21T19:26:25Z" }, { "body": "> So it seems you are suggesting that the responsibility of verifying how many replicas a write was acknowledged by, will be borne by the requestor? \n\nThe PR I mentioned is just a first step to bring more transparency into the process, by no means the goal. \n\n> From the primary selection logic I can't spot anywhere where it's trying to pick the most \"recent\" replica of the candidates. 
Does ES currently exercise any such preference?\n\n\"recent\" is very tricky when you index concurrently different documents of different sizes on different nodes. Depending on how things run, there is no notion of a clear \"recent\" shard as each replica may be behind on different documents, all in flight. I currently have some thoughts on how to approach this better but it's early stages. One of the options is take make a intermediate step which will indeed involve some heuristic around \"recency\".\n", "created_at": "2014-10-21T19:35:03Z" }, { "body": "> \"recent\" is very tricky when you index concurrently different documents of different sizes on different nodes. Depending on how things run, there is no notion of a clear \"recent\" shard as each replica may be behind on different documents, all in flight. I currently have some thoughts on how to approach this better but it's early stages. One of the options is take make a intermediate step which will indeed involve some heuristic around \"recency\".\n\nAgreed that it's tricky. \n\nIt seems to me that what's required is a shard-specific monotonic counter, and since all writes go through the primary this can be safely implemented. Is this blocking on the \"sequence ID\" stuff I think I saw some talk of? Is there a ticket for that?\n", "created_at": "2014-10-21T20:16:10Z" }, { "body": "> It seems to me that what's required is a shard-specific monotonic counter, and since all writes go through the primary this can be safely implemented. Is this blocking on the \"sequence ID\" stuff I think I saw some talk of? \n\nYou read our minds :)\n", "created_at": "2014-10-21T20:27:14Z" }, { "body": "[recommendation](https://twitter.com/aphyr/status/524599768526233601) from @aphyr for this problem: viewstamped replication\n", "created_at": "2014-10-21T20:28:30Z" }, { "body": "Or Paxos, or ZAB, or Raft, or ...\n", "created_at": "2014-10-21T20:39:54Z" }, { "body": "Chiming with a related note that I mentioned on the mailing list (@shikhar linked me here) re: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/elasticsearch/M17mgdZnikk/Vk5lVIRjIFAJ. This is failure mode that can happen without a network partition... just crashing nodes (which you can easily get with some long GC pauses) \n\n## \n\nI think the monotonic counters are a good solution to this, but only if they count something that indicates not only state (The next document inserted to the shard should be document 1000), but also size (which implies that I have 999 documents in my copy of the shard). This way, if you end up in a position where a partially-replicated shard is promoted to master (because it has the only copy of the shard remaining in the cluster), you can now offer the user some interesting cluster configuration options: \n\n1) serve the data I have, but accept no writes/updates (until a `full` shard returns to the cluster)\n2) temporarily close the index / 500 error (until a `full` shard returns to the cluster)\n3) promote what I have to master (and re-replicate my copy to other nodes when they re-join the cluster)\n\nWithout _knowing_ that a shard is in this \"partial-data\" state, you couldn't make the choice. I would personally choose #1 most of the time, but I can see use cases for all three options. I would argue that #3 is what is happening presently. 
While this would add overhead to each write/update (you would need to count the number of documents in the shard EACH write), I think that allowing ES to run in this \"more safe\" mode is a good option. Hopefully the suggestion isn't too crazy, as this would only add a check on the local copy of the data, and we probably only need to do it on the master shard. \n", "created_at": "2014-10-24T06:50:14Z" }, { "body": "> 3) promote what I have to master (and re-replicate my copy to other nodes when they re-join the cluster)\n\nThere's some [great literature that addresses this problem](http://web.stanford.edu/class/cs347/reading/zab.pdf).\n", "created_at": "2014-10-24T20:47:32Z" }, { "body": "@evantahler \n\n> This way, if you end up in a position where a partially-replicated shard is promoted to master (because it has the only copy of the shard remaining in the cluster)\n\nThis should never happen. ES prefers to go to red state and block indexing to promoting half copies to primaries. If it did it is a major bug and I would request you open another issue about it (this one is about something else). \n", "created_at": "2014-10-24T21:27:40Z" }, { "body": "linking #10708 \n", "created_at": "2015-05-05T17:57:30Z" }, { "body": "Since this issue is related to in-flight documents. Do you think there is a risk to loose existing document during primary shard relocation (cluster rebalancing after adding a new node for instance )?\n", "created_at": "2015-07-13T09:45:59Z" }, { "body": "@JeanFrancoisContour this issue relates to documents that are wrongfully acked. I.e., ES acknowledge them but they didn't really reach all the replicas. They are lost when the primary is removed in favour of one of the other replica due to a network partition that isolates the primary. It should effect primary relocation. If you have issues there do please report by opening a different ticket.\n", "created_at": "2015-07-16T09:34:06Z" }, { "body": "Ok thanks, so if we can afford to send data twice (same _id), in real time for the first event and a few hour later (bulk) for the second try, we are pretty confident in ES overall ?\n", "created_at": "2015-07-16T17:05:54Z" }, { "body": "For the record, the majority of the work to fix this can be found at #14252\n", "created_at": "2016-04-07T07:55:47Z" } ], "number": 7572, "title": "[Indexing] A network partition can cause in flight documents to be lost" }
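Much of the discussion above turns on the write consistency level check. As a concrete point of reference, the level could at the time be requested per operation, as sketched below; the index, type, id, and host are placeholders, and, as the thread explains, the check was evaluated against the primary node's possibly stale cluster state, so it did not by itself close the window in which acknowledged writes could be lost.

```bash
# Sketch (pre-5.0 API): ask the index operation to require a quorum of shard copies.
# The thread above explains why this check alone cannot prevent the described loss.
curl -s -XPUT 'localhost:9200/test/type/1?consistency=quorum' -d '{"field": "value"}'
```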
{ "body": "Recent work on the distributed systems front brings us to a point where\nwe are very close to being able to enable the acked indexing test. We\nhave chased down one additional failure that will require the primary\nterms work to address, but let's get this preparatory work in.\n\nCloses #7572\n", "number": 17038, "review_comments": [ { "body": "This feels way more natural on the IndexShard level - I felt that way when I made it but couldn't find a way (at the time) to push it to IndexShard while getting the debugging information to be used as the string value here. We can pass it to IndexShard.acquirePrimaryOperationLock and acquireReplicaOperationLock but that will force a toString on the request object. How about making the reason an Object type and only call toString on it in IndexShard? \n", "created_at": "2016-03-10T13:02:11Z" }, { "body": "or even better a reason supplier called on demand?\n", "created_at": "2016-03-10T13:02:40Z" }, { "body": "can we open a follow up issue that failShard should throw an already closed exception? (this is really what the code protects against). To be clear - I think the code can stay.\n", "created_at": "2016-03-10T13:03:59Z" }, { "body": "add a comment about where these exceptions can come from?\n", "created_at": "2016-03-10T13:04:28Z" }, { "body": "add a comment why we do it out of loop? (i.e. loop again)\n", "created_at": "2016-03-10T13:05:43Z" }, { "body": "can we add that it came from the current master? Also - thinking about this more - this can only happen in our testing right? In practice we only have 1 channel (if someone doesn't mess with some settings). Maybe add a comment about it if so...\n", "created_at": "2016-03-10T13:12:10Z" }, { "body": "haha. I fixed this inline - forgot :) I think this should be a separate PR - it's not really needed for this one (we protect for it).\n", "created_at": "2016-03-10T16:14:03Z" }, { "body": "I think it will be good to explain here that unresponsive disruption swallows request by design, meaning that some operations will never be marked as finished.\n", "created_at": "2016-03-10T16:15:25Z" }, { "body": "I wonder if should rarely (and if nightly runs are enable) run with a longer timeout, say 5s...\n", "created_at": "2016-03-10T16:18:05Z" }, { "body": "nit: I think this is cleaner\n\n```\n assertTrue(\"doc [\"+id+\"] should have been created\", response.isCreated())\n```\n", "created_at": "2016-03-10T16:24:19Z" }, { "body": "hmm... I'm not sure about this. It means we are not indexing when the partition is healed... not sure why I changed it. I think it was to make sure that we have some indexing during disruption but looking at it now I think we should split into rounds - indexing while disrupting , stop disrupt and index some more, then wait for the cluster to heal. WDYT?\n", "created_at": "2016-03-10T16:27:52Z" }, { "body": "I hate IntelliJ's auto trimming. I have no idea who came up with that logic..\n", "created_at": "2016-03-10T16:29:06Z" }, { "body": "why do we need the subsequence? the original list is already ordered - we can just use it? also note that strictly speaking we do support out of order publishing (until we remove the settings that allows it, one day.. :)).\n", "created_at": "2016-03-10T16:33:41Z" }, { "body": "I pushed 14ba0c31b4a90595efdfa64f3de23904a354f1dc.\n", "created_at": "2016-03-28T16:45:41Z" }, { "body": "I pushed 37d739a3cdde49e10dc30c6c94eb90447656fd42.\n", "created_at": "2016-03-28T16:48:04Z" }, { "body": "Down with unrelated formatting changes! 
(I pushed 97be38353a936ebe245f4fe7f1d311c82f784b11.) \n", "created_at": "2016-03-28T16:51:16Z" }, { "body": "Pushed 4e1f62eae93a8e1d447c1e7884ef2fb8fa037476.\n", "created_at": "2016-03-28T17:20:17Z" }, { "body": "I pushed 2a9388912bc7e73381eb8b4d39fe03462abc7cb9.\n", "created_at": "2016-03-28T17:35:07Z" }, { "body": "I agree that it's more natural there, but I prefer to keep it where it is until there's a clear need for it at the lower level.\n", "created_at": "2016-03-28T17:37:02Z" }, { "body": "I opened #17366.\n", "created_at": "2016-03-28T17:41:33Z" }, { "body": "Reverted in 5576526c9182877184b2d347ca0eece875e0804f.\n", "created_at": "2016-03-28T17:45:43Z" }, { "body": "I pushed 4de57fc5aac31c3a907529a1b932c934cdf611f6.\n", "created_at": "2016-03-28T17:50:46Z" }, { "body": "Pushed 85d3d51a7490c670a93c31967f46c339156881c6.\n", "created_at": "2016-03-28T17:55:20Z" }, { "body": "guys we can't have static maps on classes like this. If we want some kind of static assertions we should inject some mock services just as we do for for `SearchContext` in `MockSearchService`. I am a bit confused why we have a static factory method on literally the only impl of an interface in `IndexShardReferenceImpl` this smells like a design flaw to me. I think with this change together we should rather expose a pluggalbe service or move the creation into `IndexService` and let folks plug in a factory into `IndexModule` in order to impl this assertion? but static per JVM concurrent maps seems like a playing russian roulette \n", "created_at": "2016-03-29T13:07:14Z" }, { "body": "Removed in 0e5b22a648064e5257d03e5fd91293e7500de27f.\n", "created_at": "2016-03-29T15:15:09Z" }, { "body": "nit: this was lost, but I think is a good thing to have.\n", "created_at": "2016-03-30T10:28:55Z" }, { "body": "opinions?\n", "created_at": "2016-03-30T10:33:30Z" }, { "body": "I agree but would prefer that we do it in a follow up. That is, let's get the test running and stabilized (if necessary) and then enhance it. Are you okay with that @bleskes?\n", "created_at": "2016-04-02T17:34:55Z" }, { "body": "OK!\n", "created_at": "2016-04-02T17:38:54Z" }, { "body": "I pushed 8b970d970ddfb32c27d51f436c4d6f7b35258540.\n", "created_at": "2016-04-02T17:42:06Z" } ], "title": "Enable acked indexing" }
{ "commits": [ { "message": "enable testAckedIndexing" }, { "message": "protect against transport shutdowns" }, { "message": "make index counter checking use assertBusy" }, { "message": "better open reference reporting" }, { "message": "fixed closing shard reference on shard already closed" }, { "message": "failure message with shard routing + protect against already closed" }, { "message": "better cluster stability check" }, { "message": "close timeoutlisteners with no timeout..." }, { "message": "increased timeout" }, { "message": "disabled in flight ops check" }, { "message": "better proteciton against throwables with no message in assertions" }, { "message": "better assertion" }, { "message": "don't through fail to send exception on a the generic thread - it may be shut down already and will an exception" }, { "message": "more shutdown exceptions" }, { "message": "Merge branch 'master' into enable_acked\n\n* master: (350 commits)\n Note to configuration docs on number of threads\n Reduce maximum number of threads in boostrap check\n Limit generic thread pool\n Remove NodeService injection to Discovery\n Prevent closing index during snapshot restore\n [TEST] Fix newline issue in PluginCliTests on Windows\n ParseFieldMatcher should log when using deprecated settings. #16988\n fix checkstyle error\n Add test for the index_options on a keyword field. #16990\n Analysis : Allow string explain param in JSON\n Analysis : Allow string explain param in JSON\n fix typo\n Remove SNAPSHOT from versions in plugin descriptors\n Add support for alpha versions\n Enable unmap hack for java 9\n Simplify mock scripts\n Adding `time_zone` parameter to daterange-aggregation docs\n Adding tests for `time_zone` parameter for date range aggregation\n Added ingest info to node info API, which contains a list of available processors.\n Remove bw compat from size mapper\n ..." }, { "message": "No old states polluting pending states queue\n\nThis commit adds a guard against an old cluster state that arrives out\nof order from the last seen cluster state from the current master from\npolluting the pending cluster states queue. Without this guard, such a\nstate can end up stuck in the pending states queue." }, { "message": "Waiting for primary terms" }, { "message": "Merge branch 'master' into enable_acked\n\n* master: (419 commits)\n Remove PROTOTYPE from ShapeBuilders\n Take filterNodeIds into consideration while sending tasks actions requests to nodes\n test: cleanup imports and method rename\n Remove PROTOTYPE from SortBuilders\n percolator: Add query extract support for the blended term query and the common terms query.\n Don't iterate over shard routing if it's null\n [TEST] Reduce size of random shapes\n Add some debug logging to testPrimaryRelocationWhileIndexing\n Order methods in IndicesClusterStateService according to execution\n Tidied up percolator doc annotations\n In cat.snapshots, repository is required\n Do not retrieve all indices stats when checking for cache resets\n Enforce `discovery.zen.minimum_master_nodes` is set when bound to a public ip #17288\n Port Primary Terms to master #17044\n Revert \"Add debug logging for Vagrant upgrade test\"\n Ownership for data, logs, and configs for packages\n add on_failure exception metadata to ingest document for verbose simulate\n Revert \"Merge pull request #16843 from xuzha/s3-encryption\"\n Update Format, add new settings into the setting test\n Update and rebase the init implementation.\n ..." 
}, { "message": "Use longer timeout on nightly tests, but rarely\n\nThis commit increases the timeout while indexing during the acked\nindexing test when running nightly tests, but only rarely." }, { "message": "Simplify doc creation check in acked indexing test\n\nThis commit simplifies the doc creation check while indexing in the\nacked indexing test." }, { "message": "Fix formatting in DWSDIT#TCJDOPI" }, { "message": "Clarify exceptions when failing to fail a replica" }, { "message": "Add clarifying comment on disrupted in-flight ops" }, { "message": "For now do not guard against already failed engine" }, { "message": "Simplify test out of order commit messages" }, { "message": "Clarify message on out-of-order state publish" }, { "message": "Remove pending locks assertions from TRA" }, { "message": "Merge branch 'master' into enable_acked\n\n* master: (25 commits)\n Replication operation that try to perform the primary phase on a replica should be retried\n split long line in ConvertProcessorTests\n add type conversion support to ConvertProcessor\n percolator: Make explain use the two phase iterator\n test: make sure we don't flush during indexing the percolator queries\n Added experimental annotation to the update-by-query and reindex docs\n Fixed bad YAML in reindex REST test: 50_routing.yaml\n Update-by-query rest tests: fixed bad yaml and deleted a client-dependent test\n Prevents exception being raised when ordering by an aggregation which wasn't collected\n The reindex body is now required, which changes the exception thrown by the REST test\n Docs: Included Nodes Task API and tidied reindex/update-by-query\n Rename update-by-query REST tests to update_by_query\n REST: The body is required in the reindex API\n The source parameter should not be defined in the delete-by-query REST spec\n Renamed update-by-query REST spec to update_by_query\n Fix test bug in TypeQueryBuilderTests.\n Add comment why it is safe to check the number of nested fields in MapperService.merge.\n Automatically add a sub keyword field to string dynamic mappings. #17188\n Type filters should not have a performance impact when there is a single type. #17350\n Add API to explain why a shard is or isn't assigned\n ..." }, { "message": "Enable acked indexing test" }, { "message": "Adjust for long random timeout in acked indexing" } ], "files": [ { "diff": "@@ -79,6 +79,7 @@\n import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n+import java.util.concurrent.ConcurrentHashMap;\n import java.util.concurrent.ConcurrentMap;\n import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.atomic.AtomicInteger;\n@@ -679,7 +680,7 @@ protected void doRun() throws Exception {\n return;\n }\n // closed in finishAsFailed(e) in the case of error\n- indexShardReference = getIndexShardReferenceOnPrimary(shardId);\n+ indexShardReference = getIndexShardReferenceOnPrimary(shardId, request);\n if (indexShardReference.isRelocated() == false) {\n executeLocally();\n } else {\n@@ -797,7 +798,7 @@ void finishBecauseUnavailable(ShardId shardId, String message) {\n * returns a new reference to {@link IndexShard} to perform a primary operation. 
Released after performing primary operation locally\n * and replication of the operation to all replica shards is completed / failed (see {@link ReplicationPhase}).\n */\n- protected IndexShardReference getIndexShardReferenceOnPrimary(ShardId shardId) {\n+ protected IndexShardReference getIndexShardReferenceOnPrimary(ShardId shardId, Request request) {\n IndexService indexService = indicesService.indexServiceSafe(shardId.getIndex());\n IndexShard indexShard = indexService.getShard(shardId.id());\n // we may end up here if the cluster state used to route the primary is so stale that the underlying\n@@ -816,7 +817,8 @@ protected IndexShardReference getIndexShardReferenceOnPrimary(ShardId shardId) {\n protected IndexShardReference getIndexShardReferenceOnReplica(ShardId shardId, long primaryTerm) {\n IndexService indexService = indicesService.indexServiceSafe(shardId.getIndex());\n IndexShard indexShard = indexService.getShard(shardId.id());\n- return IndexShardReferenceImpl.createOnReplica(indexShard, primaryTerm);\n+ IndexShardReference ref = IndexShardReferenceImpl.createOnReplica(indexShard, primaryTerm);\n+ return ref;\n }\n \n /**\n@@ -997,30 +999,38 @@ public void handleException(TransportException exp) {\n String message = String.format(Locale.ROOT, \"failed to perform %s on replica on node %s\", transportReplicaAction, node);\n logger.warn(\"[{}] {}\", exp, shardId, message);\n shardStateAction.shardFailed(\n- shard,\n- indexShardReference.routingEntry(),\n- message,\n- exp,\n- new ShardStateAction.Listener() {\n- @Override\n- public void onSuccess() {\n- onReplicaFailure(nodeId, exp);\n- }\n-\n- @Override\n- public void onFailure(Throwable shardFailedError) {\n- if (shardFailedError instanceof ShardStateAction.NoLongerPrimaryShardException) {\n- ShardRouting primaryShard = indexShardReference.routingEntry();\n- String message = String.format(Locale.ROOT, \"primary shard [%s] was demoted while failing replica shard [%s] for [%s]\", primaryShard, shard, exp);\n- // we are no longer the primary, fail ourselves and start over\n- indexShardReference.failShard(message, shardFailedError);\n- forceFinishAsFailed(new RetryOnPrimaryException(shardId, message, shardFailedError));\n- } else {\n- assert false : shardFailedError;\n+ shard,\n+ indexShardReference.routingEntry(),\n+ message,\n+ exp,\n+ new ShardStateAction.Listener() {\n+ @Override\n+ public void onSuccess() {\n onReplicaFailure(nodeId, exp);\n }\n+\n+ @Override\n+ public void onFailure(Throwable shardFailedError) {\n+ if (shardFailedError instanceof ShardStateAction.NoLongerPrimaryShardException) {\n+ String message = \"unknown\";\n+ try {\n+ ShardRouting primaryShard = indexShardReference.routingEntry();\n+ message = String.format(Locale.ROOT, \"primary shard [%s] was demoted while failing replica shard [%s] for [%s]\", primaryShard, shard, exp);\n+ // we are no longer the primary, fail ourselves and start over\n+ indexShardReference.failShard(message, shardFailedError);\n+ } catch (Throwable t) {\n+ shardFailedError.addSuppressed(t);\n+ }\n+ forceFinishAsFailed(new RetryOnPrimaryException(shardId, message, shardFailedError));\n+ } else {\n+ // these can occur if the node is shutting down and are okay\n+ // any other exception here is not expected and merits investigation\n+ assert shardFailedError instanceof TransportException ||\n+ shardFailedError instanceof NodeClosedException : shardFailedError;\n+ onReplicaFailure(nodeId, exp);\n+ }\n+ }\n }\n- }\n );\n }\n }\n@@ -1108,7 +1118,9 @@ protected boolean 
shouldExecuteReplication(Settings settings) {\n \n interface IndexShardReference extends Releasable {\n boolean isRelocated();\n+\n void failShard(String reason, @Nullable Throwable e);\n+\n ShardRouting routingEntry();\n \n /** returns the primary term of the current operation */", "filename": "core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java", "status": "modified" }, { "diff": "@@ -53,6 +53,7 @@\n import org.elasticsearch.transport.ConnectTransportException;\n import org.elasticsearch.transport.EmptyTransportResponseHandler;\n import org.elasticsearch.transport.NodeDisconnectedException;\n+import org.elasticsearch.transport.RemoteTransportException;\n import org.elasticsearch.transport.TransportChannel;\n import org.elasticsearch.transport.TransportException;\n import org.elasticsearch.transport.TransportRequest;\n@@ -111,7 +112,7 @@ public void handleException(TransportException exp) {\n waitForNewMasterAndRetry(actionName, observer, shardRoutingEntry, listener);\n } else {\n logger.warn(\"{} unexpected failure while sending request [{}] to [{}] for shard [{}]\", exp, shardRoutingEntry.getShardRouting().shardId(), actionName, masterNode, shardRoutingEntry);\n- listener.onFailure(exp.getCause());\n+ listener.onFailure(exp instanceof RemoteTransportException ? exp.getCause() : exp);\n }\n }\n });", "filename": "core/src/main/java/org/elasticsearch/cluster/action/shard/ShardStateAction.java", "status": "modified" }, { "diff": "@@ -210,6 +210,7 @@ synchronized protected void doStart() {\n @Override\n synchronized protected void doStop() {\n for (NotifyTimeout onGoingTimeout : onGoingTimeouts) {\n+ onGoingTimeout.cancel();\n try {\n onGoingTimeout.cancel();\n onGoingTimeout.listener.onClose();\n@@ -218,6 +219,12 @@ synchronized protected void doStop() {\n }\n }\n ThreadPool.terminate(updateTasksExecutor, 10, TimeUnit.SECONDS);\n+ // close timeout listeners that did not have an ongoing timeout\n+ postAppliedListeners\n+ .stream()\n+ .filter(listener -> listener instanceof TimeoutClusterStateListener)\n+ .map(listener -> (TimeoutClusterStateListener)listener)\n+ .forEach(TimeoutClusterStateListener::onClose);\n remove(localNodeMasterListeners);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/cluster/service/ClusterService.java", "status": "modified" }, { "diff": "@@ -36,6 +36,7 @@\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Priority;\n+import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.component.Lifecycle;\n import org.elasticsearch.common.inject.Inject;\n@@ -188,7 +189,14 @@ public ZenDiscovery(Settings settings, ClusterName clusterName, ThreadPool threa\n this.nodesFD = new NodesFaultDetection(settings, threadPool, transportService, clusterName);\n this.nodesFD.addListener(new NodeFaultDetectionListener());\n \n- this.publishClusterState = new PublishClusterStateAction(settings, transportService, this, new NewPendingClusterStateListener(), discoverySettings, clusterName);\n+ this.publishClusterState =\n+ new PublishClusterStateAction(\n+ settings,\n+ transportService,\n+ clusterService::state,\n+ new NewPendingClusterStateListener(),\n+ discoverySettings,\n+ clusterName);\n this.pingService.setPingContextProvider(this);\n this.membership = new MembershipAction(settings, clusterService, transportService, this, 
new MembershipListener());\n \n@@ -766,15 +774,24 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n * If the first condition fails we reject the cluster state and throw an error.\n * If the second condition fails we ignore the cluster state.\n */\n- static boolean shouldIgnoreOrRejectNewClusterState(ESLogger logger, ClusterState currentState, ClusterState newClusterState) {\n+ @SuppressForbidden(reason = \"debug\")\n+ public static boolean shouldIgnoreOrRejectNewClusterState(ESLogger logger, ClusterState currentState, ClusterState newClusterState) {\n validateStateIsFromCurrentMaster(logger, currentState.nodes(), newClusterState);\n- if (currentState.supersedes(newClusterState)) {\n+\n+ // reject cluster states that are not new from the same master\n+ if (currentState.supersedes(newClusterState) ||\n+ (newClusterState.nodes().getMasterNodeId().equals(currentState.nodes().getMasterNodeId()) && currentState.version() == newClusterState.version())) {\n // if the new state has a smaller version, and it has the same master node, then no need to process it\n+ logger.debug(\"received a cluster state that is not newer than the current one, ignoring (received {}, current {})\", newClusterState.version(), currentState.version());\n+ return true;\n+ }\n+\n+ // reject older cluster states if we are following a master\n+ if (currentState.nodes().getMasterNodeId() != null && newClusterState.version() < currentState.version()) {\n logger.debug(\"received a cluster state that has a lower version than the current one, ignoring (received {}, current {})\", newClusterState.version(), currentState.version());\n return true;\n- } else {\n- return false;\n }\n+ return false;\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java", "status": "modified" }, { "diff": "@@ -164,16 +164,18 @@ public synchronized void markAsProcessed(ClusterState state) {\n currentMaster\n );\n }\n- } else if (state.supersedes(pendingState) && pendingContext.committed()) {\n- logger.trace(\"processing pending state uuid[{}]/v[{}] together with state uuid[{}]/v[{}]\",\n- pendingState.stateUUID(), pendingState.version(), state.stateUUID(), state.version()\n- );\n- contextsToRemove.add(pendingContext);\n- pendingContext.listener.onNewClusterStateProcessed();\n } else if (pendingState.stateUUID().equals(state.stateUUID())) {\n assert pendingContext.committed() : \"processed cluster state is not committed \" + state;\n contextsToRemove.add(pendingContext);\n pendingContext.listener.onNewClusterStateProcessed();\n+ } else if (state.version() >= pendingState.version()) {\n+ logger.trace(\"processing pending state uuid[{}]/v[{}] together with state uuid[{}]/v[{}]\",\n+ pendingState.stateUUID(), pendingState.version(), state.stateUUID(), state.version()\n+ );\n+ contextsToRemove.add(pendingContext);\n+ if (pendingContext.committed()) {\n+ pendingContext.listener.onNewClusterStateProcessed();\n+ }\n }\n }\n // now ack the processed state", "filename": "core/src/main/java/org/elasticsearch/discovery/zen/publish/PendingClusterStatesQueue.java", "status": "modified" }, { "diff": "@@ -41,7 +41,6 @@\n import org.elasticsearch.discovery.BlockingClusterStatePublishResponseHandler;\n import org.elasticsearch.discovery.Discovery;\n import org.elasticsearch.discovery.DiscoverySettings;\n-import org.elasticsearch.discovery.zen.DiscoveryNodesProvider;\n import org.elasticsearch.discovery.zen.ZenDiscovery;\n import org.elasticsearch.threadpool.ThreadPool;\n import 
org.elasticsearch.transport.BytesTransportRequest;\n@@ -58,11 +57,13 @@\n import java.util.ArrayList;\n import java.util.HashMap;\n import java.util.HashSet;\n+import java.util.Locale;\n import java.util.Map;\n import java.util.Set;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.function.Supplier;\n \n /**\n *\n@@ -81,17 +82,22 @@ public interface NewPendingClusterStateListener {\n }\n \n private final TransportService transportService;\n- private final DiscoveryNodesProvider nodesProvider;\n+ private final Supplier<ClusterState> clusterStateSupplier;\n private final NewPendingClusterStateListener newPendingClusterStatelistener;\n private final DiscoverySettings discoverySettings;\n private final ClusterName clusterName;\n private final PendingClusterStatesQueue pendingStatesQueue;\n \n- public PublishClusterStateAction(Settings settings, TransportService transportService, DiscoveryNodesProvider nodesProvider,\n- NewPendingClusterStateListener listener, DiscoverySettings discoverySettings, ClusterName clusterName) {\n+ public PublishClusterStateAction(\n+ Settings settings,\n+ TransportService transportService,\n+ Supplier<ClusterState> clusterStateSupplier,\n+ NewPendingClusterStateListener listener,\n+ DiscoverySettings discoverySettings,\n+ ClusterName clusterName) {\n super(settings);\n this.transportService = transportService;\n- this.nodesProvider = nodesProvider;\n+ this.clusterStateSupplier = clusterStateSupplier;\n this.newPendingClusterStatelistener = listener;\n this.discoverySettings = discoverySettings;\n this.clusterName = clusterName;\n@@ -363,7 +369,7 @@ protected void handleIncomingClusterStateRequest(BytesTransportRequest request,\n final ClusterState incomingState;\n // If true we received full cluster state - otherwise diffs\n if (in.readBoolean()) {\n- incomingState = ClusterState.Builder.readFrom(in, nodesProvider.nodes().getLocalNode());\n+ incomingState = ClusterState.Builder.readFrom(in, clusterStateSupplier.get().nodes().getLocalNode());\n logger.debug(\"received full cluster state version [{}] with size [{}]\", incomingState.version(), request.bytes().length());\n } else if (lastSeenClusterState != null) {\n Diff<ClusterState> diff = lastSeenClusterState.readDiffFrom(in);\n@@ -394,14 +400,25 @@ void validateIncomingState(ClusterState incomingState, ClusterState lastSeenClus\n logger.warn(\"received cluster state from [{}] which is also master but with a different cluster name [{}]\", incomingState.nodes().getMasterNode(), incomingClusterName);\n throw new IllegalStateException(\"received state from a node that is not part of the cluster\");\n }\n- final DiscoveryNodes currentNodes = nodesProvider.nodes();\n+ final ClusterState clusterState = clusterStateSupplier.get();\n \n- if (currentNodes.getLocalNode().equals(incomingState.nodes().getLocalNode()) == false) {\n+ if (clusterState.nodes().getLocalNode().equals(incomingState.nodes().getLocalNode()) == false) {\n logger.warn(\"received a cluster state from [{}] and not part of the cluster, should not happen\", incomingState.nodes().getMasterNode());\n- throw new IllegalStateException(\"received state from a node that is not part of the cluster\");\n+ throw new IllegalStateException(\"received state with a local node that does not match the current local node\");\n+ }\n+\n+ if (ZenDiscovery.shouldIgnoreOrRejectNewClusterState(logger, clusterState, incomingState)) {\n+ String message = String.format(\n+ 
Locale.ROOT,\n+ \"rejecting cluster state version [%d] uuid [%s] received from [%s]\",\n+ incomingState.version(),\n+ incomingState.stateUUID(),\n+ incomingState.nodes().getMasterNodeId()\n+ );\n+ logger.warn(message);\n+ throw new IllegalStateException(message);\n }\n \n- ZenDiscovery.validateStateIsFromCurrentMaster(logger, currentNodes, incomingState);\n }\n \n protected void handleCommitRequest(CommitClusterStateRequest request, final TransportChannel channel) {\n@@ -518,7 +535,7 @@ public void waitForCommit(TimeValue commitTimeout) {\n }\n \n if (timedout) {\n- markAsFailed(\"timed out waiting for commit (commit timeout [\" + commitTimeout + \"]\");\n+ markAsFailed(\"timed out waiting for commit (commit timeout [\" + commitTimeout + \"])\");\n }\n if (isCommitted() == false) {\n throw new Discovery.FailedToCommitClusterStateException(\"{} enough masters to ack sent cluster state. [{}] left\",", "filename": "core/src/main/java/org/elasticsearch/discovery/zen/publish/PublishClusterStateAction.java", "status": "modified" }, { "diff": "@@ -1062,6 +1062,9 @@ public boolean isRelocated() {\n @Override\n public void failShard(String reason, @Nullable Throwable e) {\n isShardFailed.set(true);\n+ if (randomBoolean()) {\n+ throw new ElasticsearchException(\"simulated\");\n+ }\n }\n \n @Override\n@@ -1173,7 +1176,7 @@ protected boolean resolveIndex() {\n \n \n @Override\n- protected IndexShardReference getIndexShardReferenceOnPrimary(ShardId shardId) {\n+ protected IndexShardReference getIndexShardReferenceOnPrimary(ShardId shardId, Request request) {\n final IndexMetaData indexMetaData = clusterService.state().metaData().index(shardId.getIndex());\n return getOrCreateIndexShardOperationsCounter(indexMetaData.primaryTerm(shardId.id()));\n }", "filename": "core/src/test/java/org/elasticsearch/action/support/replication/TransportReplicationActionTests.java", "status": "modified" }, { "diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.discovery;\n \n import org.apache.lucene.index.CorruptIndexException;\n-import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.get.GetResponse;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n@@ -138,6 +137,13 @@ protected int numberOfReplicas() {\n return 1;\n }\n \n+ @Override\n+ protected void beforeIndexDeletion() {\n+ // some test may leave operations in flight\n+ // this is because the disruption schemes swallow requests by design\n+ // as such, these operations will never be marked as finished\n+ }\n+\n private List<String> startCluster(int numberOfNodes) throws ExecutionException, InterruptedException {\n return startCluster(numberOfNodes, -1);\n }\n@@ -146,7 +152,8 @@ private List<String> startCluster(int numberOfNodes, int minimumMasterNode) thro\n return startCluster(numberOfNodes, minimumMasterNode, null);\n }\n \n- private List<String> startCluster(int numberOfNodes, int minimumMasterNode, @Nullable int[] unicastHostsOrdinals) throws ExecutionException, InterruptedException {\n+ private List<String> startCluster(int numberOfNodes, int minimumMasterNode, @Nullable int[] unicastHostsOrdinals) throws\n+ ExecutionException, InterruptedException {\n configureUnicastCluster(numberOfNodes, unicastHostsOrdinals, minimumMasterNode);\n List<String> nodes = internalCluster().startNodesAsync(numberOfNodes).get();\n ensureStableCluster(numberOfNodes);\n@@ -175,11 +182,20 @@ protected Collection<Class<? 
extends Plugin>> nodePlugins() {\n return pluginList(MockTransportService.TestPlugin.class);\n }\n \n- private void configureUnicastCluster(int numberOfNodes, @Nullable int[] unicastHostsOrdinals, int minimumMasterNode) throws ExecutionException, InterruptedException {\n+ private void configureUnicastCluster(\n+ int numberOfNodes,\n+ @Nullable int[] unicastHostsOrdinals,\n+ int minimumMasterNode\n+ ) throws ExecutionException, InterruptedException {\n configureUnicastCluster(DEFAULT_SETTINGS, numberOfNodes, unicastHostsOrdinals, minimumMasterNode);\n }\n \n- private void configureUnicastCluster(Settings settings, int numberOfNodes, @Nullable int[] unicastHostsOrdinals, int minimumMasterNode) throws ExecutionException, InterruptedException {\n+ private void configureUnicastCluster(\n+ Settings settings,\n+ int numberOfNodes,\n+ @Nullable int[] unicastHostsOrdinals,\n+ int minimumMasterNode\n+ ) throws ExecutionException, InterruptedException {\n if (minimumMasterNode < 0) {\n minimumMasterNode = numberOfNodes / 2 + 1;\n }\n@@ -253,7 +269,8 @@ public void testNodesFDAfterMasterReelection() throws Exception {\n \n logger.info(\"--> reducing min master nodes to 2\");\n assertAcked(client().admin().cluster().prepareUpdateSettings()\n- .setTransientSettings(Settings.builder().put(ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey(), 2)).get());\n+ .setTransientSettings(Settings.builder().put(ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey(), 2))\n+ .get());\n \n String master = internalCluster().getMasterName();\n String nonMaster = null;\n@@ -278,8 +295,8 @@ public void testVerifyApiBlocksDuringPartition() throws Exception {\n \n // Makes sure that the get request can be executed on each node locally:\n assertAcked(prepareCreate(\"test\").setSettings(Settings.builder()\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 2)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 2)\n ));\n \n // Everything is stable now, it is now time to simulate evil...\n@@ -359,8 +376,8 @@ public void testIsolateMasterAndVerifyClusterStateConsensus() throws Exception {\n \n assertAcked(prepareCreate(\"test\")\n .setSettings(Settings.builder()\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1 + randomInt(2))\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomInt(2))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1 + randomInt(2))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomInt(2))\n ));\n \n ensureGreen();\n@@ -380,7 +397,8 @@ public void testIsolateMasterAndVerifyClusterStateConsensus() throws Exception {\n networkPartition.stopDisrupting();\n \n for (String node : nodes) {\n- ensureStableCluster(3, new TimeValue(DISRUPTION_HEALING_OVERHEAD.millis() + networkPartition.expectedTimeToHeal().millis()), true, node);\n+ ensureStableCluster(3, new TimeValue(DISRUPTION_HEALING_OVERHEAD.millis() + networkPartition.expectedTimeToHeal().millis()),\n+ true, node);\n }\n \n logger.info(\"issue a reroute\");\n@@ -421,17 +439,20 @@ public void testIsolateMasterAndVerifyClusterStateConsensus() throws Exception {\n * <p>\n * This test is a superset of tests run in the Jepsen test suite, with the exception of versioned updates\n */\n- // NOTE: if you remove the awaitFix, make sure to port the test to the 1.x branch\n- @LuceneTestCase.AwaitsFix(bugUrl = \"needs some more work to stabilize\")\n- 
@TestLogging(\"_root:DEBUG,action.index:TRACE,action.get:TRACE,discovery:TRACE,cluster.service:TRACE,indices.recovery:TRACE,indices.cluster:TRACE\")\n+ @TestLogging(\"_root:DEBUG,action.index:TRACE,action.get:TRACE,discovery:TRACE,cluster.service:TRACE,\"\n+ + \"indices.recovery:TRACE,indices.cluster:TRACE\")\n public void testAckedIndexing() throws Exception {\n+\n+ final int seconds = !(TEST_NIGHTLY && rarely()) ? 1 : 5;\n+ final String timeout = seconds + \"s\";\n+\n // TODO: add node count randomizaion\n final List<String> nodes = startCluster(3);\n \n assertAcked(prepareCreate(\"test\")\n .setSettings(Settings.builder()\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1 + randomInt(2))\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomInt(2))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1 + randomInt(2))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomInt(2))\n ));\n ensureGreen();\n \n@@ -455,36 +476,34 @@ public void testAckedIndexing() throws Exception {\n final Client client = client(node);\n final String name = \"indexer_\" + indexers.size();\n final int numPrimaries = getNumShards(\"test\").numPrimaries;\n- Thread thread = new Thread(new Runnable() {\n- @Override\n- public void run() {\n- while (!stop.get()) {\n- String id = null;\n+ Thread thread = new Thread(() -> {\n+ while (!stop.get()) {\n+ String id = null;\n+ try {\n+ if (!semaphore.tryAcquire(10, TimeUnit.SECONDS)) {\n+ continue;\n+ }\n+ logger.info(\"[{}] Acquired semaphore and it has {} permits left\", name, semaphore.availablePermits());\n try {\n- if (!semaphore.tryAcquire(10, TimeUnit.SECONDS)) {\n- continue;\n- }\n- logger.info(\"[{}] Acquired semaphore and it has {} permits left\", name, semaphore.availablePermits());\n- try {\n- id = Integer.toString(idGenerator.incrementAndGet());\n- int shard = Math.floorMod(Murmur3HashFunction.hash(id), numPrimaries);\n- logger.trace(\"[{}] indexing id [{}] through node [{}] targeting shard [{}]\", name, id, node, shard);\n- IndexResponse response = client.prepareIndex(\"test\", \"type\", id).setSource(\"{}\").setTimeout(\"1s\").get();\n- assertThat(response.getVersion(), equalTo(1L));\n- ackedDocs.put(id, node);\n- logger.trace(\"[{}] indexed id [{}] through node [{}]\", name, id, node);\n- } catch (ElasticsearchException e) {\n- exceptedExceptions.add(e);\n- logger.trace(\"[{}] failed id [{}] through node [{}]\", e, name, id, node);\n- } finally {\n- countDownLatchRef.get().countDown();\n- logger.trace(\"[{}] decreased counter : {}\", name, countDownLatchRef.get().getCount());\n- }\n- } catch (InterruptedException e) {\n- // fine - semaphore interrupt\n- } catch (Throwable t) {\n- logger.info(\"unexpected exception in background thread of [{}]\", t, node);\n+ id = Integer.toString(idGenerator.incrementAndGet());\n+ int shard = Math.floorMod(Murmur3HashFunction.hash(id), numPrimaries);\n+ logger.trace(\"[{}] indexing id [{}] through node [{}] targeting shard [{}]\", name, id, node, shard);\n+ IndexResponse response =\n+ client.prepareIndex(\"test\", \"type\", id).setSource(\"{}\").setTimeout(timeout).get(timeout);\n+ assertTrue(\"doc [\" + id + \"] should have been created\", response.isCreated());\n+ ackedDocs.put(id, node);\n+ logger.trace(\"[{}] indexed id [{}] through node [{}]\", name, id, node);\n+ } catch (ElasticsearchException e) {\n+ exceptedExceptions.add(e);\n+ logger.trace(\"[{}] failed id [{}] through node [{}]\", e, name, id, node);\n+ } finally {\n+ countDownLatchRef.get().countDown();\n+ logger.trace(\"[{}] decreased counter : {}\", name, 
countDownLatchRef.get().getCount());\n }\n+ } catch (InterruptedException e) {\n+ // fine - semaphore interrupt\n+ } catch (Throwable t) {\n+ logger.info(\"unexpected exception in background thread of [{}]\", t, node);\n }\n }\n });\n@@ -514,11 +533,15 @@ public void run() {\n assertThat(semaphore.availablePermits(), equalTo(0));\n semaphore.release(docsPerIndexer);\n }\n- assertTrue(countDownLatchRef.get().await(60000 + disruptionScheme.expectedTimeToHeal().millis() * (docsPerIndexer * indexers.size()), TimeUnit.MILLISECONDS));\n+ logger.info(\"waiting for indexing requests to complete\");\n+ assertTrue(countDownLatchRef.get().await(docsPerIndexer * seconds * 1000 + 2000, TimeUnit.MILLISECONDS));\n \n logger.info(\"stopping disruption\");\n disruptionScheme.stopDisrupting();\n- ensureStableCluster(3, TimeValue.timeValueMillis(disruptionScheme.expectedTimeToHeal().millis() + DISRUPTION_HEALING_OVERHEAD.millis()));\n+ for (String node : internalCluster().getNodeNames()) {\n+ ensureStableCluster(3, TimeValue.timeValueMillis(disruptionScheme.expectedTimeToHeal().millis() +\n+ DISRUPTION_HEALING_OVERHEAD.millis()), true, node);\n+ }\n ensureGreen(\"test\");\n \n logger.info(\"validating successful docs\");\n@@ -615,7 +638,8 @@ public void testStaleMasterNotHijackingMajority() throws Exception {\n majoritySide.remove(oldMasterNode);\n \n // Keeps track of the previous and current master when a master node transition took place on each node on the majority side:\n- final Map<String, List<Tuple<String, String>>> masters = Collections.synchronizedMap(new HashMap<String, List<Tuple<String, String>>>());\n+ final Map<String, List<Tuple<String, String>>> masters = Collections.synchronizedMap(new HashMap<String, List<Tuple<String,\n+ String>>>());\n for (final String node : majoritySide) {\n masters.put(node, new ArrayList<Tuple<String, String>>());\n internalCluster().getInstance(ClusterService.class, node).add(new ClusterStateListener() {\n@@ -624,7 +648,8 @@ public void clusterChanged(ClusterChangedEvent event) {\n DiscoveryNode previousMaster = event.previousState().nodes().getMasterNode();\n DiscoveryNode currentMaster = event.state().nodes().getMasterNode();\n if (!Objects.equals(previousMaster, currentMaster)) {\n- logger.info(\"node {} received new cluster state: {} \\n and had previous cluster state: {}\", node, event.state(), event.previousState());\n+ logger.info(\"node {} received new cluster state: {} \\n and had previous cluster state: {}\", node, event.state(),\n+ event.previousState());\n String previousMasterNodeName = previousMaster != null ? previousMaster.getName() : null;\n String currentMasterNodeName = currentMaster != null ? currentMaster.getName() : null;\n masters.get(node).add(new Tuple<>(previousMasterNodeName, currentMasterNodeName));\n@@ -656,7 +681,8 @@ public void clusterChanged(ClusterChangedEvent event) {\n // but will be queued and once the old master node un-freezes it gets executed.\n // The old master node will send this update + the cluster state where he is flagged as master to the other\n // nodes that follow the new master. 
These nodes should ignore this update.\n- internalCluster().getInstance(ClusterService.class, oldMasterNode).submitStateUpdateTask(\"sneaky-update\", new ClusterStateUpdateTask(Priority.IMMEDIATE) {\n+ internalCluster().getInstance(ClusterService.class, oldMasterNode).submitStateUpdateTask(\"sneaky-update\", new\n+ ClusterStateUpdateTask(Priority.IMMEDIATE) {\n @Override\n public ClusterState execute(ClusterState currentState) throws Exception {\n return ClusterState.builder(currentState).build();\n@@ -693,11 +719,16 @@ public void run() {\n for (Map.Entry<String, List<Tuple<String, String>>> entry : masters.entrySet()) {\n String nodeName = entry.getKey();\n List<Tuple<String, String>> recordedMasterTransition = entry.getValue();\n- assertThat(\"[\" + nodeName + \"] Each node should only record two master node transitions\", recordedMasterTransition.size(), equalTo(2));\n- assertThat(\"[\" + nodeName + \"] First transition's previous master should be [null]\", recordedMasterTransition.get(0).v1(), equalTo(oldMasterNode));\n- assertThat(\"[\" + nodeName + \"] First transition's current master should be [\" + newMasterNode + \"]\", recordedMasterTransition.get(0).v2(), nullValue());\n- assertThat(\"[\" + nodeName + \"] Second transition's previous master should be [null]\", recordedMasterTransition.get(1).v1(), nullValue());\n- assertThat(\"[\" + nodeName + \"] Second transition's current master should be [\" + newMasterNode + \"]\", recordedMasterTransition.get(1).v2(), equalTo(newMasterNode));\n+ assertThat(\"[\" + nodeName + \"] Each node should only record two master node transitions\", recordedMasterTransition.size(),\n+ equalTo(2));\n+ assertThat(\"[\" + nodeName + \"] First transition's previous master should be [null]\", recordedMasterTransition.get(0).v1(),\n+ equalTo(oldMasterNode));\n+ assertThat(\"[\" + nodeName + \"] First transition's current master should be [\" + newMasterNode + \"]\", recordedMasterTransition\n+ .get(0).v2(), nullValue());\n+ assertThat(\"[\" + nodeName + \"] Second transition's previous master should be [null]\", recordedMasterTransition.get(1).v1(),\n+ nullValue());\n+ assertThat(\"[\" + nodeName + \"] Second transition's current master should be [\" + newMasterNode + \"]\",\n+ recordedMasterTransition.get(1).v2(), equalTo(newMasterNode));\n }\n }\n \n@@ -710,8 +741,8 @@ public void testRejoinDocumentExistsInAllShardCopies() throws Exception {\n \n assertAcked(prepareCreate(\"test\")\n .setSettings(Settings.builder()\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 2)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 2)\n )\n .get());\n ensureGreen(\"test\");\n@@ -727,7 +758,8 @@ public void testRejoinDocumentExistsInAllShardCopies() throws Exception {\n assertFalse(client(notIsolatedNode).admin().cluster().prepareHealth(\"test\").setWaitForYellowStatus().get().isTimedOut());\n \n \n- IndexResponse indexResponse = internalCluster().client(notIsolatedNode).prepareIndex(\"test\", \"type\").setSource(\"field\", \"value\").get();\n+ IndexResponse indexResponse = internalCluster().client(notIsolatedNode).prepareIndex(\"test\", \"type\").setSource(\"field\", \"value\")\n+ .get();\n assertThat(indexResponse.getVersion(), equalTo(1L));\n \n logger.info(\"Verifying if document exists via node[{}]\", notIsolatedNode);\n@@ -845,17 +877,21 @@ public void testClusterJoinDespiteOfPublishingIssues() throws Exception {\n \n DiscoveryNodes discoveryNodes = 
internalCluster().getInstance(ClusterService.class, nonMasterNode).state().nodes();\n \n- TransportService masterTranspotService = internalCluster().getInstance(TransportService.class, discoveryNodes.getMasterNode().getName());\n+ TransportService masterTranspotService =\n+ internalCluster().getInstance(TransportService.class, discoveryNodes.getMasterNode().getName());\n \n logger.info(\"blocking requests from non master [{}] to master [{}]\", nonMasterNode, masterNode);\n- MockTransportService nonMasterTransportService = (MockTransportService) internalCluster().getInstance(TransportService.class, nonMasterNode);\n+ MockTransportService nonMasterTransportService = (MockTransportService) internalCluster().getInstance(TransportService.class,\n+ nonMasterNode);\n nonMasterTransportService.addFailToSendNoConnectRule(masterTranspotService);\n \n assertNoMaster(nonMasterNode);\n \n logger.info(\"blocking cluster state publishing from master [{}] to non master [{}]\", masterNode, nonMasterNode);\n- MockTransportService masterTransportService = (MockTransportService) internalCluster().getInstance(TransportService.class, masterNode);\n- TransportService localTransportService = internalCluster().getInstance(TransportService.class, discoveryNodes.getLocalNode().getName());\n+ MockTransportService masterTransportService =\n+ (MockTransportService) internalCluster().getInstance(TransportService.class, masterNode);\n+ TransportService localTransportService =\n+ internalCluster().getInstance(TransportService.class, discoveryNodes.getLocalNode().getName());\n if (randomBoolean()) {\n masterTransportService.addFailToSendNoConnectRule(localTransportService, PublishClusterStateAction.SEND_ACTION_NAME);\n } else {\n@@ -864,9 +900,11 @@ public void testClusterJoinDespiteOfPublishingIssues() throws Exception {\n \n logger.info(\"allowing requests from non master [{}] to master [{}], waiting for two join request\", nonMasterNode, masterNode);\n final CountDownLatch countDownLatch = new CountDownLatch(2);\n- nonMasterTransportService.addDelegate(masterTranspotService, new MockTransportService.DelegateTransport(nonMasterTransportService.original()) {\n+ nonMasterTransportService.addDelegate(masterTranspotService, new MockTransportService.DelegateTransport(nonMasterTransportService\n+ .original()) {\n @Override\n- public void sendRequest(DiscoveryNode node, long requestId, String action, TransportRequest request, TransportRequestOptions options) throws IOException, TransportException {\n+ public void sendRequest(DiscoveryNode node, long requestId, String action, TransportRequest request, TransportRequestOptions\n+ options) throws IOException, TransportException {\n if (action.equals(MembershipAction.DISCOVERY_JOIN_ACTION_NAME)) {\n countDownLatch.countDown();\n }\n@@ -894,16 +932,16 @@ public void testSendingShardFailure() throws Exception {\n List<String> nonMasterNodes = nodes.stream().filter(node -> !node.equals(masterNode)).collect(Collectors.toList());\n String nonMasterNode = randomFrom(nonMasterNodes);\n assertAcked(prepareCreate(\"test\")\n- .setSettings(Settings.builder()\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 3)\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 2)\n- ));\n+ .setSettings(Settings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 3)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 2)\n+ ));\n ensureGreen();\n String nonMasterNodeId = internalCluster().clusterService(nonMasterNode).localNode().getId();\n \n // fail a random shard\n ShardRouting failedShard =\n- 
randomFrom(clusterService().state().getRoutingNodes().node(nonMasterNodeId).shardsWithState(ShardRoutingState.STARTED));\n+ randomFrom(clusterService().state().getRoutingNodes().node(nonMasterNodeId).shardsWithState(ShardRoutingState.STARTED));\n ShardStateAction service = internalCluster().getInstance(ShardStateAction.class, nonMasterNode);\n CountDownLatch latch = new CountDownLatch(1);\n AtomicBoolean success = new AtomicBoolean();\n@@ -912,7 +950,8 @@ public void testSendingShardFailure() throws Exception {\n NetworkPartition networkPartition = addRandomIsolation(isolatedNode);\n networkPartition.startDisrupting();\n \n- service.shardFailed(failedShard, failedShard, \"simulated\", new CorruptIndexException(\"simulated\", (String) null), new ShardStateAction.Listener() {\n+ service.shardFailed(failedShard, failedShard, \"simulated\", new CorruptIndexException(\"simulated\", (String) null), new\n+ ShardStateAction.Listener() {\n @Override\n public void onSuccess() {\n success.set(true);\n@@ -989,7 +1028,8 @@ public void testNodeNotReachableFromMaster() throws Exception {\n }\n \n logger.info(\"blocking request from master [{}] to [{}]\", masterNode, nonMasterNode);\n- MockTransportService masterTransportService = (MockTransportService) internalCluster().getInstance(TransportService.class, masterNode);\n+ MockTransportService masterTransportService = (MockTransportService) internalCluster().getInstance(TransportService.class,\n+ masterNode);\n if (randomBoolean()) {\n masterTransportService.addUnresponsiveRule(internalCluster().getInstance(TransportService.class, nonMasterNode));\n } else {\n@@ -1021,9 +1061,9 @@ public void testSearchWithRelocationAndSlowClusterStateProcessing() throws Excep\n final String masterNode = masterNodeFuture.get();\n logger.info(\"--> creating index [test] with one shard and on replica\");\n assertAcked(prepareCreate(\"test\").setSettings(\n- Settings.builder().put(indexSettings())\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0))\n+ Settings.builder().put(indexSettings())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0))\n );\n ensureGreen(\"test\");\n \n@@ -1040,7 +1080,8 @@ public void testSearchWithRelocationAndSlowClusterStateProcessing() throws Excep\n MockTransportService transportServiceNode2 = (MockTransportService) internalCluster().getInstance(TransportService.class, node_2);\n CountDownLatch beginRelocationLatch = new CountDownLatch(1);\n CountDownLatch endRelocationLatch = new CountDownLatch(1);\n- transportServiceNode2.addTracer(new IndicesStoreIntegrationIT.ReclocationStartEndTracer(logger, beginRelocationLatch, endRelocationLatch));\n+ transportServiceNode2.addTracer(new IndicesStoreIntegrationIT.ReclocationStartEndTracer(logger, beginRelocationLatch,\n+ endRelocationLatch));\n internalCluster().client().admin().cluster().prepareReroute().add(new MoveAllocationCommand(\"test\", 0, node_1, node_2)).get();\n // wait for relocation to start\n beginRelocationLatch.await();\n@@ -1177,7 +1218,8 @@ public void run() {\n assertNull(\"node [\" + node + \"] still has [\" + state.nodes().getMasterNode() + \"] as master\", state.nodes().getMasterNode());\n if (expectedBlocks != null) {\n for (ClusterBlockLevel level : expectedBlocks.levels()) {\n- assertTrue(\"node [\" + node + \"] does have level [\" + level + \"] in it's blocks\", state.getBlocks().hasGlobalBlock(level));\n+ assertTrue(\"node [\" + node + \"] does have level [\" + level + \"] in 
it's blocks\", state.getBlocks().hasGlobalBlock\n+ (level));\n }\n }\n }", "filename": "core/src/test/java/org/elasticsearch/discovery/DiscoveryWithServiceDisruptionsIT.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.transport.DummyTransportAddress;\n import org.elasticsearch.discovery.zen.ping.ZenPing;\n import org.elasticsearch.test.ESTestCase;\n@@ -64,7 +65,7 @@ public void testShouldIgnoreNewClusterState() {\n assertTrue(\"should ignore, because new state's version is lower to current state's version\", shouldIgnoreOrRejectNewClusterState(logger, currentState.build(), newState.build()));\n currentState.version(1);\n newState.version(1);\n- assertFalse(\"should not ignore, because new state's version is equal to current state's version\", shouldIgnoreOrRejectNewClusterState(logger, currentState.build(), newState.build()));\n+ assertTrue(\"should ignore, because new state's version is equal to current state's version\", shouldIgnoreOrRejectNewClusterState(logger, currentState.build(), newState.build()));\n currentState.version(1);\n newState.version(2);\n assertFalse(\"should not ignore, because new state's version is higher to current state's version\", shouldIgnoreOrRejectNewClusterState(logger, currentState.build(), newState.build()));", "filename": "core/src/test/java/org/elasticsearch/discovery/zen/ZenDiscoveryUnitTests.java", "status": "modified" }, { "diff": "@@ -195,10 +195,11 @@ public void testQueueStats() {\n highestCommitted = context.state;\n }\n }\n+ assert highestCommitted != null;\n \n queue.markAsProcessed(highestCommitted);\n- assertThat(queue.stats().getTotal(), equalTo(states.size() - committedContexts.size()));\n- assertThat(queue.stats().getPending(), equalTo(states.size() - committedContexts.size()));\n+ assertThat((long)queue.stats().getTotal(), equalTo(states.size() - (1 + highestCommitted.version())));\n+ assertThat((long)queue.stats().getPending(), equalTo(states.size() - (1 + highestCommitted.version())));\n assertThat(queue.stats().getCommitted(), equalTo(0));\n }\n ", "filename": "core/src/test/java/org/elasticsearch/discovery/zen/publish/PendingClusterStatesQueueTests.java", "status": "modified" }, { "diff": "@@ -63,16 +63,20 @@\n import java.util.Collections;\n import java.util.HashMap;\n import java.util.List;\n+import java.util.Locale;\n import java.util.Map;\n import java.util.concurrent.CopyOnWriteArrayList;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.atomic.AtomicReference;\n+import java.util.function.Supplier;\n \n+import static org.hamcrest.CoreMatchers.instanceOf;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.emptyIterable;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.hasToString;\n import static org.hamcrest.Matchers.not;\n import static org.hamcrest.Matchers.notNullValue;\n import static org.hamcrest.Matchers.nullValue;\n@@ -159,7 +163,7 @@ public MockNode createMockNode(String name, Settings settings, Version version,\n DiscoveryNodeService discoveryNodeService = new DiscoveryNodeService(settings, version);\n DiscoveryNode discoveryNode = 
discoveryNodeService.buildLocalNode(service.boundAddress().publishAddress());\n MockNode node = new MockNode(discoveryNode, service, listener, logger);\n- node.action = buildPublishClusterStateAction(settings, service, node, node);\n+ node.action = buildPublishClusterStateAction(settings, service, () -> node.clusterState, node);\n final CountDownLatch latch = new CountDownLatch(nodes.size() * 2 + 1);\n TransportConnectionListener waitForConnection = new TransportConnectionListener() {\n @Override\n@@ -231,10 +235,21 @@ protected MockTransportService buildTransportService(Settings settings, Version\n return transportService;\n }\n \n- protected MockPublishAction buildPublishClusterStateAction(Settings settings, MockTransportService transportService, DiscoveryNodesProvider nodesProvider,\n- PublishClusterStateAction.NewPendingClusterStateListener listener) {\n- DiscoverySettings discoverySettings = new DiscoverySettings(settings, new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS));\n- return new MockPublishAction(settings, transportService, nodesProvider, listener, discoverySettings, ClusterName.DEFAULT);\n+ protected MockPublishAction buildPublishClusterStateAction(\n+ Settings settings,\n+ MockTransportService transportService,\n+ Supplier<ClusterState> clusterStateSupplier,\n+ PublishClusterStateAction.NewPendingClusterStateListener listener\n+ ) {\n+ DiscoverySettings discoverySettings =\n+ new DiscoverySettings(settings, new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS));\n+ return new MockPublishAction(\n+ settings,\n+ transportService,\n+ clusterStateSupplier,\n+ listener,\n+ discoverySettings,\n+ ClusterName.DEFAULT);\n }\n \n public void testSimpleClusterStatePublishing() throws Exception {\n@@ -596,18 +611,20 @@ public void testIncomingClusterStateValidation() throws Exception {\n node.action.validateIncomingState(state, node.clusterState);\n fail(\"node accepted state from another master\");\n } catch (IllegalStateException OK) {\n+ assertThat(OK.toString(), containsString(\"cluster state from a different master than the current one, rejecting\"));\n }\n \n logger.info(\"--> test state from the current master is accepted\");\n node.action.validateIncomingState(ClusterState.builder(node.clusterState)\n- .nodes(DiscoveryNodes.builder(node.nodes()).masterNodeId(\"master\")).build(), node.clusterState);\n+ .nodes(DiscoveryNodes.builder(node.nodes()).masterNodeId(\"master\")).incrementVersion().build(), node.clusterState);\n \n \n logger.info(\"--> testing rejection of another cluster name\");\n try {\n node.action.validateIncomingState(ClusterState.builder(new ClusterName(randomAsciiOfLength(10))).nodes(node.nodes()).build(), node.clusterState);\n fail(\"node accepted state with another cluster name\");\n } catch (IllegalStateException OK) {\n+ assertThat(OK.toString(), containsString(\"received state from a node that is not part of the cluster\"));\n }\n \n logger.info(\"--> testing rejection of a cluster state with wrong local node\");\n@@ -618,6 +635,7 @@ public void testIncomingClusterStateValidation() throws Exception {\n node.action.validateIncomingState(state, node.clusterState);\n fail(\"node accepted state with non-existence local node\");\n } catch (IllegalStateException OK) {\n+ assertThat(OK.toString(), containsString(\"received state with a local node that does not match the current local node\"));\n }\n \n try {\n@@ -628,12 +646,22 @@ public void testIncomingClusterStateValidation() throws Exception {\n 
node.action.validateIncomingState(state, node.clusterState);\n fail(\"node accepted state with existent but wrong local node\");\n } catch (IllegalStateException OK) {\n+ assertThat(OK.toString(), containsString(\"received state with a local node that does not match the current local node\"));\n }\n \n logger.info(\"--> testing acceptance of an old cluster state\");\n- state = node.clusterState;\n+ final ClusterState incomingState = node.clusterState;\n node.clusterState = ClusterState.builder(node.clusterState).incrementVersion().build();\n- node.action.validateIncomingState(state, node.clusterState);\n+ final IllegalStateException e =\n+ expectThrows(IllegalStateException.class, () -> node.action.validateIncomingState(incomingState, node.clusterState));\n+ final String message = String.format(\n+ Locale.ROOT,\n+ \"rejecting cluster state version [%d] uuid [%s] received from [%s]\",\n+ incomingState.version(),\n+ incomingState.stateUUID(),\n+ incomingState.nodes().getMasterNodeId()\n+ );\n+ assertThat(e, hasToString(\"java.lang.IllegalStateException: \" + message));\n \n // an older version from a *new* master is also OK!\n ClusterState previousState = ClusterState.builder(node.clusterState).incrementVersion().build();\n@@ -646,18 +674,17 @@ public void testIncomingClusterStateValidation() throws Exception {\n node.action.validateIncomingState(state, previousState);\n }\n \n- public void testInterleavedPublishCommit() throws Throwable {\n+ public void testOutOfOrderCommitMessages() throws Throwable {\n MockNode node = createMockNode(\"node\").setAsMaster();\n final CapturingTransportChannel channel = new CapturingTransportChannel();\n \n List<ClusterState> states = new ArrayList<>();\n- final int numOfStates = scaledRandomIntBetween(3, 10);\n+ final int numOfStates = scaledRandomIntBetween(3, 25);\n for (int i = 1; i <= numOfStates; i++) {\n states.add(ClusterState.builder(node.clusterState).version(i).stateUUID(ClusterState.UNKNOWN_UUID).build());\n }\n \n final ClusterState finalState = states.get(numOfStates - 1);\n- Collections.shuffle(states, random());\n \n logger.info(\"--> publishing states\");\n for (ClusterState state : states) {\n@@ -667,19 +694,28 @@ public void testInterleavedPublishCommit() throws Throwable {\n assertThat(channel.response.get(), equalTo((TransportResponse) TransportResponse.Empty.INSTANCE));\n assertThat(channel.error.get(), nullValue());\n channel.clear();\n+\n }\n \n logger.info(\"--> committing states\");\n \n+ long largestVersionSeen = Long.MIN_VALUE;\n Randomness.shuffle(states);\n for (ClusterState state : states) {\n node.action.handleCommitRequest(new PublishClusterStateAction.CommitClusterStateRequest(state.stateUUID()), channel);\n- assertThat(channel.response.get(), equalTo((TransportResponse) TransportResponse.Empty.INSTANCE));\n- if (channel.error.get() != null) {\n- throw channel.error.get();\n+ if (largestVersionSeen < state.getVersion()) {\n+ assertThat(channel.response.get(), equalTo((TransportResponse) TransportResponse.Empty.INSTANCE));\n+ if (channel.error.get() != null) {\n+ throw channel.error.get();\n+ }\n+ largestVersionSeen = state.getVersion();\n+ } else {\n+ // older cluster states will be rejected\n+ assertNotNull(channel.error.get());\n+ assertThat(channel.error.get(), instanceOf(IllegalStateException.class));\n }\n+ channel.clear();\n }\n- channel.clear();\n \n //now check the last state held\n assertSameState(node.clusterState, finalState);\n@@ -817,8 +853,8 @@ static class MockPublishAction extends PublishClusterStateAction 
{\n AtomicBoolean timeoutOnCommit = new AtomicBoolean();\n AtomicBoolean errorOnCommit = new AtomicBoolean();\n \n- public MockPublishAction(Settings settings, TransportService transportService, DiscoveryNodesProvider nodesProvider, NewPendingClusterStateListener listener, DiscoverySettings discoverySettings, ClusterName clusterName) {\n- super(settings, transportService, nodesProvider, listener, discoverySettings, clusterName);\n+ public MockPublishAction(Settings settings, TransportService transportService, Supplier<ClusterState> clusterStateSupplier, NewPendingClusterStateListener listener, DiscoverySettings discoverySettings, ClusterName clusterName) {\n+ super(settings, transportService, clusterStateSupplier, listener, discoverySettings, clusterName);\n }\n \n @Override", "filename": "core/src/test/java/org/elasticsearch/discovery/zen/publish/PublishClusterStateActionTests.java", "status": "modified" }, { "diff": "@@ -309,8 +309,8 @@ public InternalTestCluster(String nodeMode, long clusterSeed, Path baseDir,\n builder.put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_INCOMING_RECOVERIES_SETTING.getKey(), RandomInts.randomIntBetween(random, 5, 10));\n builder.put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_OUTGOING_RECOVERIES_SETTING.getKey(), RandomInts.randomIntBetween(random, 5, 10));\n } else if (random.nextInt(100) <= 90) {\n- builder.put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_INCOMING_RECOVERIES_SETTING.getKey(), RandomInts.randomIntBetween(random, 2, 5));\n- builder.put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_OUTGOING_RECOVERIES_SETTING.getKey(), RandomInts.randomIntBetween(random, 2, 5));\n+ builder.put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_INCOMING_RECOVERIES_SETTING.getKey(), RandomInts.randomIntBetween(random, 2, 5));\n+ builder.put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_OUTGOING_RECOVERIES_SETTING.getKey(), RandomInts.randomIntBetween(random, 2, 5));\n }\n // always reduce this - it can make tests really slow\n builder.put(RecoverySettings.INDICES_RECOVERY_RETRY_DELAY_STATE_SYNC_SETTING.getKey(), TimeValue.timeValueMillis(RandomInts.randomIntBetween(random, 20, 50)));\n@@ -544,7 +544,7 @@ public synchronized void ensureAtMostNumDataNodes(int n) throws IOException {\n logger.info(\"changing cluster size from {} to {}, {} data nodes\", size(), n + numShareCoordOnlyNodes, n);\n Set<NodeAndClient> nodesToRemove = new HashSet<>();\n int numNodesAndClients = 0;\n- while (values.hasNext() && numNodesAndClients++ < size-n) {\n+ while (values.hasNext() && numNodesAndClients++ < size - n) {\n NodeAndClient next = values.next();\n nodesToRemove.add(next);\n removeDisruptionSchemeFromNode(next);", "filename": "test/framework/src/main/java/org/elasticsearch/test/InternalTestCluster.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 2.2.0\n\n**JVM version**: 1.8.0_40\n\n**OS version**: OSX and Ubuntu 15.04\n\n**Description of the problem including expected versus actual behavior**:\nUnless I'm completely and utterly blind (which is a distinct possibility), it seems that both indexing and query slowlogs no longer include the name of the index that the operation was running against. This makes them almost entirely useless for us (since we have several hundred indices in our cluster).\n\n**Steps to reproduce**:\n1. Generate some queries / indexing requests that triggers a slowlog.\n2. Look at the slowlog\n3. Scratch your head and wonder why the `type` is present, but not the `index`.\n\n**Provide logs (if relevant)**:\n\nAn example slowlog from 2.2.0:\n\n```\n[2016-03-09 16:26:15,225][TRACE][index.indexing.slowlog.index] took[4.4ms], took_millis[4], type[test], id[AVNaDew0JKS5u4ProrcX], routing[] , source[{\"name\":\"vlucas/phpdotenv\",\"version\":\"1.0.3\",\"type\":\"library\",\"description\":\"Loads environment variables from `.env` to `getenv()`, `$_ENV` and `$_SERVER` automagically.\",\"keywords\":[\"env\",\"dotenv\",\"environment\"],\"homepage\":\"http://github.com/vlucas/phpdotenv\",\"license\":\"BSD\",\"authors\":[{\"name\":\"Vance Lucas\",\"email\":\"vance@vancelucas.com\",\"homepage\":\"http://www.vancelucas.com\"}],\"require\":{\"php\":\">=5.3.2\"},\"require-dev\":{\"phpunit/phpunit\":\"*\"},\"autoload\":{\"psr-0\":{\"Dotenv\":\"src/\"}}}]\n```\n\nAn example slowlog from 1.7.5:\n\n```\n[2016-03-09 16:31:42,346][WARN ][index.indexing.slowlog.index] [Prowler] [slowtest][1] took[95.7ms], took_millis[95], type[test], id[AVNaEumoVIxyp_lAGADK], routing[], source[{\"name\":\"vlucas/phpdotenv\",\"version\":\"1.0.3\",\"type\":\"library\",\"description\":\"Loads environment variables from `.env` to `getenv()`, `$_ENV` and `$_SERVER` automagically.\",\"keywords\":[\"env\",\"dotenv\",\"environment\"],\"homepage\":\"http://github.com/vlucas/phpdotenv\",\"license\":\"BSD\",\"authors\":[{\"name\":\"Vance Lucas\",\"email\":\"vance@vancelucas.com\",\"homepage\":\"http://www.vancelucas.com\"}],\"require\":{\"php\":\">=5.3.2\"},\"require-dev\":{\"phpunit/phpunit\":\"*\"},\"autoload\":{\"psr-0\":{\"Dotenv\":\"src/\"}}}]\n```\n", "comments": [ { "body": "I am looking into this - thanks for reporting\n", "created_at": "2016-03-09T08:33:48Z" }, { "body": "this will be in the upcoming 2.2.1 release - thanks for reporting\n", "created_at": "2016-03-09T09:12:51Z" }, { "body": ":+1: thanks for the über fast response! :)\n", "created_at": "2016-03-09T11:52:21Z" }, { "body": "@s1monw we need to re-open this one, 2.2.1 still has no index name logged in _query_ slowlogs. By the looks of things your commit fixed indexing slowlogs, but I see no mention of query slowlogs.\n", "created_at": "2016-04-06T04:47:04Z" }, { "body": "Need to make the same change for query slowlogs.\n", "created_at": "2016-04-06T11:45:16Z" } ], "number": 17025, "title": "Slowlogs no longer include index name" }
{ "body": "This was lost in refactoring even on the 2.x branch. The slow-log\nis not per index not per shard anymore such that we don't add the\nshard ID as the logger prefix. This commit adds back the index\nname as part of the logging message not as a prefix on the logger\nfor better testabilitly.\n\nCloses #17025\n", "number": 17026, "review_comments": [], "title": "Add missing index name to indexing slow log" }
{ "commits": [ { "message": "Add missing index name to indexing slow log\n\nThis was lost in refactoring even on the 2.x branch. The slow-log\nis not per index not per shard anymore such that we don't add the\nshard ID as the logger prefix. This commit adds back the index\nname as part of the logging message not as a prefix on the logger\nfor better testabilitly.\n\nCloses #17025" } ], "files": [ { "diff": "@@ -36,6 +36,7 @@\n /**\n */\n public final class IndexingSlowLog implements IndexingOperationListener {\n+ private final Index index;\n private boolean reformat;\n private long indexWarnThreshold;\n private long indexInfoThreshold;\n@@ -85,6 +86,7 @@ public final class IndexingSlowLog implements IndexingOperationListener {\n IndexingSlowLog(IndexSettings indexSettings, ESLogger indexLogger, ESLogger deleteLogger) {\n this.indexLogger = indexLogger;\n this.deleteLogger = deleteLogger;\n+ this.index = indexSettings.getIndex();\n \n indexSettings.getScopedSettings().addSettingsUpdateConsumer(INDEX_INDEXING_SLOWLOG_REFORMAT_SETTING, this::setReformat);\n this.reformat = indexSettings.getValue(INDEX_INDEXING_SLOWLOG_REFORMAT_SETTING);\n@@ -141,13 +143,13 @@ public void postIndex(Engine.Index index) {\n \n private void postIndexing(ParsedDocument doc, long tookInNanos) {\n if (indexWarnThreshold >= 0 && tookInNanos > indexWarnThreshold) {\n- indexLogger.warn(\"{}\", new SlowLogParsedDocumentPrinter(doc, tookInNanos, reformat, maxSourceCharsToLog));\n+ indexLogger.warn(\"{}\", new SlowLogParsedDocumentPrinter(index, doc, tookInNanos, reformat, maxSourceCharsToLog));\n } else if (indexInfoThreshold >= 0 && tookInNanos > indexInfoThreshold) {\n- indexLogger.info(\"{}\", new SlowLogParsedDocumentPrinter(doc, tookInNanos, reformat, maxSourceCharsToLog));\n+ indexLogger.info(\"{}\", new SlowLogParsedDocumentPrinter(index, doc, tookInNanos, reformat, maxSourceCharsToLog));\n } else if (indexDebugThreshold >= 0 && tookInNanos > indexDebugThreshold) {\n- indexLogger.debug(\"{}\", new SlowLogParsedDocumentPrinter(doc, tookInNanos, reformat, maxSourceCharsToLog));\n+ indexLogger.debug(\"{}\", new SlowLogParsedDocumentPrinter(index, doc, tookInNanos, reformat, maxSourceCharsToLog));\n } else if (indexTraceThreshold >= 0 && tookInNanos > indexTraceThreshold) {\n- indexLogger.trace(\"{}\", new SlowLogParsedDocumentPrinter(doc, tookInNanos, reformat, maxSourceCharsToLog));\n+ indexLogger.trace(\"{}\", new SlowLogParsedDocumentPrinter(index, doc, tookInNanos, reformat, maxSourceCharsToLog));\n }\n }\n \n@@ -156,9 +158,11 @@ static final class SlowLogParsedDocumentPrinter {\n private final long tookInNanos;\n private final boolean reformat;\n private final int maxSourceCharsToLog;\n+ private final Index index;\n \n- SlowLogParsedDocumentPrinter(ParsedDocument doc, long tookInNanos, boolean reformat, int maxSourceCharsToLog) {\n+ SlowLogParsedDocumentPrinter(Index index, ParsedDocument doc, long tookInNanos, boolean reformat, int maxSourceCharsToLog) {\n this.doc = doc;\n+ this.index = index;\n this.tookInNanos = tookInNanos;\n this.reformat = reformat;\n this.maxSourceCharsToLog = maxSourceCharsToLog;\n@@ -167,6 +171,7 @@ static final class SlowLogParsedDocumentPrinter {\n @Override\n public String toString() {\n StringBuilder sb = new StringBuilder();\n+ sb.append(index).append(\" \");\n sb.append(\"took[\").append(TimeValue.timeValueNanos(tookInNanos)).append(\"], took_millis[\").append(TimeUnit.NANOSECONDS.toMillis(tookInNanos)).append(\"], \");\n sb.append(\"type[\").append(doc.type()).append(\"], \");\n 
sb.append(\"id[\").append(doc.id()).append(\"], \");", "filename": "core/src/main/java/org/elasticsearch/index/IndexingSlowLog.java", "status": "modified" }, { "diff": "@@ -36,24 +36,30 @@\n \n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.not;\n+import static org.hamcrest.Matchers.startsWith;\n \n public class IndexingSlowLogTests extends ESTestCase {\n public void testSlowLogParsedDocumentPrinterSourceToLog() throws IOException {\n BytesReference source = JsonXContent.contentBuilder().startObject().field(\"foo\", \"bar\").endObject().bytes();\n ParsedDocument pd = new ParsedDocument(new StringField(\"uid\", \"test:id\", Store.YES), new LegacyIntField(\"version\", 1, Store.YES), \"id\",\n \"test\", null, 0, -1, null, source, null);\n-\n+ Index index = new Index(\"foo\", \"123\");\n // Turning off document logging doesn't log source[]\n- SlowLogParsedDocumentPrinter p = new SlowLogParsedDocumentPrinter(pd, 10, true, 0);\n+ SlowLogParsedDocumentPrinter p = new SlowLogParsedDocumentPrinter(index, pd, 10, true, 0);\n assertThat(p.toString(), not(containsString(\"source[\")));\n \n // Turning on document logging logs the whole thing\n- p = new SlowLogParsedDocumentPrinter(pd, 10, true, Integer.MAX_VALUE);\n+ p = new SlowLogParsedDocumentPrinter(index, pd, 10, true, Integer.MAX_VALUE);\n assertThat(p.toString(), containsString(\"source[{\\\"foo\\\":\\\"bar\\\"}]\"));\n \n // And you can truncate the source\n- p = new SlowLogParsedDocumentPrinter(pd, 10, true, 3);\n+ p = new SlowLogParsedDocumentPrinter(index, pd, 10, true, 3);\n+ assertThat(p.toString(), containsString(\"source[{\\\"f]\"));\n+\n+ // And you can truncate the source\n+ p = new SlowLogParsedDocumentPrinter(index, pd, 10, true, 3);\n assertThat(p.toString(), containsString(\"source[{\\\"f]\"));\n+ assertThat(p.toString(), startsWith(\"[foo/123] took\"));\n }\n \n public void testReformatSetting() {", "filename": "core/src/test/java/org/elasticsearch/index/IndexingSlowLogTests.java", "status": "modified" } ] }
{ "body": "With the following configuration\n\n```\nnode.name: ${prompt.text}\n```\n\nElasticsearch 1.6.0 prompts you twice. Once for \"node.name\" and \"name\". Ultimately it uses the second one for the value of the configuration item:\n\n```\ndjschny:elasticsearch-1.6.0 djschny$ ../startElastic.sh \nEnter value for [node.name]: foo\nEnter value for [name]: bar\n[2015-06-09 16:10:03,405][INFO ][node ] [bar] version[1.6.0], pid[4836], build[cdd3ac4/2015-06-09T13:36:34Z]\n[2015-06-09 16:10:03,405][INFO ][node ] [bar] initializing ...\n```\n", "comments": [ { "body": "@jaymode please could you take a look\n", "created_at": "2015-06-12T14:04:01Z" }, { "body": "Looks to be still a problem on 2.1. Regression here? @GlenRSmith and I can confirm this occurs with the 2.1 release archive.\n", "created_at": "2015-12-17T23:47:41Z" }, { "body": "It seems to be independent of which config item you want to prompt for:\n\n```\nelasticsearch-2.1.0  bin/elasticsearch\nEnter value for [cluster.name]: foo\nEnter value for [cluster.name]: bar\n[2015-12-18 10:48:50,219][INFO ][node ] [Wendell Vaughn] version[2.1.0], pid[21231], build[72cd1f1/2015-11-18T22:40:03Z]\n[2015-12-18 10:48:50,220][INFO ][node ] [Wendell Vaughn] initializing ...\n[2015-12-18 10:48:50,280][INFO ][plugins ] [Wendell Vaughn] loaded [], sites []\n[2015-12-18 10:48:50,304][INFO ][env ] [Wendell Vaughn] using [1] data paths, mounts [[/ (/dev/mapper/fedora_josh--xps13-root)]], net usable_space [79.1gb], net total_space [233.9gb], spins? [no], types [ext4]\n[2015-12-18 10:48:51,761][INFO ][node ] [Wendell Vaughn] initialized\n[2015-12-18 10:48:51,762][INFO ][node ] [Wendell Vaughn] starting ...\n[2015-12-18 10:48:51,902][INFO ][transport ] [Wendell Vaughn] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}\n[2015-12-18 10:48:51,912][INFO ][discovery ] [Wendell Vaughn] bar/azOFxYXMQ6muCpGOpqvxSw\n```\n", "created_at": "2015-12-17T23:49:53Z" }, { "body": "Just confirmed on 2.1.1.\n", "created_at": "2015-12-18T00:00:40Z" }, { "body": "This is different than the original issue. I think what is happening is we first initialize the settings/environment in bootstrap so that we can eg init logging, but then when bootstrap creates the node, it passes in a fresh Settings, and the Node constructor again initializes settings/environment.\n", "created_at": "2015-12-18T06:52:08Z" }, { "body": "@jaymode could you take a look at this please?\n", "created_at": "2016-01-18T20:16:45Z" }, { "body": "As @rjernst said, this is a different issue that the original. `BootstrapCLIParser` was added which extends CLITool. CLITool prepares the settings and environment which includes prompting since CLITools are usually run outside of the bootstrap process. This causes the first prompt that is ignored. The second prompt that is used, comes from `Bootstrap`, which is passed to the node.\n\nPart of the issue is that the `BootstrapCLIParser` sets properties that will change the value of settings. I think we can solve this a few different ways:\n1. Pass in empty settings/null environment for this CLITool. If we try to create a valid environment here then we have to prepare the settings to ensure we parse the paths from the settings for our directories\n2. Do not use the `CLITool` infrastructure\n3. Prepare the environment in bootstrap, pass to `BootstrapCLIParser`. 
Re-prepare the settings/environment, passing in the already prepared settings.\n\n@spinscale @rjernst any thoughts?\n", "created_at": "2016-01-19T15:07:27Z" }, { "body": "I would say preparing the environment once would be the best option, it will just take some refactoring to pass it through. Running prepare already creates Environment twice (the first time so it can try and load the config file, which might have other paths like plugin path or data path). Really I think we should simply not allow paths to be configured in elasticsearch.yml, instead it should only be through sysprops. It might be something that could simplify the settings/env prep, to make this a little easier.\n", "created_at": "2016-01-19T19:46:38Z" }, { "body": "I have a fix for this that will come as part of #16579.\n", "created_at": "2016-02-24T02:40:12Z" }, { "body": "Closed by #17024.\n", "created_at": "2016-03-14T00:07:24Z" } ], "number": 11564, "title": "${prompt.text} and ${prompt.secret} double prompting" }
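The root cause sketched in the comments — two independent rounds of settings preparation, each of which resolves `${prompt.text}` — can be illustrated with a small self-contained example. The helper below is hypothetical (it is not the real `InternalSettingsPreparer` or `CliTool` API); it only shows why the first answer is discarded and the second one wins.

```java
import java.util.Scanner;

public class DoublePromptSketch {
    // Any code path that resolves "${prompt.text}" placeholders prompts the user.
    static String prepareSetting(String rawValue, Scanner in) {
        if ("${prompt.text}".equals(rawValue)) {
            System.out.print("Enter value for [node.name]: ");
            return in.nextLine();
        }
        return rawValue;
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        String raw = "${prompt.text}";            // value from elasticsearch.yml
        String first = prepareSetting(raw, in);   // first pass (CLI parser / CliTool) - discarded
        String second = prepareSetting(raw, in);  // second pass (Bootstrap -> Node) - used
        System.out.println("first=" + first + " (discarded), second=" + second + " (used)");
    }
}
```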
{ "body": "We currently use commons-cli for command line parsing, along with complex abstractions built around it. This change simplifies how command line parsing is done, switching to the jopt-simple library.\n\nSome notes:\n- bin/elasticsearch no longer supports --foo=bar. It still supports -Dfoo=bar. This is a large simplification.\n- Removed the attachments runner cli. This is no longer necessary since it is now in ingest, and can be simulated there.\n\nCloses #11564\n", "number": 17024, "review_comments": [ { "body": "Since it's static now, change the name to `WRITER`?\n", "created_at": "2016-03-11T17:50:26Z" }, { "body": "I'm just curious why these needed to be added to the excludes? I get that the dependency is gone (yay!).\n", "created_at": "2016-03-11T17:58:01Z" }, { "body": "Because groovy depends on commons cli but we don't include it.\n", "created_at": "2016-03-11T18:06:23Z" }, { "body": "Shouldn't these be `-D` for now? I expect this test to _not_ pass right now?\n", "created_at": "2016-03-11T18:15:09Z" }, { "body": "Same thing here, `-D`?\n", "created_at": "2016-03-11T18:19:26Z" }, { "body": "Can you add a comment saying that? It'll make whoever touches this six months from now very happy.\n", "created_at": "2016-03-11T18:20:17Z" }, { "body": "Can you add a test for an unrecognized option, such as `--network.host`? :wink: \n", "created_at": "2016-03-11T18:20:18Z" }, { "body": "Sorry, I get _that_ part, I thought that Groovy jarjar'ed the CLI dependency though? \n", "created_at": "2016-03-11T18:26:45Z" }, { "body": "Such as this from their [packaging](https://github.com/apache/groovy/blob/6a9453534ed35eef06a2f481f88776acad3c4af6/gradle/assemble.gradle#L182-L198): \n\n``` gradle\n if (isRoot) {\n configurations.runtime.files.findAll { file ->\n ['antlr', 'asm', 'commons-cli'].any {\n file.name.startsWith(it)\n } && ['asm-attr', 'asm-util', 'asm-analysis'].every { !file.name.startsWith(it) }\n }.each { jarjarFile ->\n // GROOVY-7386 : excludes needed below to stop copy of incorrect maven meta info\n zipfileset(src: jarjarFile, excludes: 'META-INF/maven/commons-cli/commons-cli/*,META-INF/*')\n }\n\n zipfileset(src: configurations.runtime.files.find { file -> file.name.startsWith('asm-util') },\n includes: 'org/objectweb/asm/util/Printer.class,org/objectweb/asm/util/Textifier.class,org/objectweb/asm/util/Trace*')\n }\n rule pattern: 'antlr.**', result: 'groovyjarjarantlr.@1'\n rule pattern: 'org.objectweb.**', result: 'groovyjarjarasm.@1'\n rule pattern: 'org.apache.commons.cli.**', result: 'groovyjarjarcommonscli.@1'\n```\n", "created_at": "2016-03-11T18:30:40Z" }, { "body": "Yeah, I hadn't run the tests since switching back to `-D`. :)\n", "created_at": "2016-03-11T18:52:52Z" }, { "body": "I ran jdeps against the 2.4.6 indy jar, and there are some references to the jarjar version of commons cli, and some to the real cli package. So it would appear their packaging has problems...\n", "created_at": "2016-03-11T19:44:40Z" }, { "body": "> So it would appear their packaging has problems...\n\nPackaging is hard!\n", "created_at": "2016-03-11T19:51:15Z" }, { "body": "_Sigh_. Thanks for double checking.\n", "created_at": "2016-03-11T19:51:26Z" }, { "body": "+1\n", "created_at": "2016-03-11T19:51:57Z" }, { "body": "+1\n", "created_at": "2016-03-11T19:52:08Z" } ], "title": "Cli: Switch to jopt-simple" }
{ "commits": [ { "message": "Added jopt simple option parser and switched plugin cli to use it" }, { "message": "Removed old help files and improved plugin cli tests" }, { "message": "Removed check file command tests, check file command is going away" }, { "message": "Catch option error during execution too, since OptionSet is passed there" }, { "message": "Merge branch 'master' into cli-parsing" }, { "message": "Merge branch 'master' into cli-parsing" }, { "message": "Moved MockTerminal and created a base test case for cli commands." }, { "message": "Convert bootstrapcli parser to jopt-simple" }, { "message": "Remove reference to standalonerunner" }, { "message": "Removed old cli stuff, and add tests for new Command behavior" }, { "message": "Remove old help files" }, { "message": "Merge branch 'master' into cli-parsing" }, { "message": "More tests" }, { "message": "Remove old commons-cli dep" }, { "message": "Remove commons-cli sha and add jopt-simple sha" }, { "message": "Fix precommit" }, { "message": "Fix file rename to match class name" }, { "message": "Fix more licenses" }, { "message": "Merge branch 'master' into cli-parsing" }, { "message": "Addressed PR feedback\n* Fix tests still referring to -E\n* add comment about missing classes\n* rename writer constant" } ], "files": [ { "diff": "@@ -49,7 +49,7 @@ dependencies {\n compile 'org.elasticsearch:securesm:1.0'\n \n // utilities\n- compile 'commons-cli:commons-cli:1.3.1'\n+ compile 'net.sf.jopt-simple:jopt-simple:4.9'\n compile 'com.carrotsearch:hppc:0.7.1'\n \n // time handling, remove with java 8 time", "filename": "core/build.gradle", "status": "modified" }, { "diff": "@@ -19,15 +19,22 @@\n \n package org.elasticsearch.bootstrap;\n \n+import java.io.ByteArrayOutputStream;\n+import java.io.IOException;\n+import java.io.PrintStream;\n+import java.nio.file.Path;\n+import java.util.Locale;\n+import java.util.concurrent.CountDownLatch;\n+\n import org.apache.lucene.util.Constants;\n import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.StringHelper;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.Version;\n+import org.elasticsearch.cli.ExitCodes;\n+import org.elasticsearch.cli.Terminal;\n import org.elasticsearch.common.PidFile;\n import org.elasticsearch.common.SuppressForbidden;\n-import org.elasticsearch.common.cli.CliTool;\n-import org.elasticsearch.common.cli.Terminal;\n import org.elasticsearch.common.inject.CreationException;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.LogConfigurator;\n@@ -40,13 +47,6 @@\n import org.elasticsearch.node.Node;\n import org.elasticsearch.node.internal.InternalSettingsPreparer;\n \n-import java.io.ByteArrayOutputStream;\n-import java.io.IOException;\n-import java.io.PrintStream;\n-import java.nio.file.Path;\n-import java.util.Locale;\n-import java.util.concurrent.CountDownLatch;\n-\n import static org.elasticsearch.common.settings.Settings.Builder.EMPTY_SETTINGS;\n \n /**\n@@ -222,11 +222,11 @@ static void init(String[] args) throws Throwable {\n // Set the system property before anything has a chance to trigger its use\n initLoggerPrefix();\n \n- BootstrapCLIParser bootstrapCLIParser = new BootstrapCLIParser();\n- CliTool.ExitStatus status = bootstrapCLIParser.execute(args);\n+ BootstrapCliParser parser = new BootstrapCliParser();\n+ int status = parser.main(args, Terminal.DEFAULT);\n \n- if (CliTool.ExitStatus.OK != status) {\n- exit(status.status());\n+ if (parser.shouldRun() == false || status != 
ExitCodes.OK) {\n+ exit(status);\n }\n \n INSTANCE = new Bootstrap();\n@@ -307,14 +307,6 @@ private static void closeSysError() {\n System.err.close();\n }\n \n- @SuppressForbidden(reason = \"System#err\")\n- private static void sysError(String line, boolean flush) {\n- System.err.println(line);\n- if (flush) {\n- System.err.flush();\n- }\n- }\n-\n private static void checkForCustomConfFile() {\n String confFileSetting = System.getProperty(\"es.default.config\");\n checkUnsetAndMaybeExit(confFileSetting, \"es.default.config\");", "filename": "core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java", "status": "modified" }, { "diff": "@@ -0,0 +1,95 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.bootstrap;\n+\n+import java.util.Arrays;\n+\n+import joptsimple.OptionSet;\n+import joptsimple.OptionSpec;\n+import org.elasticsearch.Build;\n+import org.elasticsearch.cli.Command;\n+import org.elasticsearch.cli.ExitCodes;\n+import org.elasticsearch.cli.UserError;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.cli.Terminal;\n+import org.elasticsearch.common.SuppressForbidden;\n+import org.elasticsearch.monitor.jvm.JvmInfo;\n+\n+final class BootstrapCliParser extends Command {\n+\n+ private final OptionSpec<Void> versionOption;\n+ private final OptionSpec<Void> daemonizeOption;\n+ private final OptionSpec<String> pidfileOption;\n+ private final OptionSpec<String> propertyOption;\n+ private boolean shouldRun = false;\n+\n+ BootstrapCliParser() {\n+ super(\"Starts elasticsearch\");\n+ // TODO: in jopt-simple 5.0, make this mutually exclusive with all other options\n+ versionOption = parser.acceptsAll(Arrays.asList(\"V\", \"version\"),\n+ \"Prints elasticsearch version information and exits\");\n+ daemonizeOption = parser.acceptsAll(Arrays.asList(\"d\", \"daemonize\"),\n+ \"Starts Elasticsearch in the background\");\n+ // TODO: in jopt-simple 5.0 this option type can be a Path\n+ pidfileOption = parser.acceptsAll(Arrays.asList(\"p\", \"pidfile\"),\n+ \"Creates a pid file in the specified path on start\")\n+ .withRequiredArg();\n+ propertyOption = parser.accepts(\"D\", \"Configures an Elasticsearch setting\")\n+ .withRequiredArg();\n+ }\n+\n+ // TODO: don't use system properties as a way to do this, its horrible...\n+ @SuppressForbidden(reason = \"Sets system properties passed as CLI parameters\")\n+ @Override\n+ protected void execute(Terminal terminal, OptionSet options) throws Exception {\n+ if (options.has(versionOption)) {\n+ terminal.println(\"Version: \" + org.elasticsearch.Version.CURRENT\n+ + \", Build: \" + Build.CURRENT.shortHash() + \"/\" + Build.CURRENT.date()\n+ + \", JVM: \" + JvmInfo.jvmInfo().version());\n+ return;\n+ }\n+\n+ // TODO: don't use sysprops for any of 
these! pass the args through to bootstrap...\n+ if (options.has(daemonizeOption)) {\n+ System.setProperty(\"es.foreground\", \"false\");\n+ }\n+ String pidFile = pidfileOption.value(options);\n+ if (Strings.isNullOrEmpty(pidFile) == false) {\n+ System.setProperty(\"es.pidfile\", pidFile);\n+ }\n+\n+ for (String property : propertyOption.values(options)) {\n+ String[] keyValue = property.split(\"=\", 2);\n+ if (keyValue.length != 2) {\n+ throw new UserError(ExitCodes.USAGE, \"Malformed elasticsearch setting, must be of the form key=value\");\n+ }\n+ String key = keyValue[0];\n+ if (key.startsWith(\"es.\") == false) {\n+ key = \"es.\" + key;\n+ }\n+ System.setProperty(key, keyValue[1]);\n+ }\n+ shouldRun = true;\n+ }\n+\n+ boolean shouldRun() {\n+ return shouldRun;\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/bootstrap/BootstrapCliParser.java", "status": "added" }, { "diff": "@@ -32,7 +32,7 @@ private Elasticsearch() {}\n /**\n * Main entry point for starting elasticsearch\n */\n- public static void main(String[] args) throws StartupError {\n+ public static void main(String[] args) throws Exception {\n try {\n Bootstrap.init(args);\n } catch (Throwable t) {", "filename": "core/src/main/java/org/elasticsearch/bootstrap/Elasticsearch.java", "status": "modified" }, { "diff": "@@ -0,0 +1,112 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cli;\n+\n+import java.io.IOException;\n+import java.util.Arrays;\n+\n+import joptsimple.OptionException;\n+import joptsimple.OptionParser;\n+import joptsimple.OptionSet;\n+import joptsimple.OptionSpec;\n+import org.elasticsearch.common.SuppressForbidden;\n+\n+/**\n+ * An action to execute within a cli.\n+ */\n+public abstract class Command {\n+\n+ /** A description of the command, used in the help output. */\n+ protected final String description;\n+\n+ /** The option parser for this command. */\n+ protected final OptionParser parser = new OptionParser();\n+\n+ private final OptionSpec<Void> helpOption = parser.acceptsAll(Arrays.asList(\"h\", \"help\"), \"show help\").forHelp();\n+ private final OptionSpec<Void> silentOption = parser.acceptsAll(Arrays.asList(\"s\", \"silent\"), \"show minimal output\");\n+ private final OptionSpec<Void> verboseOption = parser.acceptsAll(Arrays.asList(\"v\", \"verbose\"), \"show verbose output\");\n+\n+ public Command(String description) {\n+ this.description = description;\n+ }\n+\n+ /** Parses options for this command from args and executes it. 
*/\n+ public final int main(String[] args, Terminal terminal) throws Exception {\n+ try {\n+ mainWithoutErrorHandling(args, terminal);\n+ } catch (OptionException e) {\n+ printHelp(terminal);\n+ terminal.println(Terminal.Verbosity.SILENT, \"ERROR: \" + e.getMessage());\n+ return ExitCodes.USAGE;\n+ } catch (UserError e) {\n+ terminal.println(Terminal.Verbosity.SILENT, \"ERROR: \" + e.getMessage());\n+ return e.exitCode;\n+ }\n+ return ExitCodes.OK;\n+ }\n+\n+ /**\n+ * Executes the command, but all errors are thrown.\n+ */\n+ void mainWithoutErrorHandling(String[] args, Terminal terminal) throws Exception {\n+ final OptionSet options = parser.parse(args);\n+\n+ if (options.has(helpOption)) {\n+ printHelp(terminal);\n+ return;\n+ }\n+\n+ if (options.has(silentOption)) {\n+ if (options.has(verboseOption)) {\n+ // mutually exclusive, we can remove this with jopt-simple 5.0, which natively supports it\n+ throw new UserError(ExitCodes.USAGE, \"Cannot specify -s and -v together\");\n+ }\n+ terminal.setVerbosity(Terminal.Verbosity.SILENT);\n+ } else if (options.has(verboseOption)) {\n+ terminal.setVerbosity(Terminal.Verbosity.VERBOSE);\n+ } else {\n+ terminal.setVerbosity(Terminal.Verbosity.NORMAL);\n+ }\n+\n+ execute(terminal, options);\n+ }\n+\n+ /** Prints a help message for the command to the terminal. */\n+ private void printHelp(Terminal terminal) throws IOException {\n+ terminal.println(description);\n+ terminal.println(\"\");\n+ printAdditionalHelp(terminal);\n+ parser.printHelpOn(terminal.getWriter());\n+ }\n+\n+ /** Prints additional help information, specific to the command */\n+ protected void printAdditionalHelp(Terminal terminal) {}\n+\n+ @SuppressForbidden(reason = \"Allowed to exit explicitly from #main()\")\n+ protected static void exit(int status) {\n+ System.exit(status);\n+ }\n+\n+ /**\n+ * Executes this command.\n+ *\n+ * Any runtime user errors (like an input file that does not exist), should throw a {@link UserError}. */\n+ protected abstract void execute(Terminal terminal, OptionSet options) throws Exception;\n+}", "filename": "core/src/main/java/org/elasticsearch/cli/Command.java", "status": "added" }, { "diff": "@@ -0,0 +1,42 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cli;\n+\n+/**\n+ * POSIX exit codes.\n+ */\n+public class ExitCodes {\n+ public static final int OK = 0;\n+ public static final int USAGE = 64; /* command line usage error */\n+ public static final int DATA_ERROR = 65; /* data format error */\n+ public static final int NO_INPUT = 66; /* cannot open input */\n+ public static final int NO_USER = 67; /* addressee unknown */\n+ public static final int NO_HOST = 68; /* host name unknown */\n+ public static final int UNAVAILABLE = 69; /* service unavailable */\n+ public static final int CODE_ERROR = 70; /* internal software error */\n+ public static final int CANT_CREATE = 73; /* can't create (user) output file */\n+ public static final int IO_ERROR = 74; /* input/output error */\n+ public static final int TEMP_FAILURE = 75; /* temp failure; user is invited to retry */\n+ public static final int PROTOCOL = 76; /* remote error in protocol */\n+ public static final int NOPERM = 77; /* permission denied */\n+ public static final int CONFIG = 78; /* configuration error */\n+\n+ private ExitCodes() { /* no instance, just constants */ }\n+}", "filename": "core/src/main/java/org/elasticsearch/cli/ExitCodes.java", "status": "added" }, { "diff": "@@ -0,0 +1,71 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cli;\n+\n+import java.util.Arrays;\n+import java.util.LinkedHashMap;\n+import java.util.Map;\n+\n+import joptsimple.NonOptionArgumentSpec;\n+import joptsimple.OptionSet;\n+\n+/**\n+ * A cli tool which is made up of multiple subcommands.\n+ */\n+public class MultiCommand extends Command {\n+\n+ protected final Map<String, Command> subcommands = new LinkedHashMap<>();\n+\n+ private final NonOptionArgumentSpec<String> arguments = parser.nonOptions(\"command\");\n+\n+ public MultiCommand(String description) {\n+ super(description);\n+ parser.posixlyCorrect(true);\n+ }\n+\n+ @Override\n+ protected void printAdditionalHelp(Terminal terminal) {\n+ if (subcommands.isEmpty()) {\n+ throw new IllegalStateException(\"No subcommands configured\");\n+ }\n+ terminal.println(\"Commands\");\n+ terminal.println(\"--------\");\n+ for (Map.Entry<String, Command> subcommand : subcommands.entrySet()) {\n+ terminal.println(subcommand.getKey() + \" - \" + subcommand.getValue().description);\n+ }\n+ terminal.println(\"\");\n+ }\n+\n+ @Override\n+ protected void execute(Terminal terminal, OptionSet options) throws Exception {\n+ if (subcommands.isEmpty()) {\n+ throw new IllegalStateException(\"No subcommands configured\");\n+ }\n+ String[] args = arguments.values(options).toArray(new String[0]);\n+ if (args.length == 0) {\n+ throw new UserError(ExitCodes.USAGE, \"Missing command\");\n+ }\n+ Command subcommand = subcommands.get(args[0]);\n+ if (subcommand == null) {\n+ throw new UserError(ExitCodes.USAGE, \"Unknown command [\" + args[0] + \"]\");\n+ }\n+ subcommand.mainWithoutErrorHandling(Arrays.copyOfRange(args, 1, args.length), terminal);\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/cli/MultiCommand.java", "status": "added" }, { "diff": "@@ -22,7 +22,7 @@\n \n import org.apache.log4j.AppenderSkeleton;\n import org.apache.log4j.spi.LoggingEvent;\n-import org.elasticsearch.common.cli.Terminal;\n+import org.elasticsearch.cli.Terminal;\n \n /**\n * TerminalAppender logs event to Terminal.DEFAULT. 
It is used for example by the PluginCli.", "filename": "core/src/main/java/org/elasticsearch/common/logging/TerminalAppender.java", "status": "modified" }, { "diff": "@@ -23,7 +23,7 @@\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.common.Randomness;\n import org.elasticsearch.common.Strings;\n-import org.elasticsearch.common.cli.Terminal;\n+import org.elasticsearch.cli.Terminal;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;", "filename": "core/src/main/java/org/elasticsearch/node/internal/InternalSettingsPreparer.java", "status": "modified" }, { "diff": "@@ -19,16 +19,18 @@\n \n package org.elasticsearch.plugins;\n \n+import joptsimple.OptionSet;\n+import joptsimple.OptionSpec;\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.Build;\n import org.elasticsearch.Version;\n import org.elasticsearch.bootstrap.JarHell;\n-import org.elasticsearch.common.cli.CliTool;\n-import org.elasticsearch.common.cli.Terminal;\n-import org.elasticsearch.common.cli.UserError;\n+import org.elasticsearch.cli.Command;\n+import org.elasticsearch.cli.ExitCodes;\n+import org.elasticsearch.cli.Terminal;\n+import org.elasticsearch.cli.UserError;\n import org.elasticsearch.common.hash.MessageDigests;\n import org.elasticsearch.common.io.FileSystemUtils;\n-import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n \n import java.io.BufferedReader;\n@@ -48,14 +50,15 @@\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.HashSet;\n+import java.util.LinkedHashSet;\n import java.util.List;\n import java.util.Locale;\n import java.util.Set;\n import java.util.zip.ZipEntry;\n import java.util.zip.ZipInputStream;\n \n import static java.util.Collections.unmodifiableSet;\n-import static org.elasticsearch.common.cli.Terminal.Verbosity.VERBOSE;\n+import static org.elasticsearch.cli.Terminal.Verbosity.VERBOSE;\n import static org.elasticsearch.common.util.set.Sets.newHashSet;\n \n /**\n@@ -88,7 +91,7 @@\n * elasticsearch config directory, using the name of the plugin. 
If any files to be installed\n * already exist, they will be skipped.\n */\n-class InstallPluginCommand extends CliTool.Command {\n+class InstallPluginCommand extends Command {\n \n private static final String PROPERTY_SUPPORT_STAGING_URLS = \"es.plugins.staging\";\n \n@@ -98,7 +101,7 @@ class InstallPluginCommand extends CliTool.Command {\n \"lang-groovy\"));\n \n // TODO: make this a resource file generated by gradle\n- static final Set<String> OFFICIAL_PLUGINS = unmodifiableSet(newHashSet(\n+ static final Set<String> OFFICIAL_PLUGINS = unmodifiableSet(new LinkedHashSet<>(Arrays.asList(\n \"analysis-icu\",\n \"analysis-kuromoji\",\n \"analysis-phonetic\",\n@@ -117,35 +120,57 @@ class InstallPluginCommand extends CliTool.Command {\n \"repository-azure\",\n \"repository-hdfs\",\n \"repository-s3\",\n- \"store-smb\"));\n-\n- private final String pluginId;\n- private final boolean batch;\n+ \"store-smb\")));\n+\n+ private final Environment env;\n+ private final OptionSpec<Void> batchOption;\n+ private final OptionSpec<String> arguments;\n+\n+ InstallPluginCommand(Environment env) {\n+ super(\"Install a plugin\");\n+ this.env = env;\n+ this.batchOption = parser.acceptsAll(Arrays.asList(\"b\", \"batch\"),\n+ \"Enable batch mode explicitly, automatic confirmation of security permission\");\n+ this.arguments = parser.nonOptions(\"plugin id\");\n+ }\n \n- InstallPluginCommand(Terminal terminal, String pluginId, boolean batch) {\n- super(terminal);\n- this.pluginId = pluginId;\n- this.batch = batch;\n+ @Override\n+ protected void printAdditionalHelp(Terminal terminal) {\n+ terminal.println(\"The following official plugins may be installed by name:\");\n+ for (String plugin : OFFICIAL_PLUGINS) {\n+ terminal.println(\" \" + plugin);\n+ }\n+ terminal.println(\"\");\n }\n \n @Override\n- public CliTool.ExitStatus execute(Settings settings, Environment env) throws Exception {\n+ protected void execute(Terminal terminal, OptionSet options) throws Exception {\n+ // TODO: in jopt-simple 5.0 we can enforce a min/max number of positional args\n+ List<String> args = arguments.values(options);\n+ if (args.size() != 1) {\n+ throw new UserError(ExitCodes.USAGE, \"Must supply a single plugin id argument\");\n+ }\n+ String pluginId = args.get(0);\n+ boolean isBatch = options.has(batchOption) || System.console() == null;\n+ execute(terminal, pluginId, isBatch);\n+ }\n+\n+ // pkg private for testing\n+ void execute(Terminal terminal, String pluginId, boolean isBatch) throws Exception {\n \n // TODO: remove this leniency!! is it needed anymore?\n if (Files.exists(env.pluginsFile()) == false) {\n terminal.println(\"Plugins directory [\" + env.pluginsFile() + \"] does not exist. Creating...\");\n Files.createDirectory(env.pluginsFile());\n }\n \n- Path pluginZip = download(pluginId, env.tmpFile());\n+ Path pluginZip = download(terminal, pluginId, env.tmpFile());\n Path extractedZip = unzip(pluginZip, env.pluginsFile());\n- install(extractedZip, env);\n-\n- return CliTool.ExitStatus.OK;\n+ install(terminal, isBatch, extractedZip);\n }\n \n /** Downloads the plugin and returns the file it was downloaded to. 
*/\n- private Path download(String pluginId, Path tmpDir) throws Exception {\n+ private Path download(Terminal terminal, String pluginId, Path tmpDir) throws Exception {\n if (OFFICIAL_PLUGINS.contains(pluginId)) {\n final String version = Version.CURRENT.toString();\n final String url;\n@@ -195,14 +220,14 @@ private Path downloadZipAndChecksum(String urlString, Path tmpDir) throws Except\n BufferedReader checksumReader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));\n expectedChecksum = checksumReader.readLine();\n if (checksumReader.readLine() != null) {\n- throw new UserError(CliTool.ExitStatus.IO_ERROR, \"Invalid checksum file at \" + checksumUrl);\n+ throw new UserError(ExitCodes.IO_ERROR, \"Invalid checksum file at \" + checksumUrl);\n }\n }\n \n byte[] zipbytes = Files.readAllBytes(zip);\n String gotChecksum = MessageDigests.toHexString(MessageDigests.sha1().digest(zipbytes));\n if (expectedChecksum.equals(gotChecksum) == false) {\n- throw new UserError(CliTool.ExitStatus.IO_ERROR, \"SHA1 mismatch, expected \" + expectedChecksum + \" but got \" + gotChecksum);\n+ throw new UserError(ExitCodes.IO_ERROR, \"SHA1 mismatch, expected \" + expectedChecksum + \" but got \" + gotChecksum);\n }\n \n return zip;\n@@ -244,21 +269,21 @@ private Path unzip(Path zip, Path pluginsDir) throws IOException, UserError {\n Files.delete(zip);\n if (hasEsDir == false) {\n IOUtils.rm(target);\n- throw new UserError(CliTool.ExitStatus.DATA_ERROR, \"`elasticsearch` directory is missing in the plugin zip\");\n+ throw new UserError(ExitCodes.DATA_ERROR, \"`elasticsearch` directory is missing in the plugin zip\");\n }\n return target;\n }\n \n /** Load information about the plugin, and verify it can be installed with no errors. */\n- private PluginInfo verify(Path pluginRoot, Environment env) throws Exception {\n+ private PluginInfo verify(Terminal terminal, Path pluginRoot, boolean isBatch) throws Exception {\n // read and validate the plugin descriptor\n PluginInfo info = PluginInfo.readFromProperties(pluginRoot);\n terminal.println(VERBOSE, info.toString());\n \n // don't let luser install plugin as a module...\n // they might be unavoidably in maven central and are packaged up the same way)\n if (MODULES.contains(info.getName())) {\n- throw new UserError(CliTool.ExitStatus.USAGE, \"plugin '\" + info.getName() + \"' cannot be installed like this, it is a system module\");\n+ throw new UserError(ExitCodes.USAGE, \"plugin '\" + info.getName() + \"' cannot be installed like this, it is a system module\");\n }\n \n // check for jar hell before any copying\n@@ -268,7 +293,7 @@ private PluginInfo verify(Path pluginRoot, Environment env) throws Exception {\n // if it exists, confirm or warn the user\n Path policy = pluginRoot.resolve(PluginInfo.ES_PLUGIN_POLICY);\n if (Files.exists(policy)) {\n- PluginSecurity.readPolicy(policy, terminal, env, batch);\n+ PluginSecurity.readPolicy(policy, terminal, env, isBatch);\n }\n \n return info;\n@@ -305,16 +330,16 @@ private void jarHellCheck(Path candidate, Path pluginsDir, boolean isolated) thr\n * Installs the plugin from {@code tmpRoot} into the plugins dir.\n * If the plugin has a bin dir and/or a config dir, those are copied.\n */\n- private void install(Path tmpRoot, Environment env) throws Exception {\n+ private void install(Terminal terminal, boolean isBatch, Path tmpRoot) throws Exception {\n List<Path> deleteOnFailure = new ArrayList<>();\n deleteOnFailure.add(tmpRoot);\n \n try {\n- PluginInfo info = verify(tmpRoot, env);\n+ PluginInfo 
info = verify(terminal, tmpRoot, isBatch);\n \n final Path destination = env.pluginsFile().resolve(info.getName());\n if (Files.exists(destination)) {\n- throw new UserError(CliTool.ExitStatus.USAGE, \"plugin directory \" + destination.toAbsolutePath() + \" already exists. To update the plugin, uninstall it first using 'remove \" + info.getName() + \"' command\");\n+ throw new UserError(ExitCodes.USAGE, \"plugin directory \" + destination.toAbsolutePath() + \" already exists. To update the plugin, uninstall it first using 'remove \" + info.getName() + \"' command\");\n }\n \n Path tmpBinDir = tmpRoot.resolve(\"bin\");\n@@ -347,7 +372,7 @@ private void install(Path tmpRoot, Environment env) throws Exception {\n /** Copies the files from {@code tmpBinDir} into {@code destBinDir}, along with permissions from dest dirs parent. */\n private void installBin(PluginInfo info, Path tmpBinDir, Path destBinDir) throws Exception {\n if (Files.isDirectory(tmpBinDir) == false) {\n- throw new UserError(CliTool.ExitStatus.IO_ERROR, \"bin in plugin \" + info.getName() + \" is not a directory\");\n+ throw new UserError(ExitCodes.IO_ERROR, \"bin in plugin \" + info.getName() + \" is not a directory\");\n }\n Files.createDirectory(destBinDir);\n \n@@ -365,7 +390,7 @@ private void installBin(PluginInfo info, Path tmpBinDir, Path destBinDir) throws\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(tmpBinDir)) {\n for (Path srcFile : stream) {\n if (Files.isDirectory(srcFile)) {\n- throw new UserError(CliTool.ExitStatus.DATA_ERROR, \"Directories not allowed in bin dir for plugin \" + info.getName() + \", found \" + srcFile.getFileName());\n+ throw new UserError(ExitCodes.DATA_ERROR, \"Directories not allowed in bin dir for plugin \" + info.getName() + \", found \" + srcFile.getFileName());\n }\n \n Path destFile = destBinDir.resolve(tmpBinDir.relativize(srcFile));\n@@ -386,7 +411,7 @@ private void installBin(PluginInfo info, Path tmpBinDir, Path destBinDir) throws\n */\n private void installConfig(PluginInfo info, Path tmpConfigDir, Path destConfigDir) throws Exception {\n if (Files.isDirectory(tmpConfigDir) == false) {\n- throw new UserError(CliTool.ExitStatus.IO_ERROR, \"config in plugin \" + info.getName() + \" is not a directory\");\n+ throw new UserError(ExitCodes.IO_ERROR, \"config in plugin \" + info.getName() + \" is not a directory\");\n }\n \n // create the plugin's config dir \"if necessary\"\n@@ -395,7 +420,7 @@ private void installConfig(PluginInfo info, Path tmpConfigDir, Path destConfigDi\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(tmpConfigDir)) {\n for (Path srcFile : stream) {\n if (Files.isDirectory(srcFile)) {\n- throw new UserError(CliTool.ExitStatus.DATA_ERROR, \"Directories not allowed in config dir for plugin \" + info.getName());\n+ throw new UserError(ExitCodes.DATA_ERROR, \"Directories not allowed in config dir for plugin \" + info.getName());\n }\n \n Path destFile = destConfigDir.resolve(tmpConfigDir.relativize(srcFile));", "filename": "core/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java", "status": "modified" }, { "diff": "@@ -24,22 +24,25 @@\n import java.nio.file.Files;\n import java.nio.file.Path;\n \n-import org.elasticsearch.common.cli.CliTool;\n-import org.elasticsearch.common.cli.Terminal;\n-import org.elasticsearch.common.settings.Settings;\n+import joptsimple.OptionSet;\n+import org.elasticsearch.cli.Command;\n+import org.elasticsearch.cli.Terminal;\n import org.elasticsearch.env.Environment;\n \n /**\n * A command for the 
plugin cli to list plugins installed in elasticsearch.\n */\n-class ListPluginsCommand extends CliTool.Command {\n+class ListPluginsCommand extends Command {\n \n- ListPluginsCommand(Terminal terminal) {\n- super(terminal);\n+ private final Environment env;\n+\n+ ListPluginsCommand(Environment env) {\n+ super(\"Lists installed elasticsearch plugins\");\n+ this.env = env;\n }\n \n @Override\n- public CliTool.ExitStatus execute(Settings settings, Environment env) throws Exception {\n+ protected void execute(Terminal terminal, OptionSet options) throws Exception {\n if (Files.exists(env.pluginsFile()) == false) {\n throw new IOException(\"Plugins directory missing: \" + env.pluginsFile());\n }\n@@ -50,7 +53,5 @@ public CliTool.ExitStatus execute(Settings settings, Environment env) throws Exc\n terminal.println(plugin.getFileName().toString());\n }\n }\n-\n- return CliTool.ExitStatus.OK;\n }\n }", "filename": "core/src/main/java/org/elasticsearch/plugins/ListPluginsCommand.java", "status": "modified" }, { "diff": "@@ -19,106 +19,29 @@\n \n package org.elasticsearch.plugins;\n \n-import org.apache.commons.cli.CommandLine;\n-import org.elasticsearch.common.SuppressForbidden;\n-import org.elasticsearch.common.cli.CliTool;\n-import org.elasticsearch.common.cli.CliToolConfig;\n-import org.elasticsearch.common.cli.Terminal;\n-import org.elasticsearch.common.logging.LogConfigurator;\n+import org.apache.log4j.BasicConfigurator;\n+import org.apache.log4j.varia.NullAppender;\n+import org.elasticsearch.cli.MultiCommand;\n+import org.elasticsearch.cli.Terminal;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.node.internal.InternalSettingsPreparer;\n \n-import java.util.Locale;\n-\n-import static org.elasticsearch.common.cli.CliToolConfig.Builder.cmd;\n-import static org.elasticsearch.common.cli.CliToolConfig.Builder.option;\n-\n /**\n * A cli tool for adding, removing and listing plugins for elasticsearch.\n */\n-public class PluginCli extends CliTool {\n-\n- // commands\n- private static final String LIST_CMD_NAME = \"list\";\n- private static final String INSTALL_CMD_NAME = \"install\";\n- private static final String REMOVE_CMD_NAME = \"remove\";\n-\n- // usage config\n- private static final CliToolConfig.Cmd LIST_CMD = cmd(LIST_CMD_NAME, ListPluginsCommand.class).build();\n- private static final CliToolConfig.Cmd INSTALL_CMD = cmd(INSTALL_CMD_NAME, InstallPluginCommand.class)\n- .options(option(\"b\", \"batch\").required(false))\n- .build();\n- private static final CliToolConfig.Cmd REMOVE_CMD = cmd(REMOVE_CMD_NAME, RemovePluginCommand.class).build();\n-\n- static final CliToolConfig CONFIG = CliToolConfig.config(\"plugin\", PluginCli.class)\n- .cmds(LIST_CMD, INSTALL_CMD, REMOVE_CMD)\n- .build();\n-\n- public static void main(String[] args) throws Exception {\n- // initialize default for es.logger.level because we will not read the logging.yml\n- String loggerLevel = System.getProperty(\"es.logger.level\", \"INFO\");\n- // Set the appender for all potential log files to terminal so that other components that use the logger print out the\n- // same terminal.\n- // The reason for this is that the plugin cli cannot be configured with a file appender because when the plugin command is\n- // executed there is no way of knowing where the logfiles should be placed. 
For example, if elasticsearch\n- // is run as service then the logs should be at /var/log/elasticsearch but when started from the tar they should be at es.home/logs.\n- // Therefore we print to Terminal.\n- Environment env = InternalSettingsPreparer.prepareEnvironment(Settings.builder()\n- .put(\"appender.terminal.type\", \"terminal\")\n- .put(\"rootLogger\", \"${es.logger.level}, terminal\")\n- .put(\"es.logger.level\", loggerLevel)\n- .build(), Terminal.DEFAULT);\n- // configure but do not read the logging conf file\n- LogConfigurator.configure(env.settings(), false);\n- int status = new PluginCli(Terminal.DEFAULT).execute(args).status();\n- exit(status);\n- }\n-\n- @SuppressForbidden(reason = \"Allowed to exit explicitly from #main()\")\n- private static void exit(int status) {\n- System.exit(status);\n- }\n-\n- PluginCli(Terminal terminal) {\n- super(CONFIG, terminal);\n- }\n+public class PluginCli extends MultiCommand {\n \n- @Override\n- protected Command parse(String cmdName, CommandLine cli) throws Exception {\n- switch (cmdName.toLowerCase(Locale.ROOT)) {\n- case LIST_CMD_NAME:\n- return new ListPluginsCommand(terminal);\n- case INSTALL_CMD_NAME:\n- return parseInstallPluginCommand(cli);\n- case REMOVE_CMD_NAME:\n- return parseRemovePluginCommand(cli);\n- default:\n- assert false : \"can't get here as cmd name is validated before this method is called\";\n- return exitCmd(ExitStatus.USAGE);\n- }\n+ public PluginCli(Environment env) {\n+ super(\"A tool for managing installed elasticsearch plugins\");\n+ subcommands.put(\"list\", new ListPluginsCommand(env));\n+ subcommands.put(\"install\", new InstallPluginCommand(env));\n+ subcommands.put(\"remove\", new RemovePluginCommand(env));\n }\n \n- private Command parseInstallPluginCommand(CommandLine cli) {\n- String[] args = cli.getArgs();\n- if (args.length != 1) {\n- return exitCmd(ExitStatus.USAGE, terminal, \"Must supply a single plugin id argument\");\n- }\n-\n- boolean batch = System.console() == null;\n- if (cli.hasOption(\"b\")) {\n- batch = true;\n- }\n-\n- return new InstallPluginCommand(terminal, args[0], batch);\n- }\n-\n- private Command parseRemovePluginCommand(CommandLine cli) {\n- String[] args = cli.getArgs();\n- if (args.length != 1) {\n- return exitCmd(ExitStatus.USAGE, terminal, \"Must supply a single plugin name argument\");\n- }\n-\n- return new RemovePluginCommand(terminal, args[0]);\n+ public static void main(String[] args) throws Exception {\n+ BasicConfigurator.configure(new NullAppender());\n+ Environment env = InternalSettingsPreparer.prepareEnvironment(Settings.EMPTY, Terminal.DEFAULT);\n+ exit(new PluginCli(env).main(args, Terminal.DEFAULT));\n }\n }", "filename": "core/src/main/java/org/elasticsearch/plugins/PluginCli.java", "status": "modified" }, { "diff": "@@ -20,8 +20,8 @@\n package org.elasticsearch.plugins;\n \n import org.apache.lucene.util.IOUtils;\n-import org.elasticsearch.common.cli.Terminal;\n-import org.elasticsearch.common.cli.Terminal.Verbosity;\n+import org.elasticsearch.cli.Terminal;\n+import org.elasticsearch.cli.Terminal.Verbosity;\n import org.elasticsearch.env.Environment;\n \n import java.io.IOException;", "filename": "core/src/main/java/org/elasticsearch/plugins/PluginSecurity.java", "status": "modified" }, { "diff": "@@ -19,48 +19,63 @@\n \n package org.elasticsearch.plugins;\n \n-import org.apache.lucene.util.IOUtils;\n-import org.elasticsearch.common.Strings;\n-import org.elasticsearch.common.cli.CliTool;\n-import org.elasticsearch.common.cli.Terminal;\n-import 
org.elasticsearch.common.cli.UserError;\n-import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.env.Environment;\n-\n import java.nio.file.Files;\n import java.nio.file.Path;\n import java.nio.file.StandardCopyOption;\n import java.util.ArrayList;\n import java.util.List;\n \n-import static org.elasticsearch.common.cli.Terminal.Verbosity.VERBOSE;\n+import joptsimple.OptionSet;\n+import joptsimple.OptionSpec;\n+import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.cli.Command;\n+import org.elasticsearch.cli.ExitCodes;\n+import org.elasticsearch.cli.UserError;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.cli.Terminal;\n+import org.elasticsearch.env.Environment;\n+\n+import static org.elasticsearch.cli.Terminal.Verbosity.VERBOSE;\n \n /**\n * A command for the plugin cli to remove a plugin from elasticsearch.\n */\n-class RemovePluginCommand extends CliTool.Command {\n- private final String pluginName;\n+class RemovePluginCommand extends Command {\n+\n+ private final Environment env;\n+ private final OptionSpec<String> arguments;\n \n- public RemovePluginCommand(Terminal terminal, String pluginName) {\n- super(terminal);\n- this.pluginName = pluginName;\n+ RemovePluginCommand(Environment env) {\n+ super(\"Removes a plugin from elasticsearch\");\n+ this.env = env;\n+ this.arguments = parser.nonOptions(\"plugin name\");\n }\n \n @Override\n- public CliTool.ExitStatus execute(Settings settings, Environment env) throws Exception {\n+ protected void execute(Terminal terminal, OptionSet options) throws Exception {\n+ // TODO: in jopt-simple 5.0 we can enforce a min/max number of positional args\n+ List<String> args = arguments.values(options);\n+ if (args.size() != 1) {\n+ throw new UserError(ExitCodes.USAGE, \"Must supply a single plugin id argument\");\n+ }\n+ execute(terminal, args.get(0));\n+ }\n+\n+ // pkg private for testing\n+ void execute(Terminal terminal, String pluginName) throws Exception {\n terminal.println(\"-> Removing \" + Strings.coalesceToEmpty(pluginName) + \"...\");\n \n Path pluginDir = env.pluginsFile().resolve(pluginName);\n if (Files.exists(pluginDir) == false) {\n- throw new UserError(CliTool.ExitStatus.USAGE, \"Plugin \" + pluginName + \" not found. Run 'plugin list' to get list of installed plugins.\");\n+ throw new UserError(ExitCodes.USAGE, \"Plugin \" + pluginName + \" not found. Run 'plugin list' to get list of installed plugins.\");\n }\n \n List<Path> pluginPaths = new ArrayList<>();\n \n Path pluginBinDir = env.binFile().resolve(pluginName);\n if (Files.exists(pluginBinDir)) {\n if (Files.isDirectory(pluginBinDir) == false) {\n- throw new UserError(CliTool.ExitStatus.IO_ERROR, \"Bin dir for \" + pluginName + \" is not a directory\");\n+ throw new UserError(ExitCodes.IO_ERROR, \"Bin dir for \" + pluginName + \" is not a directory\");\n }\n pluginPaths.add(pluginBinDir);\n terminal.println(VERBOSE, \"Removing: \" + pluginBinDir);\n@@ -72,7 +87,5 @@ public CliTool.ExitStatus execute(Settings settings, Environment env) throws Exc\n pluginPaths.add(tmpPluginDir);\n \n IOUtils.rm(pluginPaths.toArray(new Path[pluginPaths.size()]));\n-\n- return CliTool.ExitStatus.OK;\n }\n }", "filename": "core/src/main/java/org/elasticsearch/plugins/RemovePluginCommand.java", "status": "modified" }, { "diff": "@@ -0,0 +1,123 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. 
See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cli;\n+\n+import joptsimple.OptionSet;\n+import org.elasticsearch.test.ESTestCase;\n+\n+public class CommandTests extends ESTestCase {\n+\n+ static class UserErrorCommand extends Command {\n+ UserErrorCommand() {\n+ super(\"Throws a user error\");\n+ }\n+ @Override\n+ protected void execute(Terminal terminal, OptionSet options) throws Exception {\n+ throw new UserError(ExitCodes.DATA_ERROR, \"Bad input\");\n+ }\n+ }\n+\n+ static class NoopCommand extends Command {\n+ boolean executed = false;\n+ NoopCommand() {\n+ super(\"Does nothing\");\n+ }\n+ @Override\n+ protected void execute(Terminal terminal, OptionSet options) throws Exception {\n+ terminal.println(\"Normal output\");\n+ terminal.println(Terminal.Verbosity.SILENT, \"Silent output\");\n+ terminal.println(Terminal.Verbosity.VERBOSE, \"Verbose output\");\n+ executed = true;\n+ }\n+ @Override\n+ protected void printAdditionalHelp(Terminal terminal) {\n+ terminal.println(\"Some extra help\");\n+ }\n+ }\n+\n+ public void testHelp() throws Exception {\n+ NoopCommand command = new NoopCommand();\n+ MockTerminal terminal = new MockTerminal();\n+ String[] args = {\"-h\"};\n+ int status = command.main(args, terminal);\n+ String output = terminal.getOutput();\n+ assertEquals(output, ExitCodes.OK, status);\n+ assertTrue(output, output.contains(\"Does nothing\"));\n+ assertTrue(output, output.contains(\"Some extra help\"));\n+ assertFalse(command.executed);\n+\n+ command = new NoopCommand();\n+ String[] args2 = {\"--help\"};\n+ status = command.main(args2, terminal);\n+ output = terminal.getOutput();\n+ assertEquals(output, ExitCodes.OK, status);\n+ assertTrue(output, output.contains(\"Does nothing\"));\n+ assertTrue(output, output.contains(\"Some extra help\"));\n+ assertFalse(command.executed);\n+ }\n+\n+ public void testVerbositySilentAndVerbose() throws Exception {\n+ MockTerminal terminal = new MockTerminal();\n+ NoopCommand command = new NoopCommand();\n+ String[] args = {\"-v\", \"-s\"};\n+ UserError e = expectThrows(UserError.class, () -> {\n+ command.mainWithoutErrorHandling(args, terminal);\n+ });\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"Cannot specify -s and -v together\"));\n+ }\n+\n+ public void testSilentVerbosity() throws Exception {\n+ MockTerminal terminal = new MockTerminal();\n+ NoopCommand command = new NoopCommand();\n+ String[] args = {\"-s\"};\n+ command.main(args, terminal);\n+ String output = terminal.getOutput();\n+ assertTrue(output, output.contains(\"Silent output\"));\n+ }\n+\n+ public void testNormalVerbosity() throws Exception {\n+ MockTerminal terminal = new MockTerminal();\n+ terminal.setVerbosity(Terminal.Verbosity.SILENT);\n+ NoopCommand command = new NoopCommand();\n+ String[] args = {};\n+ command.main(args, terminal);\n+ String output = 
terminal.getOutput();\n+ assertTrue(output, output.contains(\"Normal output\"));\n+ }\n+\n+ public void testVerboseVerbosity() throws Exception {\n+ MockTerminal terminal = new MockTerminal();\n+ NoopCommand command = new NoopCommand();\n+ String[] args = {\"-v\"};\n+ command.main(args, terminal);\n+ String output = terminal.getOutput();\n+ assertTrue(output, output.contains(\"Verbose output\"));\n+ }\n+\n+ public void testUserError() throws Exception {\n+ MockTerminal terminal = new MockTerminal();\n+ UserErrorCommand command = new UserErrorCommand();\n+ String[] args = {};\n+ int status = command.main(args, terminal);\n+ String output = terminal.getOutput();\n+ assertEquals(output, ExitCodes.DATA_ERROR, status);\n+ assertTrue(output, output.contains(\"ERROR: Bad input\"));\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/cli/CommandTests.java", "status": "added" }, { "diff": "@@ -0,0 +1,105 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cli;\n+\n+import joptsimple.OptionSet;\n+import org.junit.Before;\n+\n+public class MultiCommandTests extends CommandTestCase {\n+\n+ static class DummyMultiCommand extends MultiCommand {\n+ DummyMultiCommand() {\n+ super(\"A dummy multi command\");\n+ }\n+ }\n+\n+ static class DummySubCommand extends Command {\n+ DummySubCommand() {\n+ super(\"A dummy subcommand\");\n+ }\n+ @Override\n+ protected void execute(Terminal terminal, OptionSet options) throws Exception {\n+ terminal.println(\"Arguments: \" + options.nonOptionArguments().toString());\n+ }\n+ }\n+\n+ DummyMultiCommand multiCommand;\n+\n+ @Before\n+ public void setupCommand() {\n+ multiCommand = new DummyMultiCommand();\n+ }\n+\n+ @Override\n+ protected Command newCommand() {\n+ return multiCommand;\n+ }\n+\n+ public void testNoCommandsConfigured() throws Exception {\n+ IllegalStateException e = expectThrows(IllegalStateException.class, () -> {\n+ execute();\n+ });\n+ assertEquals(\"No subcommands configured\", e.getMessage());\n+ }\n+\n+ public void testUnknownCommand() throws Exception {\n+ multiCommand.subcommands.put(\"something\", new DummySubCommand());\n+ UserError e = expectThrows(UserError.class, () -> {\n+ execute(\"somethingelse\");\n+ });\n+ assertEquals(ExitCodes.USAGE, e.exitCode);\n+ assertEquals(\"Unknown command [somethingelse]\", e.getMessage());\n+ }\n+\n+ public void testMissingCommand() throws Exception {\n+ multiCommand.subcommands.put(\"command1\", new DummySubCommand());\n+ UserError e = expectThrows(UserError.class, () -> {\n+ execute();\n+ });\n+ assertEquals(ExitCodes.USAGE, e.exitCode);\n+ assertEquals(\"Missing command\", e.getMessage());\n+ }\n+\n+ public void testHelp() throws Exception {\n+ multiCommand.subcommands.put(\"command1\", new 
DummySubCommand());\n+ multiCommand.subcommands.put(\"command2\", new DummySubCommand());\n+ execute(\"-h\");\n+ String output = terminal.getOutput();\n+ assertTrue(output, output.contains(\"command1\"));\n+ assertTrue(output, output.contains(\"command2\"));\n+ }\n+\n+ public void testSubcommandHelp() throws Exception {\n+ multiCommand.subcommands.put(\"command1\", new DummySubCommand());\n+ multiCommand.subcommands.put(\"command2\", new DummySubCommand());\n+ execute(\"command2\", \"-h\");\n+ String output = terminal.getOutput();\n+ assertFalse(output, output.contains(\"command1\"));\n+ assertTrue(output, output.contains(\"A dummy subcommand\"));\n+ }\n+\n+ public void testSubcommandArguments() throws Exception {\n+ multiCommand.subcommands.put(\"command1\", new DummySubCommand());\n+ execute(\"command1\", \"foo\", \"bar\");\n+ String output = terminal.getOutput();\n+ assertFalse(output, output.contains(\"command1\"));\n+ assertTrue(output, output.contains(\"Arguments: [foo, bar]\"));\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/cli/MultiCommandTests.java", "status": "added" } ] }
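The refactoring above replaces the old `CliTool` plumbing with the new `org.elasticsearch.cli` classes (`Command`, `MultiCommand`, `ExitCodes`, `UserError`) backed by jopt-simple. As a rough illustration of how a tool plugs into this API, here is a minimal, hypothetical sketch: the `ExampleCli`/`GreetCommand` names and the package are invented for illustration, and only the `org.elasticsearch.cli` and jopt-simple types come from the diffs above.

```
package org.elasticsearch.example; // hypothetical package, for illustration only

import joptsimple.OptionSet;
import joptsimple.OptionSpec;
import org.elasticsearch.cli.Command;
import org.elasticsearch.cli.ExitCodes;
import org.elasticsearch.cli.MultiCommand;
import org.elasticsearch.cli.Terminal;
import org.elasticsearch.cli.UserError;

/** A hypothetical CLI built on the Command/MultiCommand classes added in this PR. */
class ExampleCli extends MultiCommand {

    /** A subcommand that echoes its --name option. */
    static class GreetCommand extends Command {
        private final OptionSpec<String> nameOption;

        GreetCommand() {
            super("Prints a greeting");
            // options are declared on the jopt-simple parser inherited from Command
            nameOption = parser.accepts("name", "Who to greet").withRequiredArg();
        }

        @Override
        protected void execute(Terminal terminal, OptionSet options) throws Exception {
            String name = nameOption.value(options);
            if (name == null) {
                // user errors carry a POSIX-style exit code and are reported, not stack-traced
                throw new UserError(ExitCodes.USAGE, "Missing --name");
            }
            terminal.println("Hello " + name);
        }
    }

    ExampleCli() {
        super("An example multi-command tool");
        subcommands.put("greet", new GreetCommand());
    }

    public static void main(String[] args) throws Exception {
        // main(...) returns an exit status; Command#exit wraps System.exit
        exit(new ExampleCli().main(args, Terminal.DEFAULT));
    }
}
```

With this sketch, `greet --name es` would print `Hello es`, an unknown subcommand would return the `USAGE` exit code, and `-h` would print the generated help, mirroring the behavior exercised by `CommandTests` and `MultiCommandTests` above.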
{ "body": "On 1.7.1. This may be related to https://github.com/elastic/elasticsearch/issues/15432 , but it is unclear in #15432 what the cause was. So I am filing a separate ticket on this.\n\nIn short, it looks like when snapshot restore is running, if the indices are closed while it is operating on a shard, it will leave the snapshot restore request in a STARTED state.\n\n```\n \"restore\" : {\n \"snapshots\" : [ {\n \"snapshot\" : \"snapshot_name\",\n \"repository\" : \"repository_name\",\n \"state\" : \"STARTED\",\n```\n\nAnd shards that are in INIT state as reported by the restore request:\n\n```\n{\n \"index\" : \"pricing_2015121502\",\n \"shard\" : 1,\n \"state\" : \"INIT\"\n}\n```\n\nSo that when the end user tries to kick off another restore, it will fail:\n\n```\nfailed to restore snapshot\norg.elasticsearch.snapshots.ConcurrentSnapshotExecutionException: [repository_name:snapshot_name] Restore process is already running in this cluster\n at org.elasticsearch.snapshots.RestoreService$1.execute(RestoreService.java:174)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:374)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:196)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:162)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\n```\n\nBecause only 1 snapshot restore request can be run at any point in time. \n\nIt will be a good idea for us to implement a check to prevent users from closing indices while the restore operation is working on those shards which will help prevent this type of issue.\n", "comments": [ { "body": "@ywelsch please could you take a look at this\n", "created_at": "2016-02-02T13:53:31Z" }, { "body": "I verified the issue on master. It is present on all ES versions.\n\nTo fix this, there are two possible ways:\n- Fail closing an index if there are shards of that index still being restored.\n- Always let the index close operation succeed but fail the restore process of the shards of that index. This is more in line with what we currently do for index deletes that happen during the snapshot restore process.\n\n@clintongormley wdyt?\n", "created_at": "2016-02-03T17:11:04Z" }, { "body": "> Fail closing an index if there are shards of that index still being restored.\n\nAt least for this particular use case, this option is desirable because they didn't realize that the restore is still in progress (would like the restore to proceed), and not because they want to cancel a specific restore.\n", "created_at": "2016-02-03T17:32:23Z" }, { "body": "@ppf2 what's your take on closing the index during snapshot (in contrast to restore)? Currently, the close operation succeeds but we fail the shards that are closed.\n", "created_at": "2016-02-04T07:58:09Z" }, { "body": "> Currently, the close operation succeeds but we fail the shards that are closed.\n\nFor this specific report from the field, the indices did close, but the restore status from the cluster state shows a shard apparently stuck in INIT state - reopening the index did not resolve the issue, but deleting the index helped get the restore out of that stuck state. 
So somehow the close operation did not successfully fail the shards, but left them the restore procedure in a started state, thinking that it is still initializing the restore on one of the shards.\n\n```\n{\n \"index\" : \"pricing_2015121502\",\n \"shard\" : 1,\n \"state\" : \"INIT\"\n}\n```\n", "created_at": "2016-02-04T18:19:12Z" }, { "body": "@ppf2 I think you misunderstood me, I'm not claiming that the close operation successfully failed the restore process. My question was related to the comment of yours saying that it would be preferable to fail closing an index if there are shards of that index still being restored. I wanted to know whether you think the same applies to the snapshot case. So my question was: Should closing an index fail while the index is being snapshotted?\n\nMy goal here is to find out what the expected behavior should be in the following situations:\n- Deleting an index during snapshot operation of that index\n- Closing an index during snapshot operation of that index\n- Deleting an index during restore operation of that index\n- Closing an index during restore operation of that index\n\n@clintongormley care to chime in?\n", "created_at": "2016-02-04T19:42:08Z" }, { "body": "@ywelsch Ah sorry misinterpreted :) \n\nI am thinking for deletions, we can cancel that snapshot operation (only on shards of that index) and perform necessary cleanup so that it doesn't end up with partial files in the repository. Same thing with restore, cancel the restore of the shards for that index, and do some cleanup in the target cluster. For closing, maybe we can just prevent them from doing so and allow the snapshot or restore operation to complete? But yah, let's get input from @clintongormley @imotov \n", "created_at": "2016-02-04T20:33:52Z" }, { "body": "Here are my initial thoughts:\n\nFor restore:\n- Closing an index that is being restored from a snapshot should fail the close operation. There is no good reason to do allow this, as closing an index that is being restored makes the index unusable (it cannot be recovered). What the user probably intends to do is to delete the index.\n- Deleting an index that is being restored should cancel the restore operation for that index and delete the index. Deleting an index that is being restored would be a simple approach to cancel the restore process for that index. An open point for me is whether the restore information returned after finishing the restore process should take the deleted index into account in its statistics and status.\n\nFor snapshot:\n- Closing an index that is being snapshotted should only succeed if snapshot is marked as partial. If so, index should be closed and snapshot should abort snapshotting the shards of that index. If shards of the deleted index have been already successfully snapshotted in the meantime, they will remain in the snapshot.\n- Deleting an index that is being snapshotted should only succeed if snapshot is marked as partial. If so, index should be deleted and snapshot should abort snapshotting the shards of that index. If shards of the deleted index have been already successfully snapshotted in the meantime, they will remain in the snapshot.\n\nNote: If the user does not want to wait for non-partial snapshot to finish before executing close/delete, he has to cancel the snapshot operation by deleting the snapshot in progress.\n", "created_at": "2016-02-05T15:53:39Z" }, { "body": "+1 on the restore part but I don't think we should abort the snapshot if an index is closed or deleted. 
That might lead to unexpected data loss. When users use partial snapshot they have certain set of partially available indices that they have in mind (basically indices that were partially available at the beginning of the snapshot). The proposed behavior arbitrary extends this to any index that happened to be copied over when the close or delete operation is performed, I think we should keep a lock on the shard and finish the snapshot operation before closing it.\n", "created_at": "2016-02-05T16:16:18Z" }, { "body": "@imotov Can you elaborate a bit more on the use cases for partial snapshots? The only one I have in mind is the following: As part of a general backup mechanism, hourly / daily snapshots are triggered (e.g. by cron job). The idea of partial snapshots would then be to snapshot as much as possible, even if some indices / shards are unavailable at that point in time. My proposed solution would be in line with that idea.\n", "created_at": "2016-02-05T16:25:33Z" }, { "body": "@ywelsch I see. I thought about partial snapshot as an emergency override that someone would deploy during a catastrophic event when they have a half working cluster and would like to make a snapshot of whatever they have left before taking some drastic recovery measures. During these highly stressful events the user might inadvertently close a half backed up index while thinking that they have a full copy.\n", "created_at": "2016-02-09T14:00:32Z" }, { "body": "@ywelsch I agree with your proposals for restore, but I think we should fail to close or delete an index while a snapshot is in progress (partial or otherwise).\n\nI realise that this might mean that the user is blocked from deleting an index while a background cron job is doing a snapshot. Perhaps the exception can include a message about how to cancel the current snapshot, and provide the snapshot_id etc required to perform the cancellation in a script friendly format. That way, if the delete/close happens in a script, the user can code around the blocked action.\n", "created_at": "2016-02-13T20:36:03Z" }, { "body": "@imotov @clintongormley: Let me just recapitulate a bit:\n\nFor restore, we all agree that:\n- deleting an index that is being restored should cancel the restore operation for that index and delete the index. Same behavior as before.\n- closing an index that is being restored from a snapshot should fail the close operation. The previous behavior was to force-close the index and render it thereby unusable as it could not be recovered anymore upon opening.\n\nFor snapshot, we disagree. Currently, deleting an index that is being snapshotted results in two outcomes, depending on whether the snapshot was started as partial or not:\n- If it was started as partial and an index is deleted during snapshotting, the index is deleted and the snapshot still completes but has the snapshot state partial.\n- If it was started as non-partial, the index is deleted and the snapshot completes with the snapshot state failed.\n\n**In both cases, the delete operation succeeds and takes priority over snapshotting.** We currently even have a test (`SharedClusterSnapshotRestoreIT.testDeleteIndexDuringSnapshot`) that checks exactly both of these scenarios and asserts the current behavior.\n\nIn light of that, let me make my case again:\n- My suggestion is to change the behavior only for the case where the snapshot is not started as partial. 
In that case I want snapshotting to take priority and the delete to fail as the user explicitly requested to have a full snapshot of all the specified stuff. I apply the same reasoning to close.\n- Your suggestions break way harder with the current way of doing snapshots. You're essentially saying that snapshots should always take priority (independently of whether started as partial or not) over deletes / closing. As I said before this does not play nicely with daily background snapshots (e.g. on cloud services). As for the disaster scenario outlined by @imotov, I don't think that doing snapshots **during** a disaster is exactly the scenario we want to optimize for. @clintongormley I'm missing any kind of argument why you think we should go this way.\n\nIn conclusion, the snapshot/delete-close case needs more discussion before I feel comfortable with implementing it. To not block too long on this discussion, I can in the meanwhile make a PR for the restore case.\n", "created_at": "2016-03-03T15:06:54Z" }, { "body": "@ywelsch has convinced me of his argument: \n- Deleting an index during restore should cancel the restore of that index\n- Closing an index during restore should fail the close request\n- Closing or deleting an index during a full snapshot should fail the close/delete request\n- Closing or deleting an index during a partial snapshot should succeed and mark the snapshot as partial.\n", "created_at": "2016-03-08T08:46:09Z" }, { "body": "Is there any possible way or steps to not close the index while performing restore? If closing is the only way then it makes user life miserable to one by one closing indexes in order to perform restore.", "created_at": "2022-06-13T12:25:12Z" } ], "number": 16321, "title": "Prevent index closing while snapshot is restoring the indices" }
{ "body": "Closes #16321\n", "number": 17021, "review_comments": [], "title": "Fail closing or deleting indices during a full snapshot" }
{ "commits": [ { "message": "Fail closing or deleting indices during a full snapshot\n\nCloses #16321" } ], "files": [ { "diff": "@@ -69,15 +69,17 @@ public static class Entry {\n private final State state;\n private final SnapshotId snapshotId;\n private final boolean includeGlobalState;\n+ private final boolean partial;\n private final ImmutableOpenMap<ShardId, ShardSnapshotStatus> shards;\n private final List<String> indices;\n private final ImmutableOpenMap<String, List<ShardId>> waitingIndices;\n private final long startTime;\n \n- public Entry(SnapshotId snapshotId, boolean includeGlobalState, State state, List<String> indices, long startTime, ImmutableOpenMap<ShardId, ShardSnapshotStatus> shards) {\n+ public Entry(SnapshotId snapshotId, boolean includeGlobalState, boolean partial, State state, List<String> indices, long startTime, ImmutableOpenMap<ShardId, ShardSnapshotStatus> shards) {\n this.state = state;\n this.snapshotId = snapshotId;\n this.includeGlobalState = includeGlobalState;\n+ this.partial = partial;\n this.indices = indices;\n this.startTime = startTime;\n if (shards == null) {\n@@ -90,7 +92,7 @@ public Entry(SnapshotId snapshotId, boolean includeGlobalState, State state, Lis\n }\n \n public Entry(Entry entry, State state, ImmutableOpenMap<ShardId, ShardSnapshotStatus> shards) {\n- this(entry.snapshotId, entry.includeGlobalState, state, entry.indices, entry.startTime, shards);\n+ this(entry.snapshotId, entry.includeGlobalState, entry.partial, state, entry.indices, entry.startTime, shards);\n }\n \n public Entry(Entry entry, ImmutableOpenMap<ShardId, ShardSnapshotStatus> shards) {\n@@ -121,6 +123,10 @@ public boolean includeGlobalState() {\n return includeGlobalState;\n }\n \n+ public boolean partial() {\n+ return partial;\n+ }\n+\n public long startTime() {\n return startTime;\n }\n@@ -133,6 +139,7 @@ public boolean equals(Object o) {\n Entry entry = (Entry) o;\n \n if (includeGlobalState != entry.includeGlobalState) return false;\n+ if (partial != entry.partial) return false;\n if (startTime != entry.startTime) return false;\n if (!indices.equals(entry.indices)) return false;\n if (!shards.equals(entry.shards)) return false;\n@@ -148,6 +155,7 @@ public int hashCode() {\n int result = state.hashCode();\n result = 31 * result + snapshotId.hashCode();\n result = 31 * result + (includeGlobalState ? 1 : 0);\n+ result = 31 * result + (partial ? 
1 : 0);\n result = 31 * result + shards.hashCode();\n result = 31 * result + indices.hashCode();\n result = 31 * result + waitingIndices.hashCode();\n@@ -360,6 +368,7 @@ public SnapshotsInProgress readFrom(StreamInput in) throws IOException {\n for (int i = 0; i < entries.length; i++) {\n SnapshotId snapshotId = SnapshotId.readSnapshotId(in);\n boolean includeGlobalState = in.readBoolean();\n+ boolean partial = in.readBoolean();\n State state = State.fromValue(in.readByte());\n int indices = in.readVInt();\n List<String> indexBuilder = new ArrayList<>();\n@@ -375,7 +384,7 @@ public SnapshotsInProgress readFrom(StreamInput in) throws IOException {\n State shardState = State.fromValue(in.readByte());\n builder.put(shardId, new ShardSnapshotStatus(nodeId, shardState));\n }\n- entries[i] = new Entry(snapshotId, includeGlobalState, state, Collections.unmodifiableList(indexBuilder), startTime, builder.build());\n+ entries[i] = new Entry(snapshotId, includeGlobalState, partial, state, Collections.unmodifiableList(indexBuilder), startTime, builder.build());\n }\n return new SnapshotsInProgress(entries);\n }\n@@ -386,6 +395,7 @@ public void writeTo(StreamOutput out) throws IOException {\n for (Entry entry : entries) {\n entry.snapshotId().writeTo(out);\n out.writeBoolean(entry.includeGlobalState());\n+ out.writeBoolean(entry.partial());\n out.writeByte(entry.state().value());\n out.writeVInt(entry.indices().size());\n for (String index : entry.indices()) {\n@@ -406,6 +416,7 @@ static final class Fields {\n static final XContentBuilderString SNAPSHOTS = new XContentBuilderString(\"snapshots\");\n static final XContentBuilderString SNAPSHOT = new XContentBuilderString(\"snapshot\");\n static final XContentBuilderString INCLUDE_GLOBAL_STATE = new XContentBuilderString(\"include_global_state\");\n+ static final XContentBuilderString PARTIAL = new XContentBuilderString(\"partial\");\n static final XContentBuilderString STATE = new XContentBuilderString(\"state\");\n static final XContentBuilderString INDICES = new XContentBuilderString(\"indices\");\n static final XContentBuilderString START_TIME_MILLIS = new XContentBuilderString(\"start_time_millis\");\n@@ -431,6 +442,7 @@ public void toXContent(Entry entry, XContentBuilder builder, ToXContent.Params p\n builder.field(Fields.REPOSITORY, entry.snapshotId().getRepository());\n builder.field(Fields.SNAPSHOT, entry.snapshotId().getSnapshot());\n builder.field(Fields.INCLUDE_GLOBAL_STATE, entry.includeGlobalState());\n+ builder.field(Fields.PARTIAL, entry.partial());\n builder.field(Fields.STATE, entry.state());\n builder.startArray(Fields.INDICES);\n {", "filename": "core/src/main/java/org/elasticsearch/cluster/SnapshotsInProgress.java", "status": "modified" }, { "diff": "@@ -34,11 +34,12 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.FutureUtils;\n+import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.index.IndexNotFoundException;\n+import org.elasticsearch.snapshots.SnapshotsService;\n import org.elasticsearch.threadpool.ThreadPool;\n \n-import java.util.Arrays;\n-import java.util.Collection;\n+import java.util.Set;\n import java.util.concurrent.ScheduledFuture;\n import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.atomic.AtomicInteger;\n@@ -67,7 +68,7 @@ public MetaDataDeleteIndexService(Settings settings, ThreadPool threadPool, Clus\n }\n \n public void deleteIndices(final Request request, final 
Listener userListener) {\n- Collection<String> indices = Arrays.asList(request.indices);\n+ Set<String> indices = Sets.newHashSet(request.indices);\n final DeleteIndexListener listener = new DeleteIndexListener(userListener);\n \n clusterService.submitStateUpdateTask(\"delete-index \" + indices, new ClusterStateUpdateTask(Priority.URGENT) {\n@@ -84,6 +85,9 @@ public void onFailure(String source, Throwable t) {\n \n @Override\n public ClusterState execute(final ClusterState currentState) {\n+ // Check if index deletion conflicts with any running snapshots\n+ SnapshotsService.checkIndexDeletion(currentState, indices);\n+\n RoutingTable.Builder routingTableBuilder = RoutingTable.builder(currentState.routingTable());\n MetaData.Builder metaDataBuilder = MetaData.builder(currentState.metaData());\n ClusterBlocks.Builder clusterBlocksBuilder = ClusterBlocks.builder().blocks(currentState.blocks());", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataDeleteIndexService.java", "status": "modified" }, { "diff": "@@ -19,14 +19,12 @@\n \n package org.elasticsearch.cluster.metadata;\n \n-import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.admin.indices.close.CloseIndexClusterStateUpdateRequest;\n import org.elasticsearch.action.admin.indices.open.OpenIndexClusterStateUpdateRequest;\n import org.elasticsearch.cluster.AckedClusterStateUpdateTask;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n-import org.elasticsearch.cluster.RestoreInProgress;\n import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;\n import org.elasticsearch.cluster.block.ClusterBlock;\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n@@ -39,8 +37,9 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexNotFoundException;\n-import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.rest.RestStatus;\n+import org.elasticsearch.snapshots.RestoreService;\n+import org.elasticsearch.snapshots.SnapshotsService;\n \n import java.util.ArrayList;\n import java.util.Arrays;\n@@ -99,27 +98,10 @@ public ClusterState execute(ClusterState currentState) {\n return currentState;\n }\n \n- // Check if any of the indices to be closed are currently being restored from a snapshot and fail closing if such an index\n- // is found as closing an index that is being restored makes the index unusable (it cannot be recovered).\n- RestoreInProgress restore = currentState.custom(RestoreInProgress.TYPE);\n- if (restore != null) {\n- Set<String> indicesToFail = null;\n- for (RestoreInProgress.Entry entry : restore.entries()) {\n- for (ObjectObjectCursor<ShardId, RestoreInProgress.ShardRestoreStatus> shard : entry.shards()) {\n- if (!shard.value.state().completed()) {\n- if (indicesToClose.contains(shard.key.getIndexName())) {\n- if (indicesToFail == null) {\n- indicesToFail = new HashSet<>();\n- }\n- indicesToFail.add(shard.key.getIndexName());\n- }\n- }\n- }\n- }\n- if (indicesToFail != null) {\n- throw new IllegalArgumentException(\"Cannot close indices that are being restored: \" + indicesToFail);\n- }\n- }\n+ // Check if index closing conflicts with any running restores\n+ RestoreService.checkIndexClosing(currentState, indicesToClose);\n+ // Check if index closing conflicts with any running snapshots\n+ SnapshotsService.checkIndexClosing(currentState, indicesToClose);\n \n 
logger.info(\"closing indices [{}]\", indicesAsString);\n ", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexStateService.java", "status": "modified" }, { "diff": "@@ -774,6 +774,32 @@ private boolean failed(Snapshot snapshot, String index) {\n return false;\n }\n \n+ /**\n+ * Check if any of the indices to be closed are currently being restored from a snapshot and fail closing if such an index\n+ * is found as closing an index that is being restored makes the index unusable (it cannot be recovered).\n+ */\n+ public static void checkIndexClosing(ClusterState currentState, Set<String> indices) {\n+ RestoreInProgress restore = currentState.custom(RestoreInProgress.TYPE);\n+ if (restore != null) {\n+ Set<String> indicesToFail = null;\n+ for (RestoreInProgress.Entry entry : restore.entries()) {\n+ for (ObjectObjectCursor<ShardId, RestoreInProgress.ShardRestoreStatus> shard : entry.shards()) {\n+ if (!shard.value.state().completed()) {\n+ if (indices.contains(shard.key.getIndexName())) {\n+ if (indicesToFail == null) {\n+ indicesToFail = new HashSet<>();\n+ }\n+ indicesToFail.add(shard.key.getIndexName());\n+ }\n+ }\n+ }\n+ }\n+ if (indicesToFail != null) {\n+ throw new IllegalArgumentException(\"Cannot close indices that are being restored: \" + indicesToFail);\n+ }\n+ }\n+ }\n+\n /**\n * Adds restore completion listener\n * <p>", "filename": "core/src/main/java/org/elasticsearch/snapshots/RestoreService.java", "status": "modified" }, { "diff": "@@ -206,7 +206,7 @@ public ClusterState execute(ClusterState currentState) {\n // Store newSnapshot here to be processed in clusterStateProcessed\n List<String> indices = Arrays.asList(indexNameExpressionResolver.concreteIndices(currentState, request.indicesOptions(), request.indices()));\n logger.trace(\"[{}][{}] creating snapshot for indices [{}]\", request.repository(), request.name(), indices);\n- newSnapshot = new SnapshotsInProgress.Entry(snapshotId, request.includeGlobalState(), State.INIT, indices, System.currentTimeMillis(), null);\n+ newSnapshot = new SnapshotsInProgress.Entry(snapshotId, request.includeGlobalState(), request.partial(), State.INIT, indices, System.currentTimeMillis(), null);\n snapshots = new SnapshotsInProgress(newSnapshot);\n } else {\n // TODO: What should we do if a snapshot is already running?\n@@ -228,7 +228,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, final Cl\n threadPool.executor(ThreadPool.Names.SNAPSHOT).execute(new Runnable() {\n @Override\n public void run() {\n- beginSnapshot(newState, newSnapshot, request.partial, listener);\n+ beginSnapshot(newState, newSnapshot, request.partial(), listener);\n }\n });\n }\n@@ -1061,6 +1061,63 @@ private ImmutableOpenMap<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shard\n return builder.build();\n }\n \n+ /**\n+ * Check if any of the indices to be deleted are currently being snapshotted. Fail as deleting an index that is being\n+ * snapshotted (with partial == false) makes the snapshot fail.\n+ */\n+ public static void checkIndexDeletion(ClusterState currentState, Set<String> indices) {\n+ Set<String> indicesToFail = indicesToFailForCloseOrDeletion(currentState, indices);\n+ if (indicesToFail != null) {\n+ throw new IllegalArgumentException(\"Cannot delete indices that are being snapshotted: \" + indicesToFail +\n+ \". Try again after snapshot finishes or cancel the currently running snapshot.\");\n+ }\n+ }\n+\n+ /**\n+ * Check if any of the indices to be closed are currently being snapshotted. 
Fail as closing an index that is being\n+ * snapshotted (with partial == false) makes the snapshot fail.\n+ */\n+ public static void checkIndexClosing(ClusterState currentState, Set<String> indices) {\n+ Set<String> indicesToFail = indicesToFailForCloseOrDeletion(currentState, indices);\n+ if (indicesToFail != null) {\n+ throw new IllegalArgumentException(\"Cannot close indices that are being snapshotted: \" + indicesToFail +\n+ \". Try again after snapshot finishes or cancel the currently running snapshot.\");\n+ }\n+ }\n+\n+ private static Set<String> indicesToFailForCloseOrDeletion(ClusterState currentState, Set<String> indices) {\n+ SnapshotsInProgress snapshots = currentState.custom(SnapshotsInProgress.TYPE);\n+ Set<String> indicesToFail = null;\n+ if (snapshots != null) {\n+ for (final SnapshotsInProgress.Entry entry : snapshots.entries()) {\n+ if (entry.partial() == false) {\n+ if (entry.state() == State.INIT) {\n+ for (String index : entry.indices()) {\n+ if (indices.contains(index)) {\n+ if (indicesToFail == null) {\n+ indicesToFail = new HashSet<>();\n+ }\n+ indicesToFail.add(index);\n+ }\n+ }\n+ } else {\n+ for (ObjectObjectCursor<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shard : entry.shards()) {\n+ if (!shard.value.state().completed()) {\n+ if (indices.contains(shard.key.getIndexName())) {\n+ if (indicesToFail == null) {\n+ indicesToFail = new HashSet<>();\n+ }\n+ indicesToFail.add(shard.key.getIndexName());\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+ return indicesToFail;\n+ }\n+\n /**\n * Adds snapshot completion listener\n *\n@@ -1302,6 +1359,15 @@ public boolean includeGlobalState() {\n return includeGlobalState;\n }\n \n+ /**\n+ * Returns true if partial snapshot should be allowed\n+ *\n+ * @return true if partial snapshot should be allowed\n+ */\n+ public boolean partial() {\n+ return partial;\n+ }\n+\n /**\n * Returns master node timeout\n *", "filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java", "status": "modified" }, { "diff": "@@ -639,6 +639,7 @@ public ClusterState.Custom randomCreate(String name) {\n return new SnapshotsInProgress(new SnapshotsInProgress.Entry(\n new SnapshotId(randomName(\"repo\"), randomName(\"snap\")),\n randomBoolean(),\n+ randomBoolean(),\n SnapshotsInProgress.State.fromValue((byte) randomIntBetween(0, 6)),\n Collections.<String>emptyList(),\n Math.abs(randomLong()),", "filename": "core/src/test/java/org/elasticsearch/cluster/ClusterStateDiffIT.java", "status": "modified" }, { "diff": "@@ -1813,19 +1813,31 @@ public void testRecreateBlocksOnRestore() throws Exception {\n }\n }\n \n- public void testDeleteIndexDuringSnapshot() throws Exception {\n+ public void testCloseOrDeleteIndexDuringSnapshot() throws Exception {\n Client client = client();\n \n boolean allowPartial = randomBoolean();\n-\n logger.info(\"--> creating repository\");\n- assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+\n+ // only block on repo init if we have partial snapshot or we run into deadlock when acquiring shard locks for index deletion/closing\n+ boolean initBlocking = allowPartial || randomBoolean();\n+ if (initBlocking) {\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n .setType(\"mock\").setSettings(Settings.settingsBuilder()\n- .put(\"location\", randomRepoPath())\n- .put(\"compress\", randomBoolean())\n- .put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES)\n- .put(\"block_on_init\", true)\n+ .put(\"location\", randomRepoPath())\n+ .put(\"compress\", 
randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES)\n+ .put(\"block_on_init\", true)\n ));\n+ } else {\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"mock\").setSettings(Settings.settingsBuilder()\n+ .put(\"location\", randomRepoPath())\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES)\n+ .put(\"block_on_data\", true)\n+ ));\n+ }\n \n createIndex(\"test-idx-1\", \"test-idx-2\", \"test-idx-3\");\n ensureGreen();\n@@ -1843,25 +1855,61 @@ public void testDeleteIndexDuringSnapshot() throws Exception {\n \n logger.info(\"--> snapshot allow partial {}\", allowPartial);\n ListenableActionFuture<CreateSnapshotResponse> future = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\")\n- .setIndices(\"test-idx-*\").setWaitForCompletion(true).setPartial(allowPartial).execute();\n+ .setIndices(\"test-idx-*\").setWaitForCompletion(true).setPartial(allowPartial).execute();\n logger.info(\"--> wait for block to kick in\");\n- waitForBlock(internalCluster().getMasterName(), \"test-repo\", TimeValue.timeValueMinutes(1));\n- logger.info(\"--> delete some indices while snapshot is running\");\n- client.admin().indices().prepareDelete(\"test-idx-1\", \"test-idx-2\").get();\n- logger.info(\"--> unblock running master node\");\n- unblockNode(internalCluster().getMasterName());\n+ if (initBlocking) {\n+ waitForBlock(internalCluster().getMasterName(), \"test-repo\", TimeValue.timeValueMinutes(1));\n+ } else {\n+ waitForBlockOnAnyDataNode(\"test-repo\", TimeValue.timeValueMinutes(1));\n+ }\n+ if (allowPartial) {\n+ // partial snapshots allow close / delete operations\n+ if (randomBoolean()) {\n+ logger.info(\"--> delete index while partial snapshot is running\");\n+ client.admin().indices().prepareDelete(\"test-idx-1\").get();\n+ } else {\n+ logger.info(\"--> close index while partial snapshot is running\");\n+ client.admin().indices().prepareClose(\"test-idx-1\").get();\n+ }\n+ } else {\n+ // non-partial snapshots do not allow close / delete operations on indices where snapshot has not been completed\n+ if (randomBoolean()) {\n+ try {\n+ logger.info(\"--> delete index while non-partial snapshot is running\");\n+ client.admin().indices().prepareDelete(\"test-idx-1\").get();\n+ fail(\"Expected deleting index to fail during snapshot\");\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e.getMessage(), containsString(\"Cannot delete indices that are being snapshotted: [test-idx-1]\"));\n+ }\n+ } else {\n+ try {\n+ logger.info(\"--> close index while non-partial snapshot is running\");\n+ client.admin().indices().prepareClose(\"test-idx-1\").get();\n+ fail(\"Expected closing index to fail during snapshot\");\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e.getMessage(), containsString(\"Cannot close indices that are being snapshotted: [test-idx-1]\"));\n+ }\n+ }\n+ }\n+ if (initBlocking) {\n+ logger.info(\"--> unblock running master node\");\n+ unblockNode(internalCluster().getMasterName());\n+ } else {\n+ logger.info(\"--> unblock all data nodes\");\n+ unblockAllDataNodes(\"test-repo\");\n+ }\n logger.info(\"--> waiting for snapshot to finish\");\n CreateSnapshotResponse createSnapshotResponse = future.get();\n \n if (allowPartial) {\n- logger.info(\"Deleted index during snapshot, but allow partial\");\n+ logger.info(\"Deleted/Closed index during snapshot, but allow partial\");\n assertThat(createSnapshotResponse.getSnapshotInfo().state(), 
equalTo((SnapshotState.PARTIAL)));\n assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n assertThat(createSnapshotResponse.getSnapshotInfo().failedShards(), greaterThan(0));\n assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), lessThan(createSnapshotResponse.getSnapshotInfo().totalShards()));\n } else {\n- logger.info(\"Deleted index during snapshot and doesn't allow partial\");\n- assertThat(createSnapshotResponse.getSnapshotInfo().state(), equalTo((SnapshotState.FAILED)));\n+ logger.info(\"Snapshot successfully completed\");\n+ assertThat(createSnapshotResponse.getSnapshotInfo().state(), equalTo((SnapshotState.SUCCESS)));\n }\n }\n \n@@ -1960,7 +2008,7 @@ public ClusterState execute(ClusterState currentState) {\n shards.put(new ShardId(\"test-idx\", \"_na_\", 1), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n shards.put(new ShardId(\"test-idx\", \"_na_\", 2), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n List<Entry> entries = new ArrayList<>();\n- entries.add(new Entry(new SnapshotId(\"test-repo\", \"test-snap\"), true, State.ABORTED, Collections.singletonList(\"test-idx\"), System.currentTimeMillis(), shards.build()));\n+ entries.add(new Entry(new SnapshotId(\"test-repo\", \"test-snap\"), true, false, State.ABORTED, Collections.singletonList(\"test-idx\"), System.currentTimeMillis(), shards.build()));\n return ClusterState.builder(currentState).putCustom(SnapshotsInProgress.TYPE, new SnapshotsInProgress(Collections.unmodifiableList(entries))).build();\n }\n ", "filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@ your application to Elasticsearch 5.0.\n * <<breaking_50_scripting>>\n * <<breaking_50_term_vectors>>\n * <<breaking_50_security>>\n+* <<breaking_50_snapshot_restore>>\n \n [[breaking_50_search_changes]]\n === Warmers\n@@ -844,3 +845,12 @@ distributed document frequencies anymore.\n \n The option to disable the security manager `--security.manager.enabled` has been removed. In order to grant special\n permissions to elasticsearch users must tweak the local Java Security Policy.\n+\n+[[breaking_50_snapshot_restore]]\n+=== Snapshot/Restore\n+\n+==== Closing / deleting indices while running snapshot\n+\n+In previous versions of Elasticsearch, closing or deleting an index during a full snapshot would make the snapshot fail. This is now changed\n+by failing the close/delete index request instead. The behavior for partial snapshots remains unchanged: Closing or deleting an index during\n+a partial snapshot is still possible. The snapshot result is then marked as partial.", "filename": "docs/reference/migration/migrate_5_0.asciidoc", "status": "modified" } ] }
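The shared helper `indicesToFailForCloseOrDeletion` in the diff above is the heart of the change: a close or delete request conflicts with a running snapshot only when that snapshot is non-partial and still has unfinished work on one of the requested indices. The standalone sketch below restates that decision logic with plain Java collections; `SnapshotEntry`, `shardCompleted`, and `conflictingIndices` are simplified stand-ins invented for illustration, not the real Elasticsearch cluster-state classes.

```java
import java.util.*;

// Hypothetical, simplified stand-in for SnapshotsInProgress.Entry.
final class SnapshotEntry {
    final boolean partial;
    final boolean initializing;                 // still in State.INIT, shards not assigned yet
    final List<String> indices;                 // indices named by the snapshot request
    final Map<String, Boolean> shardCompleted;  // index name -> all shards of that index done?

    SnapshotEntry(boolean partial, boolean initializing,
                  List<String> indices, Map<String, Boolean> shardCompleted) {
        this.partial = partial;
        this.initializing = initializing;
        this.indices = indices;
        this.shardCompleted = shardCompleted;
    }
}

final class SnapshotConflictCheck {
    /** Returns the indices whose close/delete would break a running non-partial snapshot. */
    static Set<String> conflictingIndices(List<SnapshotEntry> inProgress, Set<String> requested) {
        Set<String> conflicts = new HashSet<>();
        for (SnapshotEntry entry : inProgress) {
            if (entry.partial) {
                continue; // partial snapshots tolerate indices disappearing mid-flight
            }
            if (entry.initializing) {
                // shard statuses are not known yet, so every requested index of the snapshot counts
                for (String index : entry.indices) {
                    if (requested.contains(index)) {
                        conflicts.add(index);
                    }
                }
            } else {
                // only indices that still have unfinished shard snapshots conflict
                for (Map.Entry<String, Boolean> shard : entry.shardCompleted.entrySet()) {
                    if (!shard.getValue() && requested.contains(shard.getKey())) {
                        conflicts.add(shard.getKey());
                    }
                }
            }
        }
        return conflicts;
    }

    public static void main(String[] args) {
        SnapshotEntry full = new SnapshotEntry(false, false,
                Arrays.asList("test-idx-1", "test-idx-2"),
                Map.of("test-idx-1", false, "test-idx-2", true));
        Set<String> conflicts = conflictingIndices(List.of(full), Set.of("test-idx-1"));
        // prints: Cannot delete indices that are being snapshotted: [test-idx-1]
        System.out.println("Cannot delete indices that are being snapshotted: " + conflicts);
    }
}
```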
{ "body": "Boolean params in the REST API are coerced from eg `\"true\"` to a real boolean. The `explain` parameter in `_analyze` API fails, eg:\n\n```\nGET /_analyze?pretty=1\n{\n \"attributes\" : [\n \"keyword\"\n ],\n \"filters\" : [\n \"snowball\"\n ],\n \"tokenizer\" : \"standard\",\n \"text\" : \"<text>This is troubled</text>\",\n \"explain\" : \"true\",\n \"char_filters\" : [\n \"html_strip\"\n ]\n}\n```\n\nfails with \n\n```\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"illegal_argument_exception\",\n \"reason\": \"Unknown parameter [explain] in request body or parameter is of the wrong type[VALUE_STRING] \"\n }\n ],\n \"type\": \"illegal_argument_exception\",\n \"reason\": \"Unknown parameter [explain] in request body or parameter is of the wrong type[VALUE_STRING] \"\n },\n \"status\": 400\n}\n```\n", "comments": [], "number": 16925, "title": "Explain param to Analyze API not coerced" }
{ "body": "Move some test methods from AnalylzeActionIT to RestAnalyzeActionTest\nAllow string explain param if it can parse\nFix wrong param name in rest-api-spec\n\nCloses #16925\n", "number": 16977, "review_comments": [ { "body": "Request vs Reuqest\n\nFunny. The typo comes from the old IT test :)\n", "created_at": "2016-03-07T05:30:37Z" }, { "body": "Does it need to be \"true\"? Or true works as well?\nI prefer the later.\n", "created_at": "2016-03-07T05:32:27Z" }, { "body": "Of course, both works. Next test case uses true.\nhttps://github.com/elastic/elasticsearch/pull/16977/files#diff-7b60c5abb786ffd29a08a764da0360efL88\n", "created_at": "2016-03-07T06:00:43Z" }, { "body": "Good catch! I will fix, and I found some unused import the old IT :)\n", "created_at": "2016-03-07T06:12:57Z" } ], "title": "Analysis : Allow string explain param in JSON" }
{ "commits": [], "files": [] }
{ "body": "**Elasticsearch version**: 2.2.0\n\n**JVM version**: 8.0.???\n\n**OS version**: CentOs\n\n**Description of the problem including expected versus actual behavior**:\n\n**Steps to reproduce**:\n\n```\nPUT t/t/_bulk\n{\"index\":{}}\n{\"a\":\"x\",\"b\":\"m\",\"v\":2}\n{\"index\":{}}\n{\"a\":\"x\",\"b\":\"n\",\"v\":3}\n{\"index\":{}}\n{\"a\":\"x\",\"b\":null,\"v\":5}\n{\"index\":{}}\n{\"a\":\"y\",\"b\":\"m\",\"v\":7}\n{\"index\":{}}\n{\"a\":\"y\",\"b\":\"n\",\"v\":11}\n{\"index\":{}}\n{\"a\":\"y\",\"b\":null,\"v\":13}\n{\"index\":{}}\n{\"a\":null,\"b\":\"m\",\"v\":17}\n{\"index\":{}}\n{\"a\":null,\"b\":\"n\",\"v\":19}\n{\"index\":{}}\n{\"a\":\"x\",\"b\":\"m\",\"v\":27}\n{\"index\":{}}\n{\"a\":\"y\",\"b\":\"n\",\"v\":39}\n\nGET t/t/_search\n{\n \"aggs\": {\n \"_match\": {\n \"aggs\": {\n \"_missing\": {\n \"aggs\": {\n \"v\": {\n \"extended_stats\": {\n \"field\": \"v\"\n }\n }\n },\n \"missing\": {\n \"field\": \"a\"\n }\n }\n },\n \"terms\": {\n \"field\": \"b\",\n \"size\": 10\n }\n }\n },\n \"size\": 0\n}\n```\n\nGet an error (and some results)\n\n```\n{\n \"took\":1,\n \"timed_out\":false,\n \"_shards\":{\n \"total\":3,\n \"successful\":1,\n \"failed\":2,\n \"failures\":[{\n \"shard\":0,\n \"index\":\"testing_e18cf562b420160225_183510\",\n \"node\":\"QlVbQTlvT5O_fe9x4LxvWQ\",\n \"reason\":{\n \"type\":\"array_index_out_of_bounds_exception\",\n \"reason\":null\n }\n }]\n }, \n // results clipped\n```\n\nThe `stats` aggregate does not have this problem.\n", "comments": [ { "body": "Our team has run into this issue too (ES 2.2, RedHat). In one of the failure modes the entire ES cluster operation is impaired and requires the cluster be fully stopped and restarted to recover. The workaround on our end has been to drop the batch size dramatically (to 25!).\n", "created_at": "2016-03-03T16:07:11Z" } ], "number": 16812, "title": "Unexpected `array_index_out_of_bounds_exception`" }
{ "body": "Closes #16812\n\nI think, this change was missed in #9544\n", "number": 16972, "review_comments": [], "title": "Build empty extended stats aggregation if no docs collected for bucket" }
{ "commits": [ { "message": "Build empty extended stats aggregation if no docs are collected for bucket #16812" } ], "files": [ { "diff": "@@ -167,14 +167,12 @@ private double variance(long owningBucketOrd) {\n }\n \n @Override\n- public InternalAggregation buildAggregation(long owningBucketOrdinal) {\n- if (valuesSource == null) {\n- return new InternalExtendedStats(name, 0, 0d, Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY, 0d, 0d, formatter,\n- pipelineAggregators(), metaData());\n+ public InternalAggregation buildAggregation(long bucket) {\n+ if (valuesSource == null || bucket >= counts.size()) {\n+ return buildEmptyAggregation();\n }\n- assert owningBucketOrdinal < counts.size();\n- return new InternalExtendedStats(name, counts.get(owningBucketOrdinal), sums.get(owningBucketOrdinal),\n- mins.get(owningBucketOrdinal), maxes.get(owningBucketOrdinal), sumOfSqrs.get(owningBucketOrdinal), sigma, formatter,\n+ return new InternalExtendedStats(name, counts.get(bucket), sums.get(bucket),\n+ mins.get(bucket), maxes.get(bucket), sumOfSqrs.get(bucket), sigma, formatter,\n pipelineAggregators(), metaData());\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregator.java", "status": "modified" }, { "diff": "@@ -26,6 +26,8 @@\n import org.elasticsearch.script.groovy.GroovyPlugin;\n import org.elasticsearch.search.aggregations.bucket.global.Global;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n+import org.elasticsearch.search.aggregations.bucket.missing.Missing;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n import org.elasticsearch.search.aggregations.metrics.AbstractNumericTestCase;\n import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats;\n \n@@ -38,6 +40,8 @@\n import static org.elasticsearch.search.aggregations.AggregationBuilders.extendedStats;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.global;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.missing;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.is;\n@@ -498,6 +502,42 @@ public void testScriptMultiValuedWithParams() throws Exception {\n checkUpperLowerBounds(stats, sigma);\n }\n \n+ public void testEmptySubAggregation() {\n+ SearchResponse searchResponse = client().prepareSearch(\"idx\")\n+ .setQuery(matchAllQuery())\n+ .addAggregation(terms(\"value\").field(\"value\")\n+ .subAggregation(missing(\"values\").field(\"values\")\n+ .subAggregation(extendedStats(\"stats\").field(\"value\"))))\n+ .execute().actionGet();\n+\n+ assertHitCount(searchResponse, 10);\n+\n+ Terms terms = searchResponse.getAggregations().get(\"value\");\n+ assertThat(terms, notNullValue());\n+ assertThat(terms.getBuckets().size(), equalTo(10));\n+\n+ for (Terms.Bucket bucket : terms.getBuckets()) {\n+ assertThat(bucket.getDocCount(), equalTo(1L));\n+\n+ Missing missing = bucket.getAggregations().get(\"values\");\n+ assertThat(missing, notNullValue());\n+ assertThat(missing.getDocCount(), equalTo(0L));\n+\n+ ExtendedStats stats = missing.getAggregations().get(\"stats\");\n+ assertThat(stats, notNullValue());\n+ assertThat(stats.getName(), equalTo(\"stats\"));\n+ 
assertThat(stats.getSumOfSquares(), equalTo(0.0));\n+ assertThat(stats.getCount(), equalTo(0L));\n+ assertThat(stats.getSum(), equalTo(0.0));\n+ assertThat(stats.getMin(), equalTo(Double.POSITIVE_INFINITY));\n+ assertThat(stats.getMax(), equalTo(Double.NEGATIVE_INFINITY));\n+ assertThat(Double.isNaN(stats.getStdDeviation()), is(true));\n+ assertThat(Double.isNaN(stats.getAvg()), is(true));\n+ assertThat(Double.isNaN(stats.getStdDeviationBound(ExtendedStats.Bounds.UPPER)), is(true));\n+ assertThat(Double.isNaN(stats.getStdDeviationBound(ExtendedStats.Bounds.LOWER)), is(true));\n+ }\n+ }\n+\n \n private void assertShardExecutionState(SearchResponse response, int expectedFailures) throws Exception {\n ShardSearchFailure[] failures = response.getShardFailures();\n@@ -515,4 +555,4 @@ private void checkUpperLowerBounds(ExtendedStats stats, double sigma) {\n assertThat(stats.getStdDeviationBound(ExtendedStats.Bounds.LOWER), equalTo(stats.getAvg() - (stats.getStdDeviation() * sigma)));\n }\n \n-}\n\\ No newline at end of file\n+}", "filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/ExtendedStatsTests.java", "status": "modified" } ] }
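The fix above replaces the `assert owningBucketOrdinal < counts.size()` followed by an unconditional array read with a guard that returns the empty aggregation whenever the bucket ordinal was never collected, which is exactly what happens for the empty `missing` sub-bucket in the reproduction. The sketch below shows, with invented class and field names, what that empty result looks like and why the derived values come out as NaN, matching the assertions added in `testEmptySubAggregation`.

```java
/**
 * Minimal illustration of an "empty" extended-stats result for a bucket that collected
 * no documents: count 0, sum 0, min = +Infinity, max = -Infinity, and NaN for every
 * derived value. This is a standalone sketch, not the InternalExtendedStats class.
 */
final class ExtendedStatsSketch {
    final long count;
    final double sum, min, max, sumOfSquares;

    private ExtendedStatsSketch(long count, double sum, double min, double max, double sumOfSquares) {
        this.count = count;
        this.sum = sum;
        this.min = min;
        this.max = max;
        this.sumOfSquares = sumOfSquares;
    }

    static ExtendedStatsSketch empty() {
        return new ExtendedStatsSketch(0, 0d, Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY, 0d);
    }

    double avg()      { return sum / count; }                                   // NaN when count == 0
    double variance() { return (sumOfSquares - sum * sum / count) / count; }    // NaN when count == 0
    double stdDev()   { return Math.sqrt(variance()); }

    /** Mirrors the guard added in the diff: out-of-range bucket ordinals yield the empty result. */
    static ExtendedStatsSketch forBucket(long bucket, long collectedBuckets, ExtendedStatsSketch[] stats) {
        if (bucket >= collectedBuckets) {
            return empty();
        }
        return stats[(int) bucket];
    }

    public static void main(String[] args) {
        ExtendedStatsSketch s = forBucket(5, 3, new ExtendedStatsSketch[3]);
        System.out.println(s.count + " " + s.min + " " + s.max + " " + s.avg()); // 0 Infinity -Infinity NaN
    }
}
```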
{ "body": "On 1.7.1. This may be related to https://github.com/elastic/elasticsearch/issues/15432 , but it is unclear in #15432 what the cause was. So I am filing a separate ticket on this.\n\nIn short, it looks like when snapshot restore is running, if the indices are closed while it is operating on a shard, it will leave the snapshot restore request in a STARTED state.\n\n```\n \"restore\" : {\n \"snapshots\" : [ {\n \"snapshot\" : \"snapshot_name\",\n \"repository\" : \"repository_name\",\n \"state\" : \"STARTED\",\n```\n\nAnd shards that are in INIT state as reported by the restore request:\n\n```\n{\n \"index\" : \"pricing_2015121502\",\n \"shard\" : 1,\n \"state\" : \"INIT\"\n}\n```\n\nSo that when the end user tries to kick off another restore, it will fail:\n\n```\nfailed to restore snapshot\norg.elasticsearch.snapshots.ConcurrentSnapshotExecutionException: [repository_name:snapshot_name] Restore process is already running in this cluster\n at org.elasticsearch.snapshots.RestoreService$1.execute(RestoreService.java:174)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:374)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:196)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:162)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\n```\n\nBecause only 1 snapshot restore request can be run at any point in time. \n\nIt will be a good idea for us to implement a check to prevent users from closing indices while the restore operation is working on those shards which will help prevent this type of issue.\n", "comments": [ { "body": "@ywelsch please could you take a look at this\n", "created_at": "2016-02-02T13:53:31Z" }, { "body": "I verified the issue on master. It is present on all ES versions.\n\nTo fix this, there are two possible ways:\n- Fail closing an index if there are shards of that index still being restored.\n- Always let the index close operation succeed but fail the restore process of the shards of that index. This is more in line with what we currently do for index deletes that happen during the snapshot restore process.\n\n@clintongormley wdyt?\n", "created_at": "2016-02-03T17:11:04Z" }, { "body": "> Fail closing an index if there are shards of that index still being restored.\n\nAt least for this particular use case, this option is desirable because they didn't realize that the restore is still in progress (would like the restore to proceed), and not because they want to cancel a specific restore.\n", "created_at": "2016-02-03T17:32:23Z" }, { "body": "@ppf2 what's your take on closing the index during snapshot (in contrast to restore)? Currently, the close operation succeeds but we fail the shards that are closed.\n", "created_at": "2016-02-04T07:58:09Z" }, { "body": "> Currently, the close operation succeeds but we fail the shards that are closed.\n\nFor this specific report from the field, the indices did close, but the restore status from the cluster state shows a shard apparently stuck in INIT state - reopening the index did not resolve the issue, but deleting the index helped get the restore out of that stuck state. 
So somehow the close operation did not successfully fail the shards, but left them the restore procedure in a started state, thinking that it is still initializing the restore on one of the shards.\n\n```\n{\n \"index\" : \"pricing_2015121502\",\n \"shard\" : 1,\n \"state\" : \"INIT\"\n}\n```\n", "created_at": "2016-02-04T18:19:12Z" }, { "body": "@ppf2 I think you misunderstood me, I'm not claiming that the close operation successfully failed the restore process. My question was related to the comment of yours saying that it would be preferable to fail closing an index if there are shards of that index still being restored. I wanted to know whether you think the same applies to the snapshot case. So my question was: Should closing an index fail while the index is being snapshotted?\n\nMy goal here is to find out what the expected behavior should be in the following situations:\n- Deleting an index during snapshot operation of that index\n- Closing an index during snapshot operation of that index\n- Deleting an index during restore operation of that index\n- Closing an index during restore operation of that index\n\n@clintongormley care to chime in?\n", "created_at": "2016-02-04T19:42:08Z" }, { "body": "@ywelsch Ah sorry misinterpreted :) \n\nI am thinking for deletions, we can cancel that snapshot operation (only on shards of that index) and perform necessary cleanup so that it doesn't end up with partial files in the repository. Same thing with restore, cancel the restore of the shards for that index, and do some cleanup in the target cluster. For closing, maybe we can just prevent them from doing so and allow the snapshot or restore operation to complete? But yah, let's get input from @clintongormley @imotov \n", "created_at": "2016-02-04T20:33:52Z" }, { "body": "Here are my initial thoughts:\n\nFor restore:\n- Closing an index that is being restored from a snapshot should fail the close operation. There is no good reason to do allow this, as closing an index that is being restored makes the index unusable (it cannot be recovered). What the user probably intends to do is to delete the index.\n- Deleting an index that is being restored should cancel the restore operation for that index and delete the index. Deleting an index that is being restored would be a simple approach to cancel the restore process for that index. An open point for me is whether the restore information returned after finishing the restore process should take the deleted index into account in its statistics and status.\n\nFor snapshot:\n- Closing an index that is being snapshotted should only succeed if snapshot is marked as partial. If so, index should be closed and snapshot should abort snapshotting the shards of that index. If shards of the deleted index have been already successfully snapshotted in the meantime, they will remain in the snapshot.\n- Deleting an index that is being snapshotted should only succeed if snapshot is marked as partial. If so, index should be deleted and snapshot should abort snapshotting the shards of that index. If shards of the deleted index have been already successfully snapshotted in the meantime, they will remain in the snapshot.\n\nNote: If the user does not want to wait for non-partial snapshot to finish before executing close/delete, he has to cancel the snapshot operation by deleting the snapshot in progress.\n", "created_at": "2016-02-05T15:53:39Z" }, { "body": "+1 on the restore part but I don't think we should abort the snapshot if an index is closed or deleted. 
That might lead to unexpected data loss. When users use partial snapshot they have certain set of partially available indices that they have in mind (basically indices that were partially available at the beginning of the snapshot). The proposed behavior arbitrary extends this to any index that happened to be copied over when the close or delete operation is performed, I think we should keep a lock on the shard and finish the snapshot operation before closing it.\n", "created_at": "2016-02-05T16:16:18Z" }, { "body": "@imotov Can you elaborate a bit more on the use cases for partial snapshots? The only one I have in mind is the following: As part of a general backup mechanism, hourly / daily snapshots are triggered (e.g. by cron job). The idea of partial snapshots would then be to snapshot as much as possible, even if some indices / shards are unavailable at that point in time. My proposed solution would be in line with that idea.\n", "created_at": "2016-02-05T16:25:33Z" }, { "body": "@ywelsch I see. I thought about partial snapshot as an emergency override that someone would deploy during a catastrophic event when they have a half working cluster and would like to make a snapshot of whatever they have left before taking some drastic recovery measures. During these highly stressful events the user might inadvertently close a half backed up index while thinking that they have a full copy.\n", "created_at": "2016-02-09T14:00:32Z" }, { "body": "@ywelsch I agree with your proposals for restore, but I think we should fail to close or delete an index while a snapshot is in progress (partial or otherwise).\n\nI realise that this might mean that the user is blocked from deleting an index while a background cron job is doing a snapshot. Perhaps the exception can include a message about how to cancel the current snapshot, and provide the snapshot_id etc required to perform the cancellation in a script friendly format. That way, if the delete/close happens in a script, the user can code around the blocked action.\n", "created_at": "2016-02-13T20:36:03Z" }, { "body": "@imotov @clintongormley: Let me just recapitulate a bit:\n\nFor restore, we all agree that:\n- deleting an index that is being restored should cancel the restore operation for that index and delete the index. Same behavior as before.\n- closing an index that is being restored from a snapshot should fail the close operation. The previous behavior was to force-close the index and render it thereby unusable as it could not be recovered anymore upon opening.\n\nFor snapshot, we disagree. Currently, deleting an index that is being snapshotted results in two outcomes, depending on whether the snapshot was started as partial or not:\n- If it was started as partial and an index is deleted during snapshotting, the index is deleted and the snapshot still completes but has the snapshot state partial.\n- If it was started as non-partial, the index is deleted and the snapshot completes with the snapshot state failed.\n\n**In both cases, the delete operation succeeds and takes priority over snapshotting.** We currently even have a test (`SharedClusterSnapshotRestoreIT.testDeleteIndexDuringSnapshot`) that checks exactly both of these scenarios and asserts the current behavior.\n\nIn light of that, let me make my case again:\n- My suggestion is to change the behavior only for the case where the snapshot is not started as partial. 
In that case I want snapshotting to take priority and the delete to fail as the user explicitly requested to have a full snapshot of all the specified stuff. I apply the same reasoning to close.\n- Your suggestions break way harder with the current way of doing snapshots. You're essentially saying that snapshots should always take priority (independently of whether started as partial or not) over deletes / closing. As I said before this does not play nicely with daily background snapshots (e.g. on cloud services). As for the disaster scenario outlined by @imotov, I don't think that doing snapshots **during** a disaster is exactly the scenario we want to optimize for. @clintongormley I'm missing any kind of argument why you think we should go this way.\n\nIn conclusion, the snapshot/delete-close case needs more discussion before I feel comfortable with implementing it. To not block too long on this discussion, I can in the meanwhile make a PR for the restore case.\n", "created_at": "2016-03-03T15:06:54Z" }, { "body": "@ywelsch has convinced me of his argument: \n- Deleting an index during restore should cancel the restore of that index\n- Closing an index during restore should fail the close request\n- Closing or deleting an index during a full snapshot should fail the close/delete request\n- Closing or deleting an index during a partial snapshot should succeed and mark the snapshot as partial.\n", "created_at": "2016-03-08T08:46:09Z" }, { "body": "Is there any possible way or steps to not close the index while performing restore? If closing is the only way then it makes user life miserable to one by one closing indexes in order to perform restore.", "created_at": "2022-06-13T12:25:12Z" } ], "number": 16321, "title": "Prevent index closing while snapshot is restoring the indices" }
{ "body": "Closing an index that is being restored from a snapshot should fail the close operation. The previous behavior would close the index and the snapshot process would continue running indefinitely. \n\nRelates to #16321 \n", "number": 16933, "review_comments": [ { "body": "Why not use awaitBusy here?\n", "created_at": "2016-03-07T20:32:17Z" }, { "body": "good idea. I have switched to awaitBusy here.\n", "created_at": "2016-03-08T13:21:17Z" } ], "title": "Prevent closing index during snapshot restore" }
{ "commits": [ { "message": "Prevent closing index during snapshot restore\n\nCloses #16933" } ], "files": [ { "diff": "@@ -19,12 +19,14 @@\n \n package org.elasticsearch.cluster.metadata;\n \n+import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.admin.indices.close.CloseIndexClusterStateUpdateRequest;\n import org.elasticsearch.action.admin.indices.open.OpenIndexClusterStateUpdateRequest;\n import org.elasticsearch.cluster.AckedClusterStateUpdateTask;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.RestoreInProgress;\n import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;\n import org.elasticsearch.cluster.block.ClusterBlock;\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n@@ -37,11 +39,14 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexNotFoundException;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.rest.RestStatus;\n \n import java.util.ArrayList;\n import java.util.Arrays;\n+import java.util.HashSet;\n import java.util.List;\n+import java.util.Set;\n \n /**\n * Service responsible for submitting open/close index requests\n@@ -78,7 +83,7 @@ protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {\n \n @Override\n public ClusterState execute(ClusterState currentState) {\n- List<String> indicesToClose = new ArrayList<>();\n+ Set<String> indicesToClose = new HashSet<>();\n for (String index : request.indices()) {\n IndexMetaData indexMetaData = currentState.metaData().index(index);\n if (indexMetaData == null) {\n@@ -94,6 +99,28 @@ public ClusterState execute(ClusterState currentState) {\n return currentState;\n }\n \n+ // Check if any of the indices to be closed are currently being restored from a snapshot and fail closing if such an index\n+ // is found as closing an index that is being restored makes the index unusable (it cannot be recovered).\n+ RestoreInProgress restore = currentState.custom(RestoreInProgress.TYPE);\n+ if (restore != null) {\n+ Set<String> indicesToFail = null;\n+ for (RestoreInProgress.Entry entry : restore.entries()) {\n+ for (ObjectObjectCursor<ShardId, RestoreInProgress.ShardRestoreStatus> shard : entry.shards()) {\n+ if (!shard.value.state().completed()) {\n+ if (indicesToClose.contains(shard.key.getIndexName())) {\n+ if (indicesToFail == null) {\n+ indicesToFail = new HashSet<>();\n+ }\n+ indicesToFail.add(shard.key.getIndexName());\n+ }\n+ }\n+ }\n+ }\n+ if (indicesToFail != null) {\n+ throw new IllegalArgumentException(\"Cannot close indices that are being restored: \" + indicesToFail);\n+ }\n+ }\n+\n logger.info(\"closing indices [{}]\", indicesAsString);\n \n MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexStateService.java", "status": "modified" }, { "diff": "@@ -137,6 +137,32 @@ public static String blockNodeWithIndex(String index) {\n return null;\n }\n \n+ public static void blockAllDataNodes(String repository) {\n+ for(RepositoriesService repositoriesService : internalCluster().getDataNodeInstances(RepositoriesService.class)) {\n+ ((MockRepository)repositoriesService.repository(repository)).blockOnDataFiles(true);\n+ }\n+ }\n+\n+ public static void unblockAllDataNodes(String repository) {\n+ for(RepositoriesService 
repositoriesService : internalCluster().getDataNodeInstances(RepositoriesService.class)) {\n+ ((MockRepository)repositoriesService.repository(repository)).unblock();\n+ }\n+ }\n+\n+ public void waitForBlockOnAnyDataNode(String repository, TimeValue timeout) throws InterruptedException {\n+ if (false == awaitBusy(() -> {\n+ for(RepositoriesService repositoriesService : internalCluster().getDataNodeInstances(RepositoriesService.class)) {\n+ MockRepository mockRepository = (MockRepository) repositoriesService.repository(repository);\n+ if (mockRepository.blocked()) {\n+ return true;\n+ }\n+ }\n+ return false;\n+ }, timeout.millis(), TimeUnit.MILLISECONDS)) {\n+ fail(\"Timeout waiting for repository block on any data node!!!\");\n+ }\n+ }\n+\n public static void unblockNode(String node) {\n ((MockRepository)internalCluster().getInstance(RepositoriesService.class, node).repository(\"test-repo\")).unblock();\n }", "filename": "core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java", "status": "modified" }, { "diff": "@@ -1865,6 +1865,66 @@ public void testDeleteIndexDuringSnapshot() throws Exception {\n }\n }\n \n+ public void testCloseIndexDuringRestore() throws Exception {\n+ Client client = client();\n+\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"mock\").setSettings(Settings.settingsBuilder()\n+ .put(\"location\", randomRepoPath())\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES)\n+ ));\n+\n+ createIndex(\"test-idx-1\", \"test-idx-2\");\n+ ensureGreen();\n+\n+ logger.info(\"--> indexing some data\");\n+ for (int i = 0; i < 100; i++) {\n+ index(\"test-idx-1\", \"doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n+ index(\"test-idx-2\", \"doc\", Integer.toString(i), \"foo\", \"baz\" + i);\n+ }\n+ refresh();\n+ assertThat(client.prepareSearch(\"test-idx-1\").setSize(0).get().getHits().totalHits(), equalTo(100L));\n+ assertThat(client.prepareSearch(\"test-idx-2\").setSize(0).get().getHits().totalHits(), equalTo(100L));\n+\n+ logger.info(\"--> snapshot\");\n+ assertThat(client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\")\n+ .setIndices(\"test-idx-*\").setWaitForCompletion(true).get().getSnapshotInfo().state(), equalTo(SnapshotState.SUCCESS));\n+\n+ logger.info(\"--> deleting indices before restoring\");\n+ assertAcked(client.admin().indices().prepareDelete(\"test-idx-*\").get());\n+\n+ blockAllDataNodes(\"test-repo\");\n+ logger.info(\"--> execution will be blocked on all data nodes\");\n+\n+ logger.info(\"--> start restore\");\n+ ListenableActionFuture<RestoreSnapshotResponse> restoreFut = client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\")\n+ .setWaitForCompletion(true)\n+ .execute();\n+\n+ logger.info(\"--> waiting for block to kick in\");\n+ waitForBlockOnAnyDataNode(\"test-repo\", TimeValue.timeValueSeconds(60));\n+\n+ logger.info(\"--> close index while restore is running\");\n+ try {\n+ client.admin().indices().prepareClose(\"test-idx-1\").get();\n+ fail(\"Expected closing index to fail during restore\");\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e.getMessage(), containsString(\"Cannot close indices that are being restored: [test-idx-1]\"));\n+ }\n+\n+ logger.info(\"--> unblocking all data nodes\");\n+ unblockAllDataNodes(\"test-repo\");\n+\n+ logger.info(\"--> wait for restore to finish\");\n+ RestoreSnapshotResponse restoreSnapshotResponse = 
restoreFut.get();\n+ logger.info(\"--> check that all shards were recovered\");\n+ assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n+ assertThat(restoreSnapshotResponse.getRestoreInfo().successfulShards(), greaterThan(0));\n+ assertThat(restoreSnapshotResponse.getRestoreInfo().failedShards(), equalTo(0));\n+ }\n+\n public void testDeleteOrphanSnapshot() throws Exception {\n Client client = client();\n ", "filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java", "status": "modified" } ] }
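`waitForBlockOnAnyDataNode` in the test-infrastructure diff above leans on the test framework's `awaitBusy` to poll until some data node's mock repository reports itself blocked, rather than sleeping for a fixed interval. A generic standalone version of that polling idiom might look like the following; it is only a sketch, not the ESTestCase implementation, and the doubling back-off capped at 500 ms is an assumption for the example.

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

final class AwaitBusy {
    /** Re-checks a condition with a growing back-off until it holds or the timeout elapses. */
    static boolean awaitBusy(BooleanSupplier condition, long timeout, TimeUnit unit)
            throws InterruptedException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        long sleepMillis = 1;
        while (System.nanoTime() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(sleepMillis);
            sleepMillis = Math.min(sleepMillis * 2, 500); // cap the back-off
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        boolean ready = awaitBusy(() -> System.currentTimeMillis() - start > 200, 1, TimeUnit.SECONDS);
        System.out.println("condition met: " + ready);
    }
}
```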
{ "body": "Elastic version 2.2.0\n\n> http://localhost:9200/my_index/parent_mapping/\n\nI accidentally wrote *_has_parent *_instead of *_has_child *_and got NullPointerException.\n\n _Parent type does not have any parent of itself._\n\nWhen I wrote has_child instead of has_parent query works as expected.\n\nThis query causes exception\n\n```\n{\n \"size\": 5,\n \"query\": {\n \"bool\": {\n \"must\": [\n {\n \"query\": {\n \"has_parent\": {\n \"type\": \"child_mapping\",\n \"score_mode\": \"max\",\n \"query\": {\n \"match_all\": {}\n },\n \"inner_hits\": {\n \"from\": 0,\n \"size\": 5\n }\n }\n }\n },\n {\n \"query\": {\n \"match_all\": {}\n }\n }\n ]\n }\n },\n \"sort\": [\n {\n \"name\": \"asc\"\n }\n ]\n}\n\n```\n\nHere is the exception log I received:\n\n```\nRemoteTransportException[[Workit.Local][127.0.0.1:9300][indices:data/read/search[phase/query+fetch]]]; nested: NullPointerExceptio\nn;\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.getOrdinalMap(ParentChildIndexFieldData.java:566)\n at org.elasticsearch.index.query.HasChildQueryParser$LateParsingQuery.rewrite(HasChildQueryParser.java:259)\n at org.apache.lucene.search.ConstantScoreQuery.rewrite(ConstantScoreQuery.java:55)\n at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:252)\n at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:252)\n at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:836)\n at org.elasticsearch.search.internal.ContextIndexSearcher.rewrite(ContextIndexSearcher.java:81)\n at org.elasticsearch.search.internal.DefaultSearchContext.preProcess(DefaultSearchContext.java:232)\n at org.elasticsearch.search.query.QueryPhase.preProcess(QueryPhase.java:103)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:674)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:618)\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:461)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchSer\nviceTransportAction.java:392)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchSer\nviceTransportAction.java:389)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2016-02-16 16:54:02,014][DEBUG][action.search.type ] [Workit.Local] All shards failed for phase: [query_fetch]\nRemoteTransportException[[Workit.Local][127.0.0.1:9300][indices:data/read/search[phase/query+fetch]]]; nested: NullPointerExceptio\nn;\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.getOrdinalMap(ParentChildIndexFieldData.java:566)\n at org.elasticsearch.index.query.HasChildQueryParser$LateParsingQuery.rewrite(HasChildQueryParser.java:259)\n at org.apache.lucene.search.ConstantScoreQuery.rewrite(ConstantScoreQuery.java:55)\n at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:252)\n at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:252)\n at 
org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:836)\n at org.elasticsearch.search.internal.ContextIndexSearcher.rewrite(ContextIndexSearcher.java:81)\n at org.elasticsearch.search.internal.DefaultSearchContext.preProcess(DefaultSearchContext.java:232)\n at org.elasticsearch.search.query.QueryPhase.preProcess(QueryPhase.java:103)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:674)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:618)\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:461)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchSer\nviceTransportAction.java:392)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchSer\nviceTransportAction.java:389)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2016-02-16 16:54:02,024][INFO ][rest.suppressed ] /my_index/type/_search Params: {index=my_index, type=type}\nFailed to execute phase [query_fetch], all shards failed; shardFailures {[F6pFRctSR4CSS1stcYFiyA][my_index][0]: RemoteTranspor\ntException[[Workit.Local][127.0.0.1:9300][indices:data/read/search[phase/query+fetch]]]; nested: NullPointerException; }\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAc\ntion.java:228)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.onFailure(TransportSearchTypeAction.ja\nva:174)\n at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:46)\n at org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:821)\n at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:799)\n at org.elasticsearch.transport.TransportService$4.onFailure(TransportService.java:361)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: ; nested: NullPointerException;\n at org.elasticsearch.ElasticsearchException.guessRootCauses(ElasticsearchException.java:386)\n at org.elasticsearch.action.search.SearchPhaseExecutionException.guessRootCauses(SearchPhaseExecutionException.java:152)\n at org.elasticsearch.action.search.SearchPhaseExecutionException.getCause(SearchPhaseExecutionException.java:99)\n at java.lang.Throwable.printStackTrace(Throwable.java:665)\n at java.lang.Throwable.printStackTrace(Throwable.java:721)\n at org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:60)\n at org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)\n at org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)\n at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)\n at 
org.apache.log4j.WriterAppender.append(WriterAppender.java:162)\n at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)\n at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)\n at org.apache.log4j.Category.callAppenders(Category.java:206)\n at org.apache.log4j.Category.forcedLog(Category.java:391)\n at org.apache.log4j.Category.log(Category.java:856)\n at org.elasticsearch.common.logging.log4j.Log4jESLogger.internalInfo(Log4jESLogger.java:125)\n at org.elasticsearch.common.logging.support.AbstractESLogger.info(AbstractESLogger.java:90)\n at org.elasticsearch.rest.BytesRestResponse.convert(BytesRestResponse.java:131)\n at org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:96)\n at org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:87)\n at org.elasticsearch.rest.action.support.RestActionListener.onFailure(RestActionListener.java:60)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.raiseEarlyFailure(TransportSearchTypeAct\nion.java:316)\n ... 10 more\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.getOrdinalMap(ParentChildIndexFieldData.java:566)\n at org.elasticsearch.index.query.HasChildQueryParser$LateParsingQuery.rewrite(HasChildQueryParser.java:259)\n at org.apache.lucene.search.ConstantScoreQuery.rewrite(ConstantScoreQuery.java:55)\n at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:252)\n at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:252)\n at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:836)\n at org.elasticsearch.search.internal.ContextIndexSearcher.rewrite(ContextIndexSearcher.java:81)\n at org.elasticsearch.search.internal.DefaultSearchContext.preProcess(DefaultSearchContext.java:232)\n at org.elasticsearch.search.query.QueryPhase.preProcess(QueryPhase.java:103)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:674)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:618)\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:461)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchSer\nviceTransportAction.java:392)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchSer\nviceTransportAction.java:389)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n ... 3 more\n```\n", "comments": [ { "body": "Simple recreation:\n\n```\nPUT t\n{\n \"mappings\": {\n \"parent\": {},\n \"child\": {\n \"_parent\": {\n \"type\": \"parent\"\n }\n }\n }\n}\n\nPUT t/parent/1\n{}\n\nPUT t/child/2?parent=1\n{}\n\nGET _search\n{\n \"query\": {\n \"has_parent\": {\n \"parent_type\": \"child\",\n \"query\": {\"match_all\": {}}\n }\n }\n}\n```\n", "created_at": "2016-02-19T19:31:48Z" }, { "body": "This is also broken in master\n", "created_at": "2016-02-19T19:32:12Z" } ], "number": 16692, "title": "NullPointerException on hasParentQuery when parent does not exist" }
{ "body": "Closes #16692\n", "number": 16923, "review_comments": [], "title": "Check that parent_type in Has Parent Query has child types" }
{ "commits": [ { "message": "Check that parent_type in HasParent query has child types #16692" } ], "files": [ { "diff": "@@ -132,7 +132,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n }\n DocumentMapper parentDocMapper = context.getMapperService().documentMapper(type);\n if (parentDocMapper == null) {\n- throw new QueryShardException(context, \"[has_parent] query configured 'parent_type' [\" + type\n+ throw new QueryShardException(context, \"[\" + NAME + \"] query configured 'parent_type' [\" + type\n + \"] is not a valid type\");\n }\n \n@@ -152,49 +152,36 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n }\n }\n \n- Set<String> parentTypes = new HashSet<>(5);\n- parentTypes.add(parentDocMapper.type());\n+ Set<String> childTypes = new HashSet<>();\n ParentChildIndexFieldData parentChildIndexFieldData = null;\n for (DocumentMapper documentMapper : context.getMapperService().docMappers(false)) {\n ParentFieldMapper parentFieldMapper = documentMapper.parentFieldMapper();\n- if (parentFieldMapper.active()) {\n- DocumentMapper parentTypeDocumentMapper = context.getMapperService().documentMapper(parentFieldMapper.type());\n+ if (parentFieldMapper.active() && type.equals(parentFieldMapper.type())) {\n+ childTypes.add(documentMapper.type());\n parentChildIndexFieldData = context.getForField(parentFieldMapper.fieldType());\n- if (parentTypeDocumentMapper == null) {\n- // Only add this, if this parentFieldMapper (also a parent) isn't a child of another parent.\n- parentTypes.add(parentFieldMapper.type());\n- }\n }\n }\n- if (parentChildIndexFieldData == null) {\n- throw new QueryShardException(context, \"[has_parent] no _parent field configured\");\n+\n+ if (childTypes.isEmpty()) {\n+ throw new QueryShardException(context, \"[\" + NAME + \"] no child types found for type [\" + type + \"]\");\n }\n \n- Query parentTypeQuery = null;\n- if (parentTypes.size() == 1) {\n- DocumentMapper documentMapper = context.getMapperService().documentMapper(parentTypes.iterator().next());\n- if (documentMapper != null) {\n- parentTypeQuery = documentMapper.typeFilter();\n- }\n+ Query childrenQuery;\n+ if (childTypes.size() == 1) {\n+ DocumentMapper documentMapper = context.getMapperService().documentMapper(childTypes.iterator().next());\n+ childrenQuery = documentMapper.typeFilter();\n } else {\n- BooleanQuery.Builder parentsFilter = new BooleanQuery.Builder();\n- for (String parentTypeStr : parentTypes) {\n- DocumentMapper documentMapper = context.getMapperService().documentMapper(parentTypeStr);\n- if (documentMapper != null) {\n- parentsFilter.add(documentMapper.typeFilter(), BooleanClause.Occur.SHOULD);\n- }\n+ BooleanQuery.Builder childrenFilter = new BooleanQuery.Builder();\n+ for (String childrenTypeStr : childTypes) {\n+ DocumentMapper documentMapper = context.getMapperService().documentMapper(childrenTypeStr);\n+ childrenFilter.add(documentMapper.typeFilter(), BooleanClause.Occur.SHOULD);\n }\n- parentTypeQuery = parentsFilter.build();\n- }\n-\n- if (parentTypeQuery == null) {\n- return null;\n+ childrenQuery = childrenFilter.build();\n }\n \n // wrap the query with type query\n innerQuery = Queries.filtered(innerQuery, parentDocMapper.typeFilter());\n- Query childrenFilter = Queries.not(parentTypeQuery);\n- return new HasChildQueryBuilder.LateParsingQuery(childrenFilter,\n+ return new HasChildQueryBuilder.LateParsingQuery(childrenQuery,\n innerQuery,\n HasChildQueryBuilder.DEFAULT_MIN_CHILDREN,\n HasChildQueryBuilder.DEFAULT_MAX_CHILDREN,", 
"filename": "core/src/main/java/org/elasticsearch/index/query/HasParentQueryBuilder.java", "status": "modified" }, { "diff": "@@ -758,11 +758,11 @@ public void testParentChildQueriesCanHandleNoRelevantTypesInIndex() throws Excep\n assertNoFailures(response);\n assertThat(response.getHits().totalHits(), equalTo(0L));\n \n- response = client().prepareSearch(\"test\").setQuery(QueryBuilders.hasParentQuery(\"child\", matchQuery(\"text\", \"value\"))).get();\n+ response = client().prepareSearch(\"test\").setQuery(QueryBuilders.hasParentQuery(\"parent\", matchQuery(\"text\", \"value\"))).get();\n assertNoFailures(response);\n assertThat(response.getHits().totalHits(), equalTo(0L));\n \n- response = client().prepareSearch(\"test\").setQuery(QueryBuilders.hasParentQuery(\"child\", matchQuery(\"text\", \"value\")).score(true))\n+ response = client().prepareSearch(\"test\").setQuery(QueryBuilders.hasParentQuery(\"parent\", matchQuery(\"text\", \"value\")).score(true))\n .get();\n assertNoFailures(response);\n assertThat(response.getHits().totalHits(), equalTo(0L));\n@@ -1894,11 +1894,6 @@ public void testParentFieldToNonExistingType() {\n fail();\n } catch (SearchPhaseExecutionException e) {\n }\n-\n- SearchResponse response = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.hasParentQuery(\"parent\", matchAllQuery()))\n- .get();\n- assertHitCount(response, 0);\n }\n \n static HasChildQueryBuilder hasChildQuery(String type, QueryBuilder queryBuilder) {\n@@ -1927,4 +1922,17 @@ public void testHasChildInnerQueryType() {\n QueryBuilders.hasChildQuery(\"child-type\", new IdsQueryBuilder().addIds(\"child-id\"))).get();\n assertSearchHits(searchResponse, \"parent-id\");\n }\n+\n+ public void testParentWithoutChildTypes() {\n+ assertAcked(prepareCreate(\"test\").addMapping(\"parent\").addMapping(\"child\", \"_parent\", \"type=parent\"));\n+ ensureGreen();\n+\n+ try {\n+ client().prepareSearch(\"test\").setQuery(hasParentQuery(\"child\", matchAllQuery())).get();\n+ fail();\n+ } catch (SearchPhaseExecutionException e) {\n+ assertThat(e.status(), equalTo(RestStatus.BAD_REQUEST));\n+ assertThat(e.toString(), containsString(\"[has_parent] no child types found for type [child]\"));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/child/ChildQuerySearchIT.java", "status": "modified" } ] }
{ "body": "This test was enabled after the completion of #11665, but sporadically fails with the following reproduce:\n\n`gradle :core:integTest -Dtests.seed=4A03B4FF87C5DACC -Dtests.class=org.elasticsearch.discovery.DiscoveryWithServiceDisruptionsIT -Dtests.method=\"testIndicesDeleted\" -Des.logger.level=WARN -Dtests.security.manager=true -Dtests.locale=he-IL -Dtests.timezone=Asia/Istanbul`\n", "comments": [], "number": 16890, "title": "DiscoveryWithServiceDisruptionsIT.testIndicesDeleted fails on master" }
{ "body": "In particular, this test ensures we don't restart the master node until\nwe know the index deletion has taken effect on master. This overcomes a\ncurrent known issue where a delete can return before cluster state\nchanges take effect.\n\nCloses #16890\n", "number": 16917, "review_comments": [ { "body": "this is usually a bad sign. We should use sleep anywhere. Sometimes it's needed but we try give all the utilities to make sure no one used it explicitly. In this case we have assert busy:\n\n```\n assertBusy(() -> {\n final ClusterState currState = internalCluster().clusterService(masterNode1).state();\n assertTrue(\"index not deleted\", currState.metaData().hasIndex(\"test\") == false && currState.status() == ClusterState.ClusterStateStatus.APPLIED);\n });\n```\n", "created_at": "2016-03-03T10:17:30Z" }, { "body": "I would reduce this timeout to 0 - we don't care - we check with assertBusy later for execution. Make it fast. Also reduce the publishing timeout by setting PUBLISH_TIMEOUT_SETTING to 0.\n", "created_at": "2016-03-03T10:23:14Z" }, { "body": "if we reduce the publish timeout to 0 (which will make the test faster), we need to use the same assertBusy technique on masterNode2 to make sure it has process the change as well.\n", "created_at": "2016-03-03T10:24:07Z" }, { "body": "Done\n", "created_at": "2016-03-03T15:27:08Z" }, { "body": "Thanks for this tip @bleskes, I didn't realize we already had an `assertBusy`\n", "created_at": "2016-03-03T15:27:54Z" }, { "body": "BTW, changing the `PUBLISH_TIMEOUT_SETTING` to 0 throws the following exception:\nFailedToCommitClusterStateException[timed out while waiting for enough masters to ack sent cluster state. [1] left]\n", "created_at": "2016-03-03T16:05:02Z" }, { "body": "can we comment on why we set the timeout to 0? something a long the lines that we know this will time out due to the partition and we are going to check manually when it is applied to master nodes only.\n", "created_at": "2016-03-09T08:11:23Z" }, { "body": "Done\n", "created_at": "2016-03-09T15:23:40Z" } ], "title": "Fixes the DiscoveryWithServiceDisruptionsIT#testIndicesDeleted test" }
{ "commits": [ { "message": "Fixes the DiscoveryWithServiceDisruptionsIT#testIndicesDeleted test\n\nIn particular, this test ensures we don't restart the master node until\nwe know the index deletion has taken effect on master and the master\neligible nodes.\n\nCloses #16890" } ], "files": [ { "diff": "@@ -177,13 +177,17 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {\n }\n \n private void configureUnicastCluster(int numberOfNodes, @Nullable int[] unicastHostsOrdinals, int minimumMasterNode) throws ExecutionException, InterruptedException {\n+ configureUnicastCluster(DEFAULT_SETTINGS, numberOfNodes, unicastHostsOrdinals, minimumMasterNode);\n+ }\n+\n+ private void configureUnicastCluster(Settings settings, int numberOfNodes, @Nullable int[] unicastHostsOrdinals, int minimumMasterNode) throws ExecutionException, InterruptedException {\n if (minimumMasterNode < 0) {\n minimumMasterNode = numberOfNodes / 2 + 1;\n }\n logger.info(\"---> configured unicast\");\n // TODO: Rarely use default settings form some of these\n Settings nodeSettings = Settings.builder()\n- .put(DEFAULT_SETTINGS)\n+ .put(settings)\n .put(ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey(), minimumMasterNode)\n .build();\n \n@@ -196,7 +200,6 @@ private void configureUnicastCluster(int numberOfNodes, @Nullable int[] unicastH\n }\n }\n \n-\n /**\n * Test that no split brain occurs under partial network partition. See https://github.com/elastic/elasticsearch/issues/2488\n */\n@@ -1075,25 +1078,38 @@ public boolean clearData(String nodeName) {\n * Tests that indices are properly deleted even if there is a master transition in between.\n * Test for https://github.com/elastic/elasticsearch/issues/11665\n */\n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/16890\")\n public void testIndicesDeleted() throws Exception {\n- configureUnicastCluster(3, null, 2);\n+ final Settings settings = Settings.builder()\n+ .put(DEFAULT_SETTINGS)\n+ .put(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey(), \"0s\") // don't wait on isolated data node\n+ .put(DiscoverySettings.COMMIT_TIMEOUT_SETTING.getKey(), \"30s\") // wait till cluster state is committed\n+ .build();\n+ final String idxName = \"test\";\n+ configureUnicastCluster(settings, 3, null, 2);\n InternalTestCluster.Async<List<String>> masterNodes = internalCluster().startMasterOnlyNodesAsync(2);\n InternalTestCluster.Async<String> dataNode = internalCluster().startDataOnlyNodeAsync();\n dataNode.get();\n- masterNodes.get();\n+ final List<String> allMasterEligibleNodes = masterNodes.get();\n ensureStableCluster(3);\n assertAcked(prepareCreate(\"test\"));\n ensureYellow();\n \n- String masterNode1 = internalCluster().getMasterName();\n+ final String masterNode1 = internalCluster().getMasterName();\n NetworkPartition networkPartition = new NetworkUnresponsivePartition(masterNode1, dataNode.get(), getRandom());\n internalCluster().setDisruptionScheme(networkPartition);\n networkPartition.startDisrupting();\n- internalCluster().client(masterNode1).admin().indices().prepareDelete(\"test\").setTimeout(\"1s\").get();\n+ internalCluster().client(masterNode1).admin().indices().prepareDelete(idxName).setTimeout(\"0s\").get();\n+ // Don't restart the master node until we know the index deletion has taken effect on master and the master eligible node.\n+ assertBusy(() -> {\n+ for (String masterNode : allMasterEligibleNodes) {\n+ final ClusterState masterState = internalCluster().clusterService(masterNode).state();\n+ assertTrue(\"index not 
deleted on \" + masterNode, masterState.metaData().hasIndex(idxName) == false &&\n+ masterState.status() == ClusterState.ClusterStateStatus.APPLIED);\n+ }\n+ });\n internalCluster().restartNode(masterNode1, InternalTestCluster.EMPTY_CALLBACK);\n ensureYellow();\n- assertFalse(client().admin().indices().prepareExists(\"test\").get().isExists());\n+ assertFalse(client().admin().indices().prepareExists(idxName).get().isExists());\n }\n \n protected NetworkPartition addRandomPartition() {", "filename": "core/src/test/java/org/elasticsearch/discovery/DiscoveryWithServiceDisruptionsIT.java", "status": "modified" } ] }
{ "body": "https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-os-compatibility/os=ubuntu/487/console\netc\n", "comments": [ { "body": "I'll be fixing these soon but I'll use this ticket to silence them.\n", "created_at": "2016-03-02T13:20:09Z" } ], "number": 16906, "title": "Reindex's rest tests fail frequently in CI" }
{ "body": "If a task leaks from one test case to another that should fail the test. The tests should have some ability to wait for a task to complete.\n\nCloses #16906\n", "number": 16914, "review_comments": [ { "body": "Hrm, shouldn't deleting an index cause any tasks for that index to be cancelled, internally? Seems this would be something a user could hit, and that we simply did not correctly cleanup all resources when deleting an index?\n", "created_at": "2016-03-02T16:28:15Z" }, { "body": "Reindex will create the index its target index if it doesn't already exist. We could probably catch that the source index was removed before creating the new index but that seems like overkill.\n", "created_at": "2016-03-02T16:34:38Z" }, { "body": "I guess I have 2 concerns. First is a user deleting an index and it causing problems. We shouldn't let this happen; deleting an index should be allowed at any time and it shouldn't cause the user to get into a messed up state. Second is having this custom wait method, when it seems we should just be able to cancel all running tasks, akin to deleting all indices?\n", "created_at": "2016-03-02T16:52:58Z" }, { "body": "> First is a user deleting an index and it causing problems.\n\nIt'll cause the reindex request to fail, logging a warning. That seems fine.\n\n> cancel all running tasks\n\nI can try to cancel the tasks if you'd like but they don't always listen to a cancel request and cancellation is async. Reindex only checks the cancel flag in between requests, for example. If I cancel it and it's already started an index request that will create a new index then the index will still be created. \n\nSo I can cancel all the running tasks but I'd still have to check to see that they've killed themselves with the same timeout. I'd just have a different exception message - instead of \"timed out waiting for running tasks\" it'd be \"timed out waiting for tasks to die\".\n", "created_at": "2016-03-02T17:07:38Z" }, { "body": "> It'll cause the reindex request to fail, logging a warning. That seems fine.\n\nI'm still confused. How did this cause tests to fail then?\n\nAlso, why are we kicking off a task in a rest test but never waiting for it to complete? Seems like a reindex rest test succeeding should only happen if the task completed successfully?\n", "created_at": "2016-03-02T19:15:17Z" }, { "body": "> I'm still confused. How did this cause tests to fail then?\n\nAnother test tried to create an index with the same name.\n\n> Also, why are we kicking off a task in a rest test but never waiting for it to complete? Seems like a reindex rest test succeeding should only happen if the task completed successfully?\n\nThere is no \"please only return when this task has successfully completed\" API. It'd be a great thing to have one day! For now if you explicitly set `wait_for_completion=false` then it us up to the user to monitor the task. And we have a rest test that verifies that setting wait_for_completion=false returns the task id so the user can do just that. But it can't wait for the task to finish because we don't have that API.\n\nWe should build that API. And we will. But we don't have time to build it right now.\n\nAnother idea:\n\nInstead of waiting for all tasks to finish after each REST test we just fail the test if there are any tasks still running. And we add a new section that we can use to \"wait for the task I just started to finish\". 
When we have the API we can replace the section with the API.\n", "created_at": "2016-03-02T19:27:38Z" }, { "body": "What do you mean by a \"section\", something in the rest test language? If so then I'm in favor of that. But then we should fail hard in rest test cleanup if there are any running tasks (ie the test isn't correctly waiting like it should).\n", "created_at": "2016-03-02T20:20:55Z" }, { "body": "> What do you mean by a \"section\", something in the rest test language?\n\nYup. Looks like those things are called `ExecutableSection`s.\n\n> But then we should fail hard in rest test cleanup if there are any running tasks (ie the test isn't correctly waiting like it should).\n\nYeah, that.\n\n> > Instead of waiting for all tasks to finish after each REST test we just fail the test if there are any tasks still running.\n", "created_at": "2016-03-02T20:46:07Z" }, { "body": "Fails -> Logs?\n", "created_at": "2016-03-07T20:48:51Z" }, { "body": "I am not sure that it warrants \"WARN\" logging level. It's perfectly fine for some of the tasks to be running in a working cluster. This includes node and master fault detection pings for example. I feel that INFO level logging would be more appropriate here. \n", "created_at": "2016-03-07T20:55:28Z" }, { "body": "This is a Request and setters in requests typically don't have a \"set\" prefix and they return the request itself. I don't like this notation, but this is the notation that is used in most of the existing requests including this one. Check detailed setting above. \n", "created_at": "2016-03-07T21:00:45Z" }, { "body": "Somehow we need to distinguish \"background\" tasks for the cluster from those started by rest actions.\n", "created_at": "2016-03-07T21:02:53Z" }, { "body": "You probably meant `return task instanceof TestTask && super.match(task)`.\n", "created_at": "2016-03-07T21:05:22Z" }, { "body": "Did you mean `long timeoutTime = System.nanoTime() + timeout.nanos();`?\n", "created_at": "2016-03-07T21:07:29Z" }, { "body": "Documentation for nanoTime() is suggesting to use `System.nanoTime() - timeoutTime < 0` instead to avoid numerical overflow. \n", "created_at": "2016-03-07T21:14:15Z" }, { "body": "While I agree that we want to know who initiated a particular task I don't think that having a leaked background task is much better than having a leaked REST task :) Maybe we should simply keep the WARN level but only log tasks that are running for more than 5 seconds?\n", "created_at": "2016-03-07T21:22:16Z" }, { "body": "I'm confused because I thought you were implying there are cluster level tasks always running in the background, that are simply part of the cluster operating normally. A leak is a leak, and we should catch and reject it.\n", "created_at": "2016-03-07T21:25:54Z" }, { "body": "Background tasks run in the background periodically, but they shouldn't run for a very long time. I guess what I am trying to say is if for some reason a fault detection ping or a stats operation started by a ClusterInfoService runs for 5 minutes it's probably also a leak, even though it doesn't have a corresponding REST request that started it.\n", "created_at": "2016-03-07T21:33:43Z" }, { "body": "I think waiting for 5 seconds is somewhat likely to hide an issue?\n\nIf we can be sure that a task is really a leak we should fail the test. If we can't, then we should log it. 
And maybe we should think about how we _can_ be sure.\n", "created_at": "2016-03-07T21:36:06Z" }, { "body": "I've been asked in the past to make getters and setters for new stuff on requests. Particularly when doing reindex. I really don't care either way. I used `shouldWaitForCompletion` rather than `getWaitForCompletion` because I thought that was more readable but I can make it whatever you think is more right.\n", "created_at": "2016-03-07T21:39:42Z" }, { "body": "Indeed I did. Fixing.\n", "created_at": "2016-03-07T21:40:28Z" }, { "body": "Makes sense to me. I'll do that.\n", "created_at": "2016-03-07T21:40:38Z" }, { "body": "I am fine with the new way, but then let's fix all other setters and getters to match this one. \n", "created_at": "2016-03-07T21:42:06Z" }, { "body": "Sure. That'll be safe to backport because we haven't released this feature yet anyway.\n", "created_at": "2016-03-07T21:44:43Z" }, { "body": "Sure.\n", "created_at": "2016-03-07T22:07:08Z" }, { "body": "Should I do more than just the one class, do you think?\n", "created_at": "2016-03-07T22:07:27Z" }, { "body": "I think everything inherited from BaseTasksRequest should be converted.\n", "created_at": "2016-03-07T22:51:46Z" }, { "body": "Why not go all the way to `public void setDetailed(boolean detailed) {` like you did for `setWaitForCompletion`?\n", "created_at": "2016-03-08T14:28:00Z" }, { "body": "Ah! Another point that I didn't think about when I made the setters. I think we're still going with returning `this` for the setters in requests. I'll just make the new one return `this` so they line up. If we don't like it we can change later - at least these won't clash.\n", "created_at": "2016-03-08T14:32:16Z" } ], "title": "Rest tests should be able to wait for running tasks" }
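Two of the comments above concern how the wait deadline is computed and compared. A small self-contained sketch of the idiom the review converged on; the class and method names are illustrative, while TimeValue is the Elasticsearch class the PR actually uses:

```java
// Deadline handling with System.nanoTime(): build "now + timeout" and compare a difference to zero.
import org.elasticsearch.common.unit.TimeValue;

public final class NanoDeadlineSketch {
    /** The deadline is the current time plus the timeout, not the timeout on its own. */
    public static long deadlineFor(TimeValue timeout) {
        return System.nanoTime() + timeout.nanos();
    }

    /** Compare a difference against zero so a nanoTime() wrap-around cannot invert the result. */
    public static boolean notExpired(long deadlineNanos) {
        return System.nanoTime() - deadlineNanos < 0;
    }
}
```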
{ "commits": [ { "message": "Teach list tasks api to wait for tasks to finish\n\n_wait_for_completion defaults to false. If set to true then the API will\nwait for all the tasks that it finds to stop running before returning. You\ncan use the timeout parameter to prevent it from waiting forever. If you\ndon't set a timeout parameter it'll default to 30 seconds.\n\nAlso adds a log message to rest tests if any tasks overrun the test. This\nis just a log (instead of failing the test) because lots of tasks are run\nby the cluster on its own and they shouldn't cause the test to fail. Things\nlike fetching disk usage from the other nodes, for example.\n\nSwitches the request to getter/setter style methods as we're going that\nway in the Elasticsearch code base. Reindex is all getter/setter style.\n\nCloses #16906" } ], "files": [ { "diff": "@@ -53,12 +53,18 @@ public boolean match(Task task) {\n return super.match(task) && task instanceof CancellableTask;\n }\n \n- public CancelTasksRequest reason(String reason) {\n+ /**\n+ * Set the reason for canceling the task.\n+ */\n+ public CancelTasksRequest setReason(String reason) {\n this.reason = reason;\n return this;\n }\n \n- public String reason() {\n+ /**\n+ * The reason for canceling the task.\n+ */\n+ public String getReason() {\n return reason;\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/cancel/CancelTasksRequest.java", "status": "modified" }, { "diff": "@@ -84,21 +84,21 @@ protected TaskInfo readTaskResponse(StreamInput in) throws IOException {\n }\n \n protected void processTasks(CancelTasksRequest request, Consumer<CancellableTask> operation) {\n- if (request.taskId().isSet() == false) {\n+ if (request.getTaskId().isSet() == false) {\n // we are only checking one task, we can optimize it\n- CancellableTask task = taskManager.getCancellableTask(request.taskId().getId());\n+ CancellableTask task = taskManager.getCancellableTask(request.getTaskId().getId());\n if (task != null) {\n if (request.match(task)) {\n operation.accept(task);\n } else {\n- throw new IllegalArgumentException(\"task [\" + request.taskId() + \"] doesn't support this operation\");\n+ throw new IllegalArgumentException(\"task [\" + request.getTaskId() + \"] doesn't support this operation\");\n }\n } else {\n- if (taskManager.getTask(request.taskId().getId()) != null) {\n+ if (taskManager.getTask(request.getTaskId().getId()) != null) {\n // The task exists, but doesn't support cancellation\n- throw new IllegalArgumentException(\"task [\" + request.taskId() + \"] doesn't support cancellation\");\n+ throw new IllegalArgumentException(\"task [\" + request.getTaskId() + \"] doesn't support cancellation\");\n } else {\n- throw new ResourceNotFoundException(\"task [{}] doesn't support cancellation\", request.taskId());\n+ throw new ResourceNotFoundException(\"task [{}] doesn't support cancellation\", request.getTaskId());\n }\n }\n } else {\n@@ -113,14 +113,14 @@ protected void processTasks(CancelTasksRequest request, Consumer<CancellableTask\n @Override\n protected synchronized TaskInfo taskOperation(CancelTasksRequest request, CancellableTask cancellableTask) {\n final BanLock banLock = new BanLock(nodes -> removeBanOnNodes(cancellableTask, nodes));\n- Set<String> childNodes = taskManager.cancel(cancellableTask, request.reason(), banLock::onTaskFinished);\n+ Set<String> childNodes = taskManager.cancel(cancellableTask, request.getReason(), banLock::onTaskFinished);\n if (childNodes != null) {\n if (childNodes.isEmpty()) {\n 
logger.trace(\"cancelling task {} with no children\", cancellableTask.getId());\n return cancellableTask.taskInfo(clusterService.localNode(), false);\n } else {\n logger.trace(\"cancelling task {} with children on nodes [{}]\", cancellableTask.getId(), childNodes);\n- setBanOnNodes(request.reason(), cancellableTask, childNodes, banLock);\n+ setBanOnNodes(request.getReason(), cancellableTask, childNodes, banLock);\n return cancellableTask.taskInfo(clusterService.localNode(), false);\n }\n } else {", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/cancel/TransportCancelTasksAction.java", "status": "modified" }, { "diff": "@@ -31,31 +31,49 @@\n public class ListTasksRequest extends BaseTasksRequest<ListTasksRequest> {\n \n private boolean detailed = false;\n+ private boolean waitForCompletion = false;\n \n /**\n * Should the detailed task information be returned.\n */\n- public boolean detailed() {\n+ public boolean getDetailed() {\n return this.detailed;\n }\n \n /**\n * Should the detailed task information be returned.\n */\n- public ListTasksRequest detailed(boolean detailed) {\n+ public ListTasksRequest setDetailed(boolean detailed) {\n this.detailed = detailed;\n return this;\n }\n \n+ /**\n+ * Should this request wait for all found tasks to complete?\n+ */\n+ public boolean getWaitForCompletion() {\n+ return waitForCompletion;\n+ }\n+\n+ /**\n+ * Should this request wait for all found tasks to complete?\n+ */\n+ public ListTasksRequest setWaitForCompletion(boolean waitForCompletion) {\n+ this.waitForCompletion = waitForCompletion;\n+ return this;\n+ }\n+\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n detailed = in.readBoolean();\n+ waitForCompletion = in.readBoolean();\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n out.writeBoolean(detailed);\n+ out.writeBoolean(waitForCompletion);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksRequest.java", "status": "modified" }, { "diff": "@@ -35,7 +35,15 @@ public ListTasksRequestBuilder(ElasticsearchClient client, ListTasksAction actio\n * Should detailed task information be returned.\n */\n public ListTasksRequestBuilder setDetailed(boolean detailed) {\n- request.detailed(detailed);\n+ request.setDetailed(detailed);\n+ return this;\n+ }\n+\n+ /**\n+ * Should this request wait for all found tasks to complete?\n+ */\n+ public final ListTasksRequestBuilder setWaitForCompletion(boolean waitForCompletion) {\n+ request.setWaitForCompletion(waitForCompletion);\n return this;\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksRequestBuilder.java", "status": "modified" }, { "diff": "@@ -19,6 +19,8 @@\n \n package org.elasticsearch.action.admin.cluster.node.tasks.list;\n \n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.ElasticsearchTimeoutException;\n import org.elasticsearch.action.FailedNodeException;\n import org.elasticsearch.action.TaskOperationFailure;\n import org.elasticsearch.action.support.ActionFilters;\n@@ -29,18 +31,24 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.tasks.Task;\n import org.elasticsearch.threadpool.ThreadPool;\n import 
org.elasticsearch.transport.TransportService;\n \n import java.io.IOException;\n-import java.util.Collection;\n import java.util.List;\n+import java.util.function.Consumer;\n+\n+import static org.elasticsearch.common.unit.TimeValue.timeValueMillis;\n+import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds;\n \n /**\n *\n */\n public class TransportListTasksAction extends TransportTasksAction<Task, ListTasksRequest, ListTasksResponse, TaskInfo> {\n+ private static final TimeValue WAIT_FOR_COMPLETION_POLL = timeValueMillis(100);\n+ private static final TimeValue DEFAULT_WAIT_FOR_COMPLETION_TIMEOUT = timeValueSeconds(30);\n \n @Inject\n public TransportListTasksAction(Settings settings, ClusterName clusterName, ThreadPool threadPool, ClusterService clusterService, TransportService transportService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n@@ -59,7 +67,34 @@ protected TaskInfo readTaskResponse(StreamInput in) throws IOException {\n \n @Override\n protected TaskInfo taskOperation(ListTasksRequest request, Task task) {\n- return task.taskInfo(clusterService.localNode(), request.detailed());\n+ return task.taskInfo(clusterService.localNode(), request.getDetailed());\n+ }\n+\n+ @Override\n+ protected void processTasks(ListTasksRequest request, Consumer<Task> operation) {\n+ if (false == request.getWaitForCompletion()) {\n+ super.processTasks(request, operation);\n+ return;\n+ }\n+ // If we should wait for completion then we have to intercept every found task and wait for it to leave the manager.\n+ TimeValue timeout = request.getTimeout();\n+ if (timeout == null) {\n+ timeout = DEFAULT_WAIT_FOR_COMPLETION_TIMEOUT;\n+ }\n+ long timeoutTime = System.nanoTime() + timeout.nanos();\n+ super.processTasks(request, operation.andThen((Task t) -> {\n+ while (System.nanoTime() - timeoutTime < 0) {\n+ if (taskManager.getTask(t.getId()) == null) {\n+ return;\n+ }\n+ try {\n+ Thread.sleep(WAIT_FOR_COMPLETION_POLL.millis());\n+ } catch (InterruptedException e) {\n+ throw new ElasticsearchException(\"Interrupted waiting for completion of [{}]\", e, t);\n+ }\n+ }\n+ throw new ElasticsearchTimeoutException(\"Timed out waiting for completion of [{}]\", t);\n+ }));\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TransportListTasksAction.java", "status": "modified" }, { "diff": "@@ -71,24 +71,24 @@ public ActionRequestValidationException validate() {\n * Sets the list of action masks for the actions that should be returned\n */\n @SuppressWarnings(\"unchecked\")\n- public final Request actions(String... actions) {\n+ public final Request setActions(String... actions) {\n this.actions = actions;\n return (Request) this;\n }\n \n /**\n * Return the list of action masks for the actions that should be returned\n */\n- public String[] actions() {\n+ public String[] getActions() {\n return actions;\n }\n \n- public final String[] nodesIds() {\n+ public final String[] getNodesIds() {\n return nodesIds;\n }\n \n @SuppressWarnings(\"unchecked\")\n- public final Request nodesIds(String... nodesIds) {\n+ public final Request setNodesIds(String... nodesIds) {\n this.nodesIds = nodesIds;\n return (Request) this;\n }\n@@ -98,12 +98,12 @@ public final Request nodesIds(String... 
nodesIds) {\n *\n * By default tasks with any ids are returned.\n */\n- public TaskId taskId() {\n+ public TaskId getTaskId() {\n return taskId;\n }\n \n @SuppressWarnings(\"unchecked\")\n- public final Request taskId(TaskId taskId) {\n+ public final Request setTaskId(TaskId taskId) {\n this.taskId = taskId;\n return (Request) this;\n }\n@@ -112,29 +112,29 @@ public final Request taskId(TaskId taskId) {\n /**\n * Returns the parent task id that tasks should be filtered by\n */\n- public TaskId parentTaskId() {\n+ public TaskId getParentTaskId() {\n return parentTaskId;\n }\n \n @SuppressWarnings(\"unchecked\")\n- public Request parentTaskId(TaskId parentTaskId) {\n+ public Request setParentTaskId(TaskId parentTaskId) {\n this.parentTaskId = parentTaskId;\n return (Request) this;\n }\n \n \n- public TimeValue timeout() {\n+ public TimeValue getTimeout() {\n return this.timeout;\n }\n \n @SuppressWarnings(\"unchecked\")\n- public final Request timeout(TimeValue timeout) {\n+ public final Request setTimeout(TimeValue timeout) {\n this.timeout = timeout;\n return (Request) this;\n }\n \n @SuppressWarnings(\"unchecked\")\n- public final Request timeout(String timeout) {\n+ public final Request setTimeout(String timeout) {\n this.timeout = TimeValue.parseTimeValue(timeout, null, getClass().getSimpleName() + \".timeout\");\n return (Request) this;\n }\n@@ -162,11 +162,11 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n public boolean match(Task task) {\n- if (actions() != null && actions().length > 0 && Regex.simpleMatch(actions(), task.getAction()) == false) {\n+ if (getActions() != null && getActions().length > 0 && Regex.simpleMatch(getActions(), task.getAction()) == false) {\n return false;\n }\n- if (taskId().isSet() == false) {\n- if(taskId().getId() != task.getId()) {\n+ if (getTaskId().isSet() == false) {\n+ if(getTaskId().getId() != task.getId()) {\n return false;\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/support/tasks/BaseTasksRequest.java", "status": "modified" }, { "diff": "@@ -35,19 +35,19 @@ protected TasksRequestBuilder(ElasticsearchClient client, Action<Request, Respon\n \n @SuppressWarnings(\"unchecked\")\n public final RequestBuilder setNodesIds(String... nodesIds) {\n- request.nodesIds(nodesIds);\n+ request.setNodesIds(nodesIds);\n return (RequestBuilder) this;\n }\n \n @SuppressWarnings(\"unchecked\")\n public final RequestBuilder setActions(String... 
actions) {\n- request.actions(actions);\n+ request.setActions(actions);\n return (RequestBuilder) this;\n }\n \n @SuppressWarnings(\"unchecked\")\n public final RequestBuilder setTimeout(TimeValue timeout) {\n- request.timeout(timeout);\n+ request.setTimeout(timeout);\n return (RequestBuilder) this;\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/support/tasks/TasksRequestBuilder.java", "status": "modified" }, { "diff": "@@ -124,25 +124,25 @@ protected String[] filterNodeIds(DiscoveryNodes nodes, String[] nodesIds) {\n }\n \n protected String[] resolveNodes(TasksRequest request, ClusterState clusterState) {\n- if (request.taskId().isSet()) {\n- return clusterState.nodes().resolveNodesIds(request.nodesIds());\n+ if (request.getTaskId().isSet()) {\n+ return clusterState.nodes().resolveNodesIds(request.getNodesIds());\n } else {\n- return new String[]{request.taskId().getNodeId()};\n+ return new String[]{request.getTaskId().getNodeId()};\n }\n }\n \n protected void processTasks(TasksRequest request, Consumer<OperationTask> operation) {\n- if (request.taskId().isSet() == false) {\n+ if (request.getTaskId().isSet() == false) {\n // we are only checking one task, we can optimize it\n- Task task = taskManager.getTask(request.taskId().getId());\n+ Task task = taskManager.getTask(request.getTaskId().getId());\n if (task != null) {\n if (request.match(task)) {\n operation.accept((OperationTask) task);\n } else {\n- throw new ResourceNotFoundException(\"task [{}] doesn't support this operation\", request.taskId());\n+ throw new ResourceNotFoundException(\"task [{}] doesn't support this operation\", request.getTaskId());\n }\n } else {\n- throw new ResourceNotFoundException(\"task [{}] is missing\", request.taskId());\n+ throw new ResourceNotFoundException(\"task [{}] is missing\", request.getTaskId());\n }\n } else {\n for (Task task : taskManager.getTasks().values()) {\n@@ -224,8 +224,8 @@ private void start() {\n }\n } else {\n TransportRequestOptions.Builder builder = TransportRequestOptions.builder();\n- if (request.timeout() != null) {\n- builder.withTimeout(request.timeout());\n+ if (request.getTimeout() != null) {\n+ builder.withTimeout(request.getTimeout());\n }\n builder.withCompress(transportCompress());\n for (int i = 0; i < nodesIds.length; i++) {", "filename": "core/src/main/java/org/elasticsearch/action/support/tasks/TransportTasksAction.java", "status": "modified" }, { "diff": "@@ -52,10 +52,10 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n TaskId parentTaskId = new TaskId(request.param(\"parent_task_id\"));\n \n CancelTasksRequest cancelTasksRequest = new CancelTasksRequest();\n- cancelTasksRequest.taskId(taskId);\n- cancelTasksRequest.nodesIds(nodesIds);\n- cancelTasksRequest.actions(actions);\n- cancelTasksRequest.parentTaskId(parentTaskId);\n+ cancelTasksRequest.setTaskId(taskId);\n+ cancelTasksRequest.setNodesIds(nodesIds);\n+ cancelTasksRequest.setActions(actions);\n+ cancelTasksRequest.setParentTaskId(parentTaskId);\n client.admin().cluster().cancelTasks(cancelTasksRequest, new RestToXContentListener<>(channel));\n }\n }", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/tasks/RestCancelTasksAction.java", "status": "modified" }, { "diff": "@@ -50,13 +50,15 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n TaskId taskId = new TaskId(request.param(\"taskId\"));\n String[] actions = Strings.splitStringByCommaToArray(request.param(\"actions\"));\n TaskId 
parentTaskId = new TaskId(request.param(\"parent_task_id\"));\n+ boolean waitForCompletion = request.paramAsBoolean(\"wait_for_completion\", false);\n \n ListTasksRequest listTasksRequest = new ListTasksRequest();\n- listTasksRequest.taskId(taskId);\n- listTasksRequest.nodesIds(nodesIds);\n- listTasksRequest.detailed(detailed);\n- listTasksRequest.actions(actions);\n- listTasksRequest.parentTaskId(parentTaskId);\n+ listTasksRequest.setTaskId(taskId);\n+ listTasksRequest.setNodesIds(nodesIds);\n+ listTasksRequest.setDetailed(detailed);\n+ listTasksRequest.setActions(actions);\n+ listTasksRequest.setParentTaskId(parentTaskId);\n+ listTasksRequest.setWaitForCompletion(waitForCompletion);\n client.admin().cluster().listTasks(listTasksRequest, new RestToXContentListener<>(channel));\n }\n }", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/tasks/RestListTasksAction.java", "status": "modified" }, { "diff": "@@ -237,8 +237,8 @@ public void onFailure(Throwable e) {\n \n // Cancel main task\n CancelTasksRequest request = new CancelTasksRequest();\n- request.reason(\"Testing Cancellation\");\n- request.taskId(new TaskId(testNodes[0].discoveryNode.getId(), mainTask.getId()));\n+ request.setReason(\"Testing Cancellation\");\n+ request.setTaskId(new TaskId(testNodes[0].discoveryNode.getId(), mainTask.getId()));\n // And send the cancellation request to a random node\n CancelTasksResponse response = testNodes[randomIntBetween(0, testNodes.length - 1)].transportCancelTasksAction.execute(request)\n .get();\n@@ -270,7 +270,7 @@ public void onFailure(Throwable e) {\n \n // Make sure that tasks are no longer running\n ListTasksResponse listTasksResponse = testNodes[randomIntBetween(0, testNodes.length - 1)]\n- .transportListTasksAction.execute(new ListTasksRequest().taskId(\n+ .transportListTasksAction.execute(new ListTasksRequest().setTaskId(\n new TaskId(testNodes[0].discoveryNode.getId(), mainTask.getId()))).get();\n assertEquals(0, listTasksResponse.getTasks().size());\n \n@@ -313,7 +313,7 @@ public void onFailure(Throwable e) {\n \n // Make sure that tasks are running\n ListTasksResponse listTasksResponse = testNodes[randomIntBetween(0, testNodes.length - 1)]\n- .transportListTasksAction.execute(new ListTasksRequest().parentTaskId(new TaskId(mainNode, mainTask.getId()))).get();\n+ .transportListTasksAction.execute(new ListTasksRequest().setParentTaskId(new TaskId(mainNode, mainTask.getId()))).get();\n assertThat(listTasksResponse.getTasks().size(), greaterThanOrEqualTo(blockOnNodes.size()));\n \n // Simulate the coordinating node leaving the cluster\n@@ -331,8 +331,8 @@ public void onFailure(Throwable e) {\n logger.info(\"--> Simulate issuing cancel request on the node that is about to leave the cluster\");\n // Simulate issuing cancel request on the node that is about to leave the cluster\n CancelTasksRequest request = new CancelTasksRequest();\n- request.reason(\"Testing Cancellation\");\n- request.taskId(new TaskId(testNodes[0].discoveryNode.getId(), mainTask.getId()));\n+ request.setReason(\"Testing Cancellation\");\n+ request.setTaskId(new TaskId(testNodes[0].discoveryNode.getId(), mainTask.getId()));\n // And send the cancellation request to a random node\n CancelTasksResponse response = testNodes[0].transportCancelTasksAction.execute(request).get();\n logger.info(\"--> Done simulating issuing cancel request on the node that is about to leave the cluster\");\n@@ -356,7 +356,7 @@ public void onFailure(Throwable e) {\n // Make sure that tasks are no longer running\n 
try {\n ListTasksResponse listTasksResponse1 = testNodes[randomIntBetween(1, testNodes.length - 1)]\n- .transportListTasksAction.execute(new ListTasksRequest().taskId(new TaskId(mainNode, mainTask.getId()))).get();\n+ .transportListTasksAction.execute(new ListTasksRequest().setTaskId(new TaskId(mainNode, mainTask.getId()))).get();\n assertEquals(0, listTasksResponse1.getTasks().size());\n } catch (InterruptedException ex) {\n Thread.currentThread().interrupt();", "filename": "core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/CancellableTasksTests.java", "status": "modified" }, { "diff": "@@ -18,6 +18,8 @@\n */\n package org.elasticsearch.action.admin.cluster.node.tasks;\n \n+import org.elasticsearch.ElasticsearchTimeoutException;\n+import org.elasticsearch.action.FailedNodeException;\n import org.elasticsearch.action.ListenableActionFuture;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthAction;\n import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksResponse;\n@@ -54,8 +56,10 @@\n import java.util.concurrent.locks.ReentrantLock;\n import java.util.function.Function;\n \n+import static org.elasticsearch.common.unit.TimeValue.timeValueMillis;\n import static org.hamcrest.Matchers.emptyCollectionOf;\n import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n+import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.lessThanOrEqualTo;\n import static org.hamcrest.Matchers.not;\n \n@@ -327,6 +331,77 @@ public void testTasksUnblocking() throws Exception {\n assertEquals(0, client().admin().cluster().prepareListTasks().setActions(TestTaskPlugin.TestTaskAction.NAME + \"[n]\").get().getTasks().size());\n }\n \n+ public void testTasksListWaitForCompletion() throws Exception {\n+ // Start blocking test task\n+ ListenableActionFuture<TestTaskPlugin.NodesResponse> future = TestTaskPlugin.TestTaskAction.INSTANCE.newRequestBuilder(client())\n+ .execute();\n+\n+ ListenableActionFuture<ListTasksResponse> waitResponseFuture;\n+ try {\n+ // Wait for the task to start on all nodes\n+ assertBusy(() -> assertEquals(internalCluster().numDataAndMasterNodes(),\n+ client().admin().cluster().prepareListTasks().setActions(TestTaskPlugin.TestTaskAction.NAME + \"[n]\").get().getTasks().size()));\n+\n+ // Spin up a request to wait for that task to finish\n+ waitResponseFuture = client().admin().cluster().prepareListTasks()\n+ .setActions(TestTaskPlugin.TestTaskAction.NAME + \"[n]\").setWaitForCompletion(true).execute();\n+ } finally {\n+ // Unblock the request so the wait for completion request can finish\n+ TestTaskPlugin.UnblockTestTasksAction.INSTANCE.newRequestBuilder(client()).get();\n+ }\n+\n+ // Now that the task is unblocked the list response will come back\n+ ListTasksResponse waitResponse = waitResponseFuture.get();\n+ // If any tasks come back then they are the tasks we asked for - it'd be super weird if this wasn't true\n+ for (TaskInfo task: waitResponse.getTasks()) {\n+ assertEquals(task.getAction(), TestTaskPlugin.TestTaskAction.NAME + \"[n]\");\n+ }\n+ // See the next test to cover the timeout case\n+\n+ future.get();\n+ }\n+\n+ public void testTasksListWaitForTimeout() throws Exception {\n+ // Start blocking test task\n+ ListenableActionFuture<TestTaskPlugin.NodesResponse> future = TestTaskPlugin.TestTaskAction.INSTANCE.newRequestBuilder(client())\n+ .execute();\n+ try {\n+ // Wait for the task to start on all nodes\n+ assertBusy(() -> assertEquals(internalCluster().numDataAndMasterNodes(),\n+ 
client().admin().cluster().prepareListTasks().setActions(TestTaskPlugin.TestTaskAction.NAME + \"[n]\").get().getTasks().size()));\n+\n+ // Spin up a request that should wait for those tasks to finish\n+ // It will timeout because we haven't unblocked the tasks\n+ ListTasksResponse waitResponse = client().admin().cluster().prepareListTasks()\n+ .setActions(TestTaskPlugin.TestTaskAction.NAME + \"[n]\").setWaitForCompletion(true).setTimeout(timeValueMillis(100))\n+ .get();\n+\n+ assertFalse(waitResponse.getNodeFailures().isEmpty());\n+ for (FailedNodeException failure : waitResponse.getNodeFailures()) {\n+ Throwable timeoutException = failure.getCause();\n+ // The exception sometimes comes back wrapped depending on the client\n+ if (timeoutException.getCause() != null) {\n+ timeoutException = timeoutException.getCause();\n+ }\n+ assertThat(failure.getCause().getCause(), instanceOf(ElasticsearchTimeoutException.class));\n+ }\n+ } finally {\n+ // Now we can unblock those requests\n+ TestTaskPlugin.UnblockTestTasksAction.INSTANCE.newRequestBuilder(client()).get();\n+ }\n+ future.get();\n+ }\n+\n+ public void testTasksListWaitForNoTask() throws Exception {\n+ // Spin up a request to wait for no matching tasks\n+ ListenableActionFuture<ListTasksResponse> waitResponseFuture = client().admin().cluster().prepareListTasks()\n+ .setActions(TestTaskPlugin.TestTaskAction.NAME + \"[n]\").setWaitForCompletion(true).setTimeout(timeValueMillis(10))\n+ .execute();\n+\n+ // It should finish quickly and without complaint\n+ assertThat(waitResponseFuture.get().getTasks(), emptyCollectionOf(TaskInfo.class));\n+ }\n+\n @Override\n public void tearDown() throws Exception {\n for (Map.Entry<Tuple<String, String>, RecordingTaskManagerListener> entry : listeners.entrySet()) {", "filename": "core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TasksIT.java", "status": "modified" }, { "diff": "@@ -345,7 +345,10 @@ public UnblockTestTaskResponse readFrom(StreamInput in) throws IOException {\n \n \n public static class UnblockTestTasksRequest extends BaseTasksRequest<UnblockTestTasksRequest> {\n-\n+ @Override\n+ public boolean match(Task task) {\n+ return task instanceof TestTask && super.match(task);\n+ }\n }\n \n public static class UnblockTestTasksResponse extends BaseTasksResponse {", "filename": "core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TestTaskPlugin.java", "status": "modified" }, { "diff": "@@ -355,7 +355,7 @@ public void onFailure(Throwable e) {\n int testNodeNum = randomIntBetween(0, testNodes.length - 1);\n TestNode testNode = testNodes[testNodeNum];\n ListTasksRequest listTasksRequest = new ListTasksRequest();\n- listTasksRequest.actions(\"testAction*\"); // pick all test actions\n+ listTasksRequest.setActions(\"testAction*\"); // pick all test actions\n logger.info(\"Listing currently running tasks using node [{}]\", testNodeNum);\n ListTasksResponse response = testNode.transportListTasksAction.execute(listTasksRequest).get();\n logger.info(\"Checking currently running tasks\");\n@@ -371,7 +371,7 @@ public void onFailure(Throwable e) {\n // Check task counts using transport with filtering\n testNode = testNodes[randomIntBetween(0, testNodes.length - 1)];\n listTasksRequest = new ListTasksRequest();\n- listTasksRequest.actions(\"testAction[n]\"); // only pick node actions\n+ listTasksRequest.setActions(\"testAction[n]\"); // only pick node actions\n response = testNode.transportListTasksAction.execute(listTasksRequest).get();\n assertEquals(testNodes.length, 
response.getPerNodeTasks().size());\n for (Map.Entry<DiscoveryNode, List<TaskInfo>> entry : response.getPerNodeTasks().entrySet()) {\n@@ -380,7 +380,7 @@ public void onFailure(Throwable e) {\n }\n \n // Check task counts using transport with detailed description\n- listTasksRequest.detailed(true); // same request only with detailed description\n+ listTasksRequest.setDetailed(true); // same request only with detailed description\n response = testNode.transportListTasksAction.execute(listTasksRequest).get();\n assertEquals(testNodes.length, response.getPerNodeTasks().size());\n for (Map.Entry<DiscoveryNode, List<TaskInfo>> entry : response.getPerNodeTasks().entrySet()) {\n@@ -389,7 +389,7 @@ public void onFailure(Throwable e) {\n }\n \n // Make sure that the main task on coordinating node is the task that was returned to us by execute()\n- listTasksRequest.actions(\"testAction\"); // only pick the main task\n+ listTasksRequest.setActions(\"testAction\"); // only pick the main task\n response = testNode.transportListTasksAction.execute(listTasksRequest).get();\n assertEquals(1, response.getTasks().size());\n assertEquals(mainTask.getId(), response.getTasks().get(0).getId());\n@@ -417,15 +417,15 @@ public void testFindChildTasks() throws Exception {\n \n // Get the parent task\n ListTasksRequest listTasksRequest = new ListTasksRequest();\n- listTasksRequest.actions(\"testAction\");\n+ listTasksRequest.setActions(\"testAction\");\n ListTasksResponse response = testNode.transportListTasksAction.execute(listTasksRequest).get();\n assertEquals(1, response.getTasks().size());\n String parentNode = response.getTasks().get(0).getNode().getId();\n long parentTaskId = response.getTasks().get(0).getId();\n \n // Find tasks with common parent\n listTasksRequest = new ListTasksRequest();\n- listTasksRequest.parentTaskId(new TaskId(parentNode, parentTaskId));\n+ listTasksRequest.setParentTaskId(new TaskId(parentNode, parentTaskId));\n response = testNode.transportListTasksAction.execute(listTasksRequest).get();\n assertEquals(testNodes.length, response.getTasks().size());\n for (TaskInfo task : response.getTasks()) {\n@@ -451,7 +451,7 @@ public void testTaskManagementOptOut() throws Exception {\n \n // Get the parent task\n ListTasksRequest listTasksRequest = new ListTasksRequest();\n- listTasksRequest.actions(\"testAction*\");\n+ listTasksRequest.setActions(\"testAction*\");\n ListTasksResponse response = testNode.transportListTasksAction.execute(listTasksRequest).get();\n assertEquals(0, response.getTasks().size());\n \n@@ -472,7 +472,7 @@ public void testTasksDescriptions() throws Exception {\n // Check task counts using transport with filtering\n TestNode testNode = testNodes[randomIntBetween(0, testNodes.length - 1)];\n ListTasksRequest listTasksRequest = new ListTasksRequest();\n- listTasksRequest.actions(\"testAction[n]\"); // only pick node actions\n+ listTasksRequest.setActions(\"testAction[n]\"); // only pick node actions\n ListTasksResponse response = testNode.transportListTasksAction.execute(listTasksRequest).get();\n assertEquals(testNodes.length, response.getPerNodeTasks().size());\n for (Map.Entry<DiscoveryNode, List<TaskInfo>> entry : response.getPerNodeTasks().entrySet()) {\n@@ -482,7 +482,7 @@ public void testTasksDescriptions() throws Exception {\n \n // Check task counts using transport with detailed description\n long minimalDurationNanos = System.nanoTime() - maximumStartTimeNanos;\n- listTasksRequest.detailed(true); // same request only with detailed description\n+ 
listTasksRequest.setDetailed(true); // same request only with detailed description\n response = testNode.transportListTasksAction.execute(listTasksRequest).get();\n assertEquals(testNodes.length, response.getPerNodeTasks().size());\n for (Map.Entry<DiscoveryNode, List<TaskInfo>> entry : response.getPerNodeTasks().entrySet()) {\n@@ -518,9 +518,9 @@ public void onFailure(Throwable e) {\n \n // Try to cancel main task using action name\n CancelTasksRequest request = new CancelTasksRequest();\n- request.nodesIds(testNodes[0].discoveryNode.getId());\n- request.reason(\"Testing Cancellation\");\n- request.actions(actionName);\n+ request.setNodesIds(testNodes[0].discoveryNode.getId());\n+ request.setReason(\"Testing Cancellation\");\n+ request.setActions(actionName);\n CancelTasksResponse response = testNodes[randomIntBetween(0, testNodes.length - 1)].transportCancelTasksAction.execute(request)\n .get();\n \n@@ -532,8 +532,8 @@ public void onFailure(Throwable e) {\n \n // Try to cancel main task using id\n request = new CancelTasksRequest();\n- request.reason(\"Testing Cancellation\");\n- request.taskId(new TaskId(testNodes[0].discoveryNode.getId(), task.getId()));\n+ request.setReason(\"Testing Cancellation\");\n+ request.setTaskId(new TaskId(testNodes[0].discoveryNode.getId(), task.getId()));\n response = testNodes[randomIntBetween(0, testNodes.length - 1)].transportCancelTasksAction.execute(request).get();\n \n // Shouldn't match any tasks since testAction doesn't support cancellation\n@@ -544,7 +544,7 @@ public void onFailure(Throwable e) {\n \n // Make sure that task is still running\n ListTasksRequest listTasksRequest = new ListTasksRequest();\n- listTasksRequest.actions(actionName);\n+ listTasksRequest.setActions(actionName);\n ListTasksResponse listResponse = testNodes[randomIntBetween(0, testNodes.length - 1)].transportListTasksAction.execute\n (listTasksRequest).get();\n assertEquals(1, listResponse.getPerNodeTasks().size());\n@@ -617,7 +617,7 @@ protected TestTaskResponse taskOperation(TestTasksRequest request, Task task) {\n // Run task action on node tasks that are currently running\n // should be successful on all nodes except one\n TestTasksRequest testTasksRequest = new TestTasksRequest();\n- testTasksRequest.actions(\"testAction[n]\"); // pick all test actions\n+ testTasksRequest.setActions(\"testAction[n]\"); // pick all test actions\n TestTasksResponse response = tasksActions[0].execute(testTasksRequest).get();\n // Get successful responses from all nodes except one\n assertEquals(testNodes.length - 1, response.tasks.size());", "filename": "core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TransportTasksActionTests.java", "status": "modified" }, { "diff": "@@ -58,9 +58,6 @@\n \n ---\n \"wait_for_completion=false\":\n- - skip:\n- version: \"0.0.0 - \"\n- reason: breaks other tests by leaving a running reindex behind\n - do:\n index:\n index: source\n@@ -79,6 +76,7 @@\n dest:\n index: dest\n - match: {task: '/.+:\\d+/'}\n+ - set: {task: task}\n - is_false: updated\n - is_false: version_conflicts\n - is_false: batches\n@@ -87,6 +85,11 @@\n - is_false: took\n - is_false: created\n \n+ - do:\n+ tasks.list:\n+ wait_for_completion: true\n+ task_id: $task\n+\n ---\n \"Response format for version conflict\":\n - do:", "filename": "modules/reindex/src/test/resources/rest-api-spec/test/reindex/10_basic.yaml", "status": "modified" }, { "diff": "@@ -37,6 +37,7 @@\n wait_for_completion: false\n index: test\n - match: {task: '/.+:\\d+/'}\n+ - set: {task: task}\n - is_false: 
updated\n - is_false: version_conflicts\n - is_false: batches\n@@ -45,6 +46,11 @@\n - is_false: took\n - is_false: created\n \n+ - do:\n+ tasks.list:\n+ wait_for_completion: true\n+ task_id: $task\n+\n ---\n \"Response for version conflict\":\n - do:", "filename": "modules/reindex/src/test/resources/rest-api-spec/test/update-by-query/10_basic.yaml", "status": "modified" }, { "diff": "@@ -31,6 +31,10 @@\n \"parent_task\": {\n \"type\" : \"number\",\n \"description\" : \"Return tasks with specified parent task id. Set to -1 to return all.\"\n+ },\n+ \"wait_for_completion\": {\n+ \"type\": \"boolean\",\n+ \"description\": \"Wait for the matching tasks to complete (default: false)\"\n }\n }\n },", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/tasks.list.json", "status": "modified" }, { "diff": "@@ -19,14 +19,34 @@\n \n package org.elasticsearch.test.rest;\n \n-import com.carrotsearch.randomizedtesting.RandomizedTest;\n+import java.io.IOException;\n+import java.io.InputStream;\n+import java.net.URI;\n+import java.net.URISyntaxException;\n+import java.net.URL;\n+import java.nio.file.FileSystem;\n+import java.nio.file.FileSystems;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.nio.file.StandardCopyOption;\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.Comparator;\n+import java.util.HashMap;\n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.Map;\n+import java.util.Set;\n+\n import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksAction;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.test.rest.client.RestException;\n+import org.elasticsearch.test.rest.client.RestResponse;\n import org.elasticsearch.test.rest.parser.RestTestParseException;\n import org.elasticsearch.test.rest.parser.RestTestSuiteParser;\n import org.elasticsearch.test.rest.section.DoSection;\n@@ -42,24 +62,11 @@\n import org.junit.Before;\n import org.junit.BeforeClass;\n \n-import java.io.IOException;\n-import java.io.InputStream;\n-import java.net.InetSocketAddress;\n-import java.net.URI;\n-import java.net.URISyntaxException;\n-import java.net.URL;\n-import java.nio.file.FileSystem;\n-import java.nio.file.FileSystems;\n-import java.nio.file.Files;\n-import java.nio.file.Path;\n-import java.nio.file.StandardCopyOption;\n-import java.util.ArrayList;\n-import java.util.Collections;\n-import java.util.Comparator;\n-import java.util.HashMap;\n-import java.util.List;\n-import java.util.Map;\n-import java.util.Set;\n+import com.carrotsearch.randomizedtesting.RandomizedTest;\n+\n+import static java.util.Collections.emptyList;\n+import static java.util.Collections.emptyMap;\n+import static java.util.Collections.sort;\n \n /**\n * Runs the clients test suite against an elasticsearch cluster.\n@@ -261,7 +268,6 @@ private static void validateSpec(RestSpec restSpec) {\n \n @After\n public void wipeCluster() throws Exception {\n-\n // wipe indices\n Map<String, String> deleteIndicesArgs = new HashMap<>();\n deleteIndicesArgs.put(\"index\", \"*\");\n@@ -285,6 +291,30 @@ public void wipeCluster() throws Exception {\n adminExecutionContext.callApi(\"snapshot.delete_repository\", deleteSnapshotsArgs, Collections.emptyList(), Collections.emptyMap());\n }\n \n+ /**\n+ * 
Logs a message if there are still running tasks. The reasoning is that any tasks still running are state the is trying to bleed into\n+ * other tests.\n+ */\n+ @After\n+ public void logIfThereAreRunningTasks() throws InterruptedException, IOException, RestException {\n+ RestResponse tasks = adminExecutionContext.callApi(\"tasks.list\", emptyMap(), emptyList(), emptyMap());\n+ Set<String> runningTasks = runningTasks(tasks);\n+ // Ignore the task list API - it doens't count against us\n+ runningTasks.remove(ListTasksAction.NAME);\n+ runningTasks.remove(ListTasksAction.NAME + \"[n]\");\n+ if (runningTasks.isEmpty()) {\n+ return;\n+ }\n+ List<String> stillRunning = new ArrayList<>(runningTasks);\n+ sort(stillRunning);\n+ logger.info(\"There are still tasks running after this test that might break subsequent tests {}.\", stillRunning);\n+ /*\n+ * This isn't a higher level log or outright failure because some of these tasks are run by the cluster in the background. If we\n+ * could determine that some tasks are run by the user we'd fail the tests if those tasks were running and ignore any background\n+ * tasks.\n+ */\n+ }\n+\n @AfterClass\n public static void close() {\n if (restTestExecutionContext != null) {\n@@ -365,4 +395,19 @@ public void test() throws IOException {\n executableSection.execute(restTestExecutionContext);\n }\n }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ public Set<String> runningTasks(RestResponse response) throws IOException {\n+ Set<String> runningTasks = new HashSet<>();\n+ Map<String, Object> nodes = (Map<String, Object>) response.evaluate(\"nodes\");\n+ for (Map.Entry<String, Object> node : nodes.entrySet()) {\n+ Map<String, Object> nodeInfo = (Map<String, Object>) node.getValue();\n+ Map<String, Object> nodeTasks = (Map<String, Object>) nodeInfo.get(\"tasks\");\n+ for (Map.Entry<String, Object> taskAndName : nodeTasks.entrySet()) {\n+ Map<String, Object> task = (Map<String, Object>) taskAndName.getValue();\n+ runningTasks.add(task.get(\"action\").toString());\n+ }\n+ }\n+ return runningTasks;\n+ }\n }", "filename": "test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java", "status": "modified" }, { "diff": "@@ -114,9 +114,10 @@ public HttpRequestBuilder pathParts(String... path) {\n for (String pathPart : path) {\n try {\n finalPath.append('/');\n- URI uri = new URI(null, null, null, -1, pathPart, null, null);\n+ // We append \"/\" to the path part to handle parts that start with - or other invalid characters\n+ URI uri = new URI(null, null, null, -1, \"/\" + pathPart, null, null);\n //manually escape any slash that each part may contain\n- finalPath.append(uri.getRawPath().replaceAll(\"/\", \"%2F\"));\n+ finalPath.append(uri.getRawPath().substring(1).replaceAll(\"/\", \"%2F\"));\n } catch(URISyntaxException e) {\n throw new RuntimeException(\"unable to build uri\", e);\n }", "filename": "test/framework/src/main/java/org/elasticsearch/test/rest/client/http/HttpRequestBuilder.java", "status": "modified" } ] }
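To tie the diff above together, here is a hedged sketch of a Java-API caller using the new flag, mirroring the TasksIT tests. The reindex action name used as a filter is an assumption for illustration; the builder methods, the 30-second default timeout, and the timeout-becomes-a-node-failure behaviour come from the diff itself.

```java
// Wait for all currently running tasks that match an action filter to finish.
import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.TimeValue;

public class WaitForTasksSketch {
    public static ListTasksResponse waitForMatchingTasks(Client client) {
        return client.admin().cluster().prepareListTasks()
                .setActions("indices:data/write/reindex")    // hypothetical action filter
                .setWaitForCompletion(true)                  // block until the matching tasks finish
                .setTimeout(TimeValue.timeValueMinutes(1))   // without this, a 30s default applies
                .get();                                      // per-node timeouts surface as node failures
    }
}
```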
{ "body": "This commit adds a missing permission and a simple test that\nensures we discover other nodes via a mock http endpoint.\n\nCloses #16485 \n\n@rjernst I had to make this integ test a ordinary testcase since it runs against an internal cluster. I didn't explore the test fixture path yet since it also requires a test plugin etc. I wonder if we can make this one test run against the internal cluster while others runs against external clusters? I am not super happy with the test but it's a major step forward since it really tests the integration of this plugins while everything else isn't really testing it. It also would have caught the security issue though\n", "comments": [ { "body": "Interesting. I like the idea of using an HTTP server to serve the mock responses.\nThanks @s1monw for fixing this issue.\n\nLGTM\n", "created_at": "2016-02-29T14:54:33Z" }, { "body": "This looks ok to me. It is good to have some kind of test for GCE. I'm apprehensive about checking in the keyfile, but I guess it is just for a test...\n\nAt another time, I am happy to look at moving this to a fixture, as well as generating the keyfile, but this PR at least gets us into a better state than we are now.\n", "created_at": "2016-02-29T16:38:08Z" }, { "body": "+1\n", "created_at": "2016-02-29T16:52:52Z" }, { "body": "I have the same apprehension as @rjernst but given \n\n> At another time, I am happy to look at moving this to a fixture, as well as generating the keyhole\n\nI'm good with the change too. Thanks for picking this up @s1monw.\n", "created_at": "2016-02-29T16:57:58Z" }, { "body": "@rjernst @jasontedor I pushed a new commit generating the keystore from gradle\n", "created_at": "2016-02-29T21:03:21Z" }, { "body": "Looks good, thanks!\n", "created_at": "2016-02-29T21:22:12Z" }, { "body": "LGTM.\n", "created_at": "2016-02-29T22:34:23Z" } ], "number": 16860, "title": "Add setFactory permission to GceDiscoveryPlugin" }
{ "body": "This commit adds a missing permission and a simple test that\nensures we discover other nodes via a mock http endpoint.\n\nBackport of #16860\nRelates to #16485\n", "number": 16881, "review_comments": [], "title": "Backport add setFactory permission to GceDiscoveryPlugin" }
{ "commits": [ { "message": "Backort add setFactory permission to GceDiscoveryPlugin\n\nThis commit adds a missing permission and a simple test that\nensures we discover other nodes via a mock http endpoint.\n\nBackport of #16860\nRelates to #16485" } ], "files": [ { "diff": "@@ -40,9 +40,12 @@\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.network.NetworkAddress;\n import org.elasticsearch.common.network.NetworkModule;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.settings.SettingsModule;\n+import org.elasticsearch.common.transport.BoundTransportAddress;\n+import org.elasticsearch.common.transport.TransportAddress;\n import org.elasticsearch.discovery.Discovery;\n import org.elasticsearch.discovery.DiscoveryModule;\n import org.elasticsearch.discovery.DiscoveryService;\n@@ -55,6 +58,7 @@\n import org.elasticsearch.gateway.GatewayService;\n import org.elasticsearch.http.HttpServer;\n import org.elasticsearch.http.HttpServerModule;\n+import org.elasticsearch.http.HttpServerTransport;\n import org.elasticsearch.index.search.shape.ShapeModule;\n import org.elasticsearch.indices.IndicesModule;\n import org.elasticsearch.indices.IndicesService;\n@@ -93,7 +97,15 @@\n import org.elasticsearch.watcher.ResourceWatcherModule;\n import org.elasticsearch.watcher.ResourceWatcherService;\n \n+import java.io.BufferedWriter;\n import java.io.IOException;\n+import java.net.Inet6Address;\n+import java.net.InetAddress;\n+import java.net.InetSocketAddress;\n+import java.nio.charset.Charset;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.nio.file.StandardCopyOption;\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.Collections;\n@@ -275,7 +287,14 @@ public Node start() {\n injector.getInstance(HttpServer.class).start();\n }\n injector.getInstance(TribeService.class).start();\n-\n+ if (settings.getAsBoolean(\"node.portsfile\", false)) {\n+ if (settings.getAsBoolean(\"http.enabled\", true)) {\n+ HttpServerTransport http = injector.getInstance(HttpServerTransport.class);\n+ writePortsFile(\"http\", http.boundAddress());\n+ }\n+ TransportService transport = injector.getInstance(TransportService.class);\n+ writePortsFile(\"transport\", transport.boundAddress());\n+ }\n logger.info(\"started\");\n \n return this;\n@@ -426,4 +445,27 @@ public boolean isClosed() {\n public Injector injector() {\n return this.injector;\n }\n+\n+ /** Writes a file to the logs dir containing the ports for the given transport type */\n+ private void writePortsFile(String type, BoundTransportAddress boundAddress) {\n+ Path tmpPortsFile = environment.logsFile().resolve(type + \".ports.tmp\");\n+ try (BufferedWriter writer = Files.newBufferedWriter(tmpPortsFile, Charset.forName(\"UTF-8\"))) {\n+ for (TransportAddress address : boundAddress.boundAddresses()) {\n+ InetAddress inetAddress = InetAddress.getByName(address.getAddress());\n+ if (inetAddress instanceof Inet6Address && inetAddress.isLinkLocalAddress()) {\n+ // no link local, just causes problems\n+ continue;\n+ }\n+ writer.write(NetworkAddress.formatAddress(new InetSocketAddress(inetAddress, address.getPort())) + \"\\n\");\n+ }\n+ } catch (IOException e) {\n+ throw new RuntimeException(\"Failed to write ports file\", e);\n+ }\n+ Path portsFile = environment.logsFile().resolve(type + \".ports\");\n+ try {\n+ Files.move(tmpPortsFile, portsFile, 
StandardCopyOption.ATOMIC_MOVE);\n+ } catch (IOException e) {\n+ throw new RuntimeException(\"Failed to rename ports file\", e);\n+ }\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/node/Node.java", "status": "modified" }, { "diff": "@@ -31,6 +31,8 @@ governing permissions and limitations under the License. -->\n <!-- currently has no unit tests -->\n <tests.rest.suite>cloud_gce</tests.rest.suite>\n <tests.rest.load_packaged>false</tests.rest.load_packaged>\n+ <ssl.antfile>${project.basedir}/ssl-setup.xml</ssl.antfile>\n+ <keystore>${project.build.testOutputDirectory}/test-node.jks</keystore>\n </properties>\n \n <dependencies>\n@@ -61,6 +63,30 @@ governing permissions and limitations under the License. -->\n <groupId>org.apache.maven.plugins</groupId>\n <artifactId>maven-assembly-plugin</artifactId>\n </plugin>\n+ <plugin>\n+ <groupId>org.apache.maven.plugins</groupId>\n+ <artifactId>maven-antrun-plugin</artifactId>\n+ <executions>\n+ <!-- generate certificates/keys -->\n+ <execution>\n+ <id>certificate-setup</id>\n+ <phase>generate-test-resources</phase>\n+ <goals>\n+ <goal>run</goal>\n+ </goals>\n+ <configuration>\n+ <target>\n+ <delete file=\"${keystore}\" quiet=\"true\"/> <!-- must clean it up first -->\n+ <mkdir dir=\"${project.build.testOutputDirectory}\"/> <!-- the directory must exist first -->\n+ <exec executable=\"keytool\" failonerror=\"true\">\n+ <arg line=\"-genkey -keyalg RSA -alias selfsigned -keystore ${keystore} -storepass keypass -keypass keypass -validity 712 -keysize 2048 -dname CN=127.0.0.1\"/>\n+ </exec>\n+ </target>\n+ <skip>${skip.integ.tests}</skip>\n+ </configuration>\n+ </execution>\n+ </executions>\n+ </plugin>\n </plugins>\n </build>\n ", "filename": "plugins/cloud-gce/pom.xml", "status": "modified" }, { "diff": "@@ -25,6 +25,7 @@\n import com.google.api.client.http.HttpHeaders;\n import com.google.api.client.http.HttpResponse;\n import com.google.api.client.http.HttpTransport;\n+import com.google.api.client.http.javanet.NetHttpTransport;\n import com.google.api.client.json.JsonFactory;\n import com.google.api.client.json.jackson2.JacksonFactory;\n import com.google.api.services.compute.Compute;\n@@ -70,8 +71,15 @@ public class GceComputeServiceImpl extends AbstractLifecycleComponent<GceCompute\n // Forcing Google Token API URL as set in GCE SDK to\n // http://metadata/computeMetadata/v1/instance/service-accounts/default/token\n // See https://developers.google.com/compute/docs/metadata#metadataserver\n- public static final String GCE_METADATA_URL = \"http://metadata.google.internal/computeMetadata/v1/instance\";\n- public static final String TOKEN_SERVER_ENCODED_URL = GCE_METADATA_URL + \"/service-accounts/default/token\";\n+ // all settings just used for testing - not registered by default\n+ private static final String DEFAULT_GCE_HOST = \"http://metadata.google.internal\";\n+ private static final String DEFAULT_GCE_ROOT_URL = \"https://www.googleapis.com\";\n+\n+ private final String gceHost;\n+ private final String metaDataUrl;\n+ private final String tokenServerEncodedUrl;\n+ private final String gceRootUrl;\n+ private final Boolean validateCerts;\n \n @Override\n public Collection<Instance> instances() {\n@@ -120,7 +128,7 @@ public InstanceList run() throws Exception {\n \n @Override\n public String metadata(String metadataPath) throws IOException {\n- String urlMetadataNetwork = GCE_METADATA_URL + \"/\" + metadataPath;\n+ String urlMetadataNetwork = this.metaDataUrl + \"/\" + metadataPath;\n logger.debug(\"get metadata from [{}]\", 
urlMetadataNetwork);\n URL url = new URL(urlMetadataNetwork);\n HttpHeaders headers;\n@@ -166,11 +174,22 @@ public GceComputeServiceImpl(Settings settings, NetworkService networkService) {\n String[] zoneList = settings.getAsArray(Fields.ZONE);\n this.zones = Arrays.asList(zoneList);\n networkService.addCustomNameResolver(new GceNameResolver(settings, this));\n+\n+ this.gceHost = settings.get(\"cloud.gce.host\", DEFAULT_GCE_HOST);\n+ this.metaDataUrl = gceHost + \"/computeMetadata/v1/instance\";\n+ this.gceRootUrl = settings.get(\"cloud.gce.root_url\", DEFAULT_GCE_ROOT_URL);\n+ this.tokenServerEncodedUrl = metaDataUrl + \"/service-accounts/default/token\";\n+ this.validateCerts = settings.getAsBoolean(\"cloud.gce.validate_certificates\", true);\n }\n \n protected synchronized HttpTransport getGceHttpTransport() throws GeneralSecurityException, IOException {\n if (gceHttpTransport == null) {\n- gceHttpTransport = GoogleNetHttpTransport.newTrustedTransport();\n+ if (validateCerts) {\n+ gceHttpTransport = GoogleNetHttpTransport.newTrustedTransport();\n+ } else {\n+ // this is only used for testing - alternative we could use the defaul keystore but this requires special configs too..\n+ gceHttpTransport = new NetHttpTransport.Builder().doNotValidateCertificate().build();\n+ }\n }\n return gceHttpTransport;\n }\n@@ -190,7 +209,7 @@ public synchronized Compute client() {\n \n logger.info(\"starting GCE discovery service\");\n final ComputeCredential credential = new ComputeCredential.Builder(getGceHttpTransport(), gceJsonFactory)\n- .setTokenServerEncodedUrl(TOKEN_SERVER_ENCODED_URL)\n+ .setTokenServerEncodedUrl(this.tokenServerEncodedUrl)\n .build();\n \n // hack around code messiness in GCE code\n@@ -212,9 +231,9 @@ public Void run() throws IOException {\n refreshInterval = TimeValue.timeValueSeconds(credential.getExpiresInSeconds()-1);\n }\n \n- boolean ifRetry = settings.getAsBoolean(Fields.RETRY, true);\n- Compute.Builder builder = new Compute.Builder(getGceHttpTransport(), gceJsonFactory, null)\n- .setApplicationName(Fields.VERSION);\n+ final boolean ifRetry = settings.getAsBoolean(Fields.RETRY, true);\n+ final Compute.Builder builder = new Compute.Builder(getGceHttpTransport(), gceJsonFactory, null)\n+ .setApplicationName(Fields.VERSION).setRootUrl(gceRootUrl);;\n \n if (ifRetry) {\n int maxWait = settings.getAsInt(Fields.MAXWAIT, -1);", "filename": "plugins/cloud-gce/src/main/java/org/elasticsearch/cloud/gce/GceComputeServiceImpl.java", "status": "modified" }, { "diff": "@@ -20,4 +20,5 @@\n grant {\n // needed because of problems in gce\n permission java.lang.reflect.ReflectPermission \"suppressAccessChecks\";\n+ permission java.lang.RuntimePermission \"setFactory\";\n };", "filename": "plugins/cloud-gce/src/main/plugin-metadata/plugin-security.policy", "status": "modified" }, { "diff": "@@ -44,6 +44,8 @@\n public class GceComputeServiceMock extends GceComputeServiceImpl {\n \n protected HttpTransport mockHttpTransport;\n+ private static final String GCE_METADATA_URL = \"http://metadata.google.internal/computeMetadata/v1/instance\";\n+\n \n public GceComputeServiceMock(Settings settings, NetworkService networkService) {\n super(settings, networkService);\n@@ -80,19 +82,18 @@ public LowLevelHttpResponse execute() throws IOException {\n };\n }\n \n- private String readGoogleInternalJsonResponse(String url) throws IOException {\n+ public static String readGoogleInternalJsonResponse(String url) throws IOException {\n return readJsonResponse(url, \"http://metadata.google.internal/\");\n }\n 
\n- private String readGoogleApiJsonResponse(String url) throws IOException {\n+ public static String readGoogleApiJsonResponse(String url) throws IOException {\n return readJsonResponse(url, \"https://www.googleapis.com/\");\n }\n \n- private String readJsonResponse(String url, String urlRoot) throws IOException {\n+ public static String readJsonResponse(String url, String urlRoot) throws IOException {\n // We extract from the url the mock file path we want to use\n String mockFileName = Strings.replace(url, urlRoot, \"\");\n \n- logger.debug(\"--> read mock file from [{}]\", mockFileName);\n URL resource = GceComputeServiceMock.class.getResource(mockFileName);\n if (resource == null) {\n throw new IOException(\"can't read [\" + url + \"] in src/test/resources/org/elasticsearch/discovery/gce\");\n@@ -106,7 +107,6 @@ public void handle(String s) {\n }\n });\n String response = sb.toString();\n- logger.trace(\"{}\", response);\n return response;\n }\n }", "filename": "plugins/cloud-gce/src/test/java/org/elasticsearch/discovery/gce/GceComputeServiceMock.java", "status": "modified" }, { "diff": "@@ -0,0 +1,209 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.discovery.gce;\n+\n+import com.sun.net.httpserver.Headers;\n+import com.sun.net.httpserver.HttpExchange;\n+import com.sun.net.httpserver.HttpHandler;\n+import com.sun.net.httpserver.HttpServer;\n+import com.sun.net.httpserver.HttpsConfigurator;\n+import com.sun.net.httpserver.HttpsServer;\n+import org.elasticsearch.common.SuppressForbidden;\n+import org.elasticsearch.common.io.FileSystemUtils;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.plugin.cloud.gce.CloudGcePlugin;\n+import org.elasticsearch.plugins.Plugin;\n+import org.elasticsearch.test.ESIntegTestCase;\n+import org.junit.AfterClass;\n+import org.junit.BeforeClass;\n+\n+import javax.net.ssl.KeyManagerFactory;\n+import javax.net.ssl.SSLContext;\n+import javax.net.ssl.TrustManagerFactory;\n+import java.io.IOException;\n+import java.io.InputStream;\n+import java.io.OutputStream;\n+import java.net.InetAddress;\n+import java.net.InetSocketAddress;\n+import java.nio.charset.StandardCharsets;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.security.KeyStore;\n+import java.util.Collection;\n+import java.util.Collections;\n+import java.util.List;\n+import java.util.concurrent.ExecutionException;\n+\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoTimeout;\n+\n+\n+@ESIntegTestCase.SuppressLocalMode\n+@ESIntegTestCase.ClusterScope(numDataNodes = 2, numClientNodes = 0)\n+@SuppressForbidden(reason = \"use http server\")\n+// TODO this should be a IT but currently all ITs in this project run against a real cluster\n+public class GceDiscoverTests extends ESIntegTestCase {\n+ private static HttpsServer httpsServer;\n+ private static HttpServer httpServer;\n+ private static Path logDir;\n+\n+ @Override\n+ protected Collection<Class<? 
extends Plugin>> nodePlugins() {\n+ return pluginList(CloudGcePlugin.class);\n+ }\n+\n+ @Override\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ Path resolve = logDir.resolve(Integer.toString(nodeOrdinal));\n+ try {\n+ Files.createDirectory(resolve);\n+ } catch (IOException e) {\n+ throw new RuntimeException(e);\n+ }\n+ return Settings.builder().put(super.nodeSettings(nodeOrdinal))\n+ .put(\"discovery.type\", \"gce\")\n+ .put(\"path.logs\", resolve)\n+ .put(\"transport.tcp.port\", 0)\n+ .put(\"node.portsfile\", \"true\")\n+ .put(\"cloud.gce.project_id\", \"testproject\")\n+ .put(\"cloud.gce.zone\", \"primaryzone\")\n+ .put(\"discovery.initial_state_timeout\", \"1s\")\n+ .put(\"cloud.gce.host\", \"http://\" + httpServer.getAddress().getHostName() + \":\" + httpServer.getAddress().getPort())\n+ .put(\"cloud.gce.root_url\", \"https://\" + httpsServer.getAddress().getHostName() +\n+ \":\" + httpsServer.getAddress().getPort())\n+ // this is annoying but by default the client pulls a static list of trusted CAs\n+ .put(\"cloud.gce.validate_certificates\", false)\n+ .build();\n+ }\n+\n+ @BeforeClass\n+ public static void startHttpd() throws Exception {\n+ logDir = createTempDir();\n+ SSLContext sslContext = getSSLContext();\n+ // we generated the keyfile with 127.0.0.1 - hence the hardcoding\n+ httpsServer = HttpsServer.create(new InetSocketAddress(\"127.0.0.1\", 0), 0);\n+ httpServer = HttpServer.create(new InetSocketAddress(\"127.0.0.1\", 0), 0);\n+ httpsServer.setHttpsConfigurator(new HttpsConfigurator(sslContext));\n+ httpServer.createContext(\"/computeMetadata/v1/instance/service-accounts/default/token\", new AuthHandler());\n+\n+ httpsServer.createContext(\"/compute/v1/projects/testproject/zones/primaryzone/instances\", new InstanceHandler());\n+ httpsServer.start();\n+ httpServer.start();\n+ }\n+\n+ private static SSLContext getSSLContext() throws Exception {\n+ char[] passphrase = \"keypass\".toCharArray();\n+ KeyStore ks = KeyStore.getInstance(\"JKS\");\n+ try (InputStream stream = GceDiscoverTests.class.getResourceAsStream(\"/test-node.jks\")) {\n+ assertNotNull(\"can't find keystore file\", stream);\n+ ks.load(stream, passphrase);\n+ }\n+ KeyManagerFactory kmf = KeyManagerFactory.getInstance(\"SunX509\");\n+ kmf.init(ks, passphrase);\n+ TrustManagerFactory tmf = TrustManagerFactory.getInstance(\"SunX509\");\n+ tmf.init(ks);\n+ SSLContext ssl = SSLContext.getInstance(\"TLS\");\n+ ssl.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);\n+ return ssl;\n+ }\n+\n+ @AfterClass\n+ public static void stopHttpd() throws IOException {\n+ for (int i = 0; i < internalCluster().size(); i++) {\n+ // shut them all down otherwise we get spammed with connection refused exceptions\n+ internalCluster().stopRandomDataNode();\n+ }\n+ httpsServer.stop(0);\n+ httpServer.stop(0);\n+ httpsServer = null;\n+ httpServer = null;\n+ logDir = null;\n+ }\n+\n+ public void testJoin() throws ExecutionException, InterruptedException {\n+ // only wait for the cluster to form\n+ assertNoTimeout(client().admin().cluster().prepareHealth().setWaitForNodes(Integer.toString(2)).get());\n+ // add one more node and wait for it to join\n+ internalCluster().startDataOnlyNodeAsync().get();\n+ assertNoTimeout(client().admin().cluster().prepareHealth().setWaitForNodes(Integer.toString(3)).get());\n+ }\n+\n+ @SuppressForbidden(reason = \"use http server\")\n+ private static class InstanceHandler implements HttpHandler {\n+ @Override\n+ public void handle(HttpExchange s) throws IOException {\n+ Headers headers = 
s.getResponseHeaders();\n+ headers.add(\"Content-Type\", \"application/json; charset=UTF-8\");\n+ ESLogger logger = Loggers.getLogger(GceDiscoverTests.class);\n+ try {\n+ Path[] files = FileSystemUtils.files(logDir);\n+ StringBuilder builder = new StringBuilder(\"{\\\"id\\\": \\\"dummy\\\",\\\"items\\\":[\");\n+ int foundFiles = 0;\n+ for (int i = 0; i < files.length; i++) {\n+ Path resolve = files[i].resolve(\"transport.ports\");\n+ if (Files.exists(resolve)) {\n+ if (foundFiles++ > 0) {\n+ builder.append(\",\");\n+ }\n+ List<String> addressses = Files.readAllLines(resolve, StandardCharsets.UTF_8);\n+ Collections.shuffle(addressses, random());\n+ logger.debug(\"addresses for node: [{}] published addresses [{}]\", files[i].getFileName(), addressses);\n+ builder.append(\"{\\\"description\\\": \\\"ES Node \").append(files[i].getFileName())\n+ .append(\"\\\",\\\"networkInterfaces\\\": [ {\");\n+ builder.append(\"\\\"networkIP\\\": \\\"\").append(addressses.get(0)).append(\"\\\"}],\");\n+ builder.append(\"\\\"status\\\" : \\\"RUNNING\\\"}\");\n+ }\n+ }\n+ builder.append(\"]}\");\n+ String responseString = builder.toString();\n+ logger.warn(\"{}\", responseString);\n+ final byte[] responseAsBytes = responseString.getBytes(StandardCharsets.UTF_8);\n+ s.sendResponseHeaders(200, responseAsBytes.length);\n+ OutputStream responseBody = s.getResponseBody();\n+ responseBody.write(responseAsBytes);\n+ responseBody.close();\n+ } catch (Exception e) {\n+ //\n+ byte[] responseAsBytes = (\"{ \\\"error\\\" : {\\\"message\\\" : \\\"\" + e.toString() + \"\\\" } }\").getBytes(StandardCharsets.UTF_8);\n+ s.sendResponseHeaders(500, responseAsBytes.length);\n+ OutputStream responseBody = s.getResponseBody();\n+ responseBody.write(responseAsBytes);\n+ responseBody.close();\n+ }\n+\n+\n+ }\n+ }\n+\n+ @SuppressForbidden(reason = \"use http server\")\n+ private static class AuthHandler implements HttpHandler {\n+ @Override\n+ public void handle(HttpExchange s) throws IOException {\n+ String response = GceComputeServiceMock.readGoogleInternalJsonResponse(\n+ \"http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token\");\n+ byte[] responseAsBytes = response.getBytes(StandardCharsets.UTF_8);\n+ s.sendResponseHeaders(200, responseAsBytes.length);\n+ OutputStream responseBody = s.getResponseBody();\n+ responseBody.write(responseAsBytes);\n+ responseBody.close();\n+ }\n+ }\n+}", "filename": "plugins/cloud-gce/src/test/java/org/elasticsearch/discovery/gce/GceDiscoverTests.java", "status": "added" } ] }
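The glue between the two halves of this test is the `node.portsfile` change in `Node.java`: each node writes its bound transport addresses to `logs/transport.ports`, and the mock GCE instance handler reads those files back to fabricate the discovery response. The sketch below condenses that write path to its essentials (plain string addresses, no link-local filtering); the class name and sample data are illustrative only.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Arrays;
import java.util.List;

public class PortsFileDemo {
    /**
     * Write the bound addresses to "<type>.ports.tmp", then atomically rename it
     * to "<type>.ports" so a concurrent reader never sees a half-written file.
     */
    static void writePortsFile(Path logsDir, String type, List<String> addresses) throws IOException {
        Path tmp = logsDir.resolve(type + ".ports.tmp");
        try (BufferedWriter writer = Files.newBufferedWriter(tmp, StandardCharsets.UTF_8)) {
            for (String address : addresses) {
                writer.write(address);
                writer.newLine();
            }
        }
        Files.move(tmp, logsDir.resolve(type + ".ports"), StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path logs = Files.createTempDirectory("ports-demo");
        writePortsFile(logs, "transport", Arrays.asList("127.0.0.1:9300", "127.0.0.1:9301"));
        System.out.println(Files.readAllLines(logs.resolve("transport.ports")));
    }
}
```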
{ "body": "Get complaints about java permissions but adding them doesn't seem to fix...\n\n[2016-02-05 11:39:42,346][WARN ][discovery.gce ] [homer] Exception caught during discovery: access denied (\"java.lang.RuntimePermission\" \"setFactory\") java.security.AccessControlException: access denied (\"java.lang.RuntimePermission\" \"setFactory\") at java.security.AccessControlContext.checkPermission(AccessControlContext.java:372) at java.security.AccessController.checkPermission(AccessController.java:559) at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) at java.lang.SecurityManager.checkSetFactory(SecurityManager.java:1625) at javax.net.ssl.HttpsURLConnection.setSSLSocketFactory(HttpsURLConnection.java:362) at com.google.api.client.http.javanet.NetHttpTransport.buildRequest(NetHttpTransport.java:145) at com.google.api.client.http.javanet.NetHttpTransport.buildRequest(NetHttpTransport.java:62) at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:863) at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419) at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352) at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469) at org.elasticsearch.cloud.gce.GceComputeServiceImpl$1$1.run(GceComputeServiceImpl.java:94) at org.elasticsearch.cloud.gce.GceComputeServiceImpl$1$1.run(GceComputeServiceImpl.java:90) at java.security.AccessController.doPrivileged(Native Method) \n\nGlad to provide any information needed to help troubleshoot.\n\nI answered 'y' (yes) to the prompt to add security options to java, also attempted to add them manually. Still no luck. Uninstalled and reinstalled the plugin...\n\nI rolled back to 2.1.1 and it works again...\n", "comments": [ { "body": "Same for me :+1: \n", "created_at": "2016-02-08T11:34:46Z" }, { "body": "The cloud-gce plugin works with elasticsearch 2.2 as long as no other plugins are installed. Then this is causing the following error : \n*\\* Exception caught during discovery: access denied (\"java.lang.RuntimePermission\" \"setFactory\")**\n", "created_at": "2016-02-13T00:33:28Z" }, { "body": "@jasontedor could you take a look at this please?\n", "created_at": "2016-02-13T12:20:29Z" }, { "body": "> Get complaints about java permissions but adding them doesn't seem to fix...\n\n@phutchins Where did you add the `setFactory` permission?\n", "created_at": "2016-02-13T12:24:16Z" }, { "body": "@jasontedor I tried the server.policy file and I believe one other place that I can't recall at the moment...\n", "created_at": "2016-02-13T15:41:20Z" }, { "body": "> I tried the server.policy file and I believe one other place that I can't recall at the moment...\n\n@phutchins Would you be willing to try adding it to `${path.plugins}/cloud-gce/plugin-security.policy` as I think that that is where you'll need to add it (depends on where your plugins are installed)? Can you add\n\n```\npermission java.lang.RuntimePermission \"setFactory\";\n```\n\nto the `grant` entries there? 
If you bump into any other permissions that are needed, add them there too (reading through the [code](https://github.com/google/google-http-java-client/blob/1.20.0/google-http-client/src/main/java/com/google/api/client/http/javanet/NetHttpTransport.java) and the [docs](https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/HttpsURLConnection.html), another one that _might_ come up is `permission javax.net.ssl.SSLPermission \"setHostnameVerifier\";`). Please note that adding any additional security permissions is at your own risk. You can read about the [`setFactory` runtime permission](http://download.java.net/jdk7/archive/b123/docs/api/java/lang/RuntimePermission.html) and the _possible_ [`setHostNameVerifier` SSL permission](https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/SSLPermission.html) but if you report them back here we will do a thorough investigation on them in advance of the next patch release.\n", "created_at": "2016-02-13T17:33:02Z" }, { "body": "@phutchins I'd like to make sure this can be addressed for a maintenance release. Have you had the opportunity to try the above settings?\n", "created_at": "2016-02-25T15:02:53Z" }, { "body": "Hey @jasontedor, I've not yet but will as soon as I get a chance. Hope to get back to you soon.\n", "created_at": "2016-02-26T18:09:55Z" } ], "number": 16485, "title": "Plugin does not work with ES 2.2.0" }
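For anyone hitting this on an existing 2.2.x install before upgrading, the workaround sketched in the comments amounts to hand-editing the plugin's policy file so it matches what the fix ships (see the `plugin-security.policy` hunk in the accompanying backport diff). On the 2.x branch the resulting grant block looks like this; the exact set of permissions differs slightly between branches, so treat it as a template rather than a definitive list:

```
grant {
  // needed because of problems in gce
  permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
  permission java.lang.RuntimePermission "setFactory";
};
```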
{ "body": "This commit adds a missing permission and a simple test that\nensures we discover other nodes via a mock http endpoint.\n\nBackport of #16860\nRelates to #16485\n", "number": 16881, "review_comments": [], "title": "Backport add setFactory permission to GceDiscoveryPlugin" }
{ "commits": [ { "message": "Backort add setFactory permission to GceDiscoveryPlugin\n\nThis commit adds a missing permission and a simple test that\nensures we discover other nodes via a mock http endpoint.\n\nBackport of #16860\nRelates to #16485" } ], "files": [ { "diff": "@@ -40,9 +40,12 @@\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.network.NetworkAddress;\n import org.elasticsearch.common.network.NetworkModule;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.settings.SettingsModule;\n+import org.elasticsearch.common.transport.BoundTransportAddress;\n+import org.elasticsearch.common.transport.TransportAddress;\n import org.elasticsearch.discovery.Discovery;\n import org.elasticsearch.discovery.DiscoveryModule;\n import org.elasticsearch.discovery.DiscoveryService;\n@@ -55,6 +58,7 @@\n import org.elasticsearch.gateway.GatewayService;\n import org.elasticsearch.http.HttpServer;\n import org.elasticsearch.http.HttpServerModule;\n+import org.elasticsearch.http.HttpServerTransport;\n import org.elasticsearch.index.search.shape.ShapeModule;\n import org.elasticsearch.indices.IndicesModule;\n import org.elasticsearch.indices.IndicesService;\n@@ -93,7 +97,15 @@\n import org.elasticsearch.watcher.ResourceWatcherModule;\n import org.elasticsearch.watcher.ResourceWatcherService;\n \n+import java.io.BufferedWriter;\n import java.io.IOException;\n+import java.net.Inet6Address;\n+import java.net.InetAddress;\n+import java.net.InetSocketAddress;\n+import java.nio.charset.Charset;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.nio.file.StandardCopyOption;\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.Collections;\n@@ -275,7 +287,14 @@ public Node start() {\n injector.getInstance(HttpServer.class).start();\n }\n injector.getInstance(TribeService.class).start();\n-\n+ if (settings.getAsBoolean(\"node.portsfile\", false)) {\n+ if (settings.getAsBoolean(\"http.enabled\", true)) {\n+ HttpServerTransport http = injector.getInstance(HttpServerTransport.class);\n+ writePortsFile(\"http\", http.boundAddress());\n+ }\n+ TransportService transport = injector.getInstance(TransportService.class);\n+ writePortsFile(\"transport\", transport.boundAddress());\n+ }\n logger.info(\"started\");\n \n return this;\n@@ -426,4 +445,27 @@ public boolean isClosed() {\n public Injector injector() {\n return this.injector;\n }\n+\n+ /** Writes a file to the logs dir containing the ports for the given transport type */\n+ private void writePortsFile(String type, BoundTransportAddress boundAddress) {\n+ Path tmpPortsFile = environment.logsFile().resolve(type + \".ports.tmp\");\n+ try (BufferedWriter writer = Files.newBufferedWriter(tmpPortsFile, Charset.forName(\"UTF-8\"))) {\n+ for (TransportAddress address : boundAddress.boundAddresses()) {\n+ InetAddress inetAddress = InetAddress.getByName(address.getAddress());\n+ if (inetAddress instanceof Inet6Address && inetAddress.isLinkLocalAddress()) {\n+ // no link local, just causes problems\n+ continue;\n+ }\n+ writer.write(NetworkAddress.formatAddress(new InetSocketAddress(inetAddress, address.getPort())) + \"\\n\");\n+ }\n+ } catch (IOException e) {\n+ throw new RuntimeException(\"Failed to write ports file\", e);\n+ }\n+ Path portsFile = environment.logsFile().resolve(type + \".ports\");\n+ try {\n+ Files.move(tmpPortsFile, portsFile, 
StandardCopyOption.ATOMIC_MOVE);\n+ } catch (IOException e) {\n+ throw new RuntimeException(\"Failed to rename ports file\", e);\n+ }\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/node/Node.java", "status": "modified" }, { "diff": "@@ -31,6 +31,8 @@ governing permissions and limitations under the License. -->\n <!-- currently has no unit tests -->\n <tests.rest.suite>cloud_gce</tests.rest.suite>\n <tests.rest.load_packaged>false</tests.rest.load_packaged>\n+ <ssl.antfile>${project.basedir}/ssl-setup.xml</ssl.antfile>\n+ <keystore>${project.build.testOutputDirectory}/test-node.jks</keystore>\n </properties>\n \n <dependencies>\n@@ -61,6 +63,30 @@ governing permissions and limitations under the License. -->\n <groupId>org.apache.maven.plugins</groupId>\n <artifactId>maven-assembly-plugin</artifactId>\n </plugin>\n+ <plugin>\n+ <groupId>org.apache.maven.plugins</groupId>\n+ <artifactId>maven-antrun-plugin</artifactId>\n+ <executions>\n+ <!-- generate certificates/keys -->\n+ <execution>\n+ <id>certificate-setup</id>\n+ <phase>generate-test-resources</phase>\n+ <goals>\n+ <goal>run</goal>\n+ </goals>\n+ <configuration>\n+ <target>\n+ <delete file=\"${keystore}\" quiet=\"true\"/> <!-- must clean it up first -->\n+ <mkdir dir=\"${project.build.testOutputDirectory}\"/> <!-- the directory must exist first -->\n+ <exec executable=\"keytool\" failonerror=\"true\">\n+ <arg line=\"-genkey -keyalg RSA -alias selfsigned -keystore ${keystore} -storepass keypass -keypass keypass -validity 712 -keysize 2048 -dname CN=127.0.0.1\"/>\n+ </exec>\n+ </target>\n+ <skip>${skip.integ.tests}</skip>\n+ </configuration>\n+ </execution>\n+ </executions>\n+ </plugin>\n </plugins>\n </build>\n ", "filename": "plugins/cloud-gce/pom.xml", "status": "modified" }, { "diff": "@@ -25,6 +25,7 @@\n import com.google.api.client.http.HttpHeaders;\n import com.google.api.client.http.HttpResponse;\n import com.google.api.client.http.HttpTransport;\n+import com.google.api.client.http.javanet.NetHttpTransport;\n import com.google.api.client.json.JsonFactory;\n import com.google.api.client.json.jackson2.JacksonFactory;\n import com.google.api.services.compute.Compute;\n@@ -70,8 +71,15 @@ public class GceComputeServiceImpl extends AbstractLifecycleComponent<GceCompute\n // Forcing Google Token API URL as set in GCE SDK to\n // http://metadata/computeMetadata/v1/instance/service-accounts/default/token\n // See https://developers.google.com/compute/docs/metadata#metadataserver\n- public static final String GCE_METADATA_URL = \"http://metadata.google.internal/computeMetadata/v1/instance\";\n- public static final String TOKEN_SERVER_ENCODED_URL = GCE_METADATA_URL + \"/service-accounts/default/token\";\n+ // all settings just used for testing - not registered by default\n+ private static final String DEFAULT_GCE_HOST = \"http://metadata.google.internal\";\n+ private static final String DEFAULT_GCE_ROOT_URL = \"https://www.googleapis.com\";\n+\n+ private final String gceHost;\n+ private final String metaDataUrl;\n+ private final String tokenServerEncodedUrl;\n+ private final String gceRootUrl;\n+ private final Boolean validateCerts;\n \n @Override\n public Collection<Instance> instances() {\n@@ -120,7 +128,7 @@ public InstanceList run() throws Exception {\n \n @Override\n public String metadata(String metadataPath) throws IOException {\n- String urlMetadataNetwork = GCE_METADATA_URL + \"/\" + metadataPath;\n+ String urlMetadataNetwork = this.metaDataUrl + \"/\" + metadataPath;\n logger.debug(\"get metadata from [{}]\", 
urlMetadataNetwork);\n URL url = new URL(urlMetadataNetwork);\n HttpHeaders headers;\n@@ -166,11 +174,22 @@ public GceComputeServiceImpl(Settings settings, NetworkService networkService) {\n String[] zoneList = settings.getAsArray(Fields.ZONE);\n this.zones = Arrays.asList(zoneList);\n networkService.addCustomNameResolver(new GceNameResolver(settings, this));\n+\n+ this.gceHost = settings.get(\"cloud.gce.host\", DEFAULT_GCE_HOST);\n+ this.metaDataUrl = gceHost + \"/computeMetadata/v1/instance\";\n+ this.gceRootUrl = settings.get(\"cloud.gce.root_url\", DEFAULT_GCE_ROOT_URL);\n+ this.tokenServerEncodedUrl = metaDataUrl + \"/service-accounts/default/token\";\n+ this.validateCerts = settings.getAsBoolean(\"cloud.gce.validate_certificates\", true);\n }\n \n protected synchronized HttpTransport getGceHttpTransport() throws GeneralSecurityException, IOException {\n if (gceHttpTransport == null) {\n- gceHttpTransport = GoogleNetHttpTransport.newTrustedTransport();\n+ if (validateCerts) {\n+ gceHttpTransport = GoogleNetHttpTransport.newTrustedTransport();\n+ } else {\n+ // this is only used for testing - alternative we could use the defaul keystore but this requires special configs too..\n+ gceHttpTransport = new NetHttpTransport.Builder().doNotValidateCertificate().build();\n+ }\n }\n return gceHttpTransport;\n }\n@@ -190,7 +209,7 @@ public synchronized Compute client() {\n \n logger.info(\"starting GCE discovery service\");\n final ComputeCredential credential = new ComputeCredential.Builder(getGceHttpTransport(), gceJsonFactory)\n- .setTokenServerEncodedUrl(TOKEN_SERVER_ENCODED_URL)\n+ .setTokenServerEncodedUrl(this.tokenServerEncodedUrl)\n .build();\n \n // hack around code messiness in GCE code\n@@ -212,9 +231,9 @@ public Void run() throws IOException {\n refreshInterval = TimeValue.timeValueSeconds(credential.getExpiresInSeconds()-1);\n }\n \n- boolean ifRetry = settings.getAsBoolean(Fields.RETRY, true);\n- Compute.Builder builder = new Compute.Builder(getGceHttpTransport(), gceJsonFactory, null)\n- .setApplicationName(Fields.VERSION);\n+ final boolean ifRetry = settings.getAsBoolean(Fields.RETRY, true);\n+ final Compute.Builder builder = new Compute.Builder(getGceHttpTransport(), gceJsonFactory, null)\n+ .setApplicationName(Fields.VERSION).setRootUrl(gceRootUrl);;\n \n if (ifRetry) {\n int maxWait = settings.getAsInt(Fields.MAXWAIT, -1);", "filename": "plugins/cloud-gce/src/main/java/org/elasticsearch/cloud/gce/GceComputeServiceImpl.java", "status": "modified" }, { "diff": "@@ -20,4 +20,5 @@\n grant {\n // needed because of problems in gce\n permission java.lang.reflect.ReflectPermission \"suppressAccessChecks\";\n+ permission java.lang.RuntimePermission \"setFactory\";\n };", "filename": "plugins/cloud-gce/src/main/plugin-metadata/plugin-security.policy", "status": "modified" }, { "diff": "@@ -44,6 +44,8 @@\n public class GceComputeServiceMock extends GceComputeServiceImpl {\n \n protected HttpTransport mockHttpTransport;\n+ private static final String GCE_METADATA_URL = \"http://metadata.google.internal/computeMetadata/v1/instance\";\n+\n \n public GceComputeServiceMock(Settings settings, NetworkService networkService) {\n super(settings, networkService);\n@@ -80,19 +82,18 @@ public LowLevelHttpResponse execute() throws IOException {\n };\n }\n \n- private String readGoogleInternalJsonResponse(String url) throws IOException {\n+ public static String readGoogleInternalJsonResponse(String url) throws IOException {\n return readJsonResponse(url, \"http://metadata.google.internal/\");\n }\n 
\n- private String readGoogleApiJsonResponse(String url) throws IOException {\n+ public static String readGoogleApiJsonResponse(String url) throws IOException {\n return readJsonResponse(url, \"https://www.googleapis.com/\");\n }\n \n- private String readJsonResponse(String url, String urlRoot) throws IOException {\n+ public static String readJsonResponse(String url, String urlRoot) throws IOException {\n // We extract from the url the mock file path we want to use\n String mockFileName = Strings.replace(url, urlRoot, \"\");\n \n- logger.debug(\"--> read mock file from [{}]\", mockFileName);\n URL resource = GceComputeServiceMock.class.getResource(mockFileName);\n if (resource == null) {\n throw new IOException(\"can't read [\" + url + \"] in src/test/resources/org/elasticsearch/discovery/gce\");\n@@ -106,7 +107,6 @@ public void handle(String s) {\n }\n });\n String response = sb.toString();\n- logger.trace(\"{}\", response);\n return response;\n }\n }", "filename": "plugins/cloud-gce/src/test/java/org/elasticsearch/discovery/gce/GceComputeServiceMock.java", "status": "modified" }, { "diff": "@@ -0,0 +1,209 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.discovery.gce;\n+\n+import com.sun.net.httpserver.Headers;\n+import com.sun.net.httpserver.HttpExchange;\n+import com.sun.net.httpserver.HttpHandler;\n+import com.sun.net.httpserver.HttpServer;\n+import com.sun.net.httpserver.HttpsConfigurator;\n+import com.sun.net.httpserver.HttpsServer;\n+import org.elasticsearch.common.SuppressForbidden;\n+import org.elasticsearch.common.io.FileSystemUtils;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.plugin.cloud.gce.CloudGcePlugin;\n+import org.elasticsearch.plugins.Plugin;\n+import org.elasticsearch.test.ESIntegTestCase;\n+import org.junit.AfterClass;\n+import org.junit.BeforeClass;\n+\n+import javax.net.ssl.KeyManagerFactory;\n+import javax.net.ssl.SSLContext;\n+import javax.net.ssl.TrustManagerFactory;\n+import java.io.IOException;\n+import java.io.InputStream;\n+import java.io.OutputStream;\n+import java.net.InetAddress;\n+import java.net.InetSocketAddress;\n+import java.nio.charset.StandardCharsets;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.security.KeyStore;\n+import java.util.Collection;\n+import java.util.Collections;\n+import java.util.List;\n+import java.util.concurrent.ExecutionException;\n+\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoTimeout;\n+\n+\n+@ESIntegTestCase.SuppressLocalMode\n+@ESIntegTestCase.ClusterScope(numDataNodes = 2, numClientNodes = 0)\n+@SuppressForbidden(reason = \"use http server\")\n+// TODO this should be a IT but currently all ITs in this project run against a real cluster\n+public class GceDiscoverTests extends ESIntegTestCase {\n+ private static HttpsServer httpsServer;\n+ private static HttpServer httpServer;\n+ private static Path logDir;\n+\n+ @Override\n+ protected Collection<Class<? 
extends Plugin>> nodePlugins() {\n+ return pluginList(CloudGcePlugin.class);\n+ }\n+\n+ @Override\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ Path resolve = logDir.resolve(Integer.toString(nodeOrdinal));\n+ try {\n+ Files.createDirectory(resolve);\n+ } catch (IOException e) {\n+ throw new RuntimeException(e);\n+ }\n+ return Settings.builder().put(super.nodeSettings(nodeOrdinal))\n+ .put(\"discovery.type\", \"gce\")\n+ .put(\"path.logs\", resolve)\n+ .put(\"transport.tcp.port\", 0)\n+ .put(\"node.portsfile\", \"true\")\n+ .put(\"cloud.gce.project_id\", \"testproject\")\n+ .put(\"cloud.gce.zone\", \"primaryzone\")\n+ .put(\"discovery.initial_state_timeout\", \"1s\")\n+ .put(\"cloud.gce.host\", \"http://\" + httpServer.getAddress().getHostName() + \":\" + httpServer.getAddress().getPort())\n+ .put(\"cloud.gce.root_url\", \"https://\" + httpsServer.getAddress().getHostName() +\n+ \":\" + httpsServer.getAddress().getPort())\n+ // this is annoying but by default the client pulls a static list of trusted CAs\n+ .put(\"cloud.gce.validate_certificates\", false)\n+ .build();\n+ }\n+\n+ @BeforeClass\n+ public static void startHttpd() throws Exception {\n+ logDir = createTempDir();\n+ SSLContext sslContext = getSSLContext();\n+ // we generated the keyfile with 127.0.0.1 - hence the hardcoding\n+ httpsServer = HttpsServer.create(new InetSocketAddress(\"127.0.0.1\", 0), 0);\n+ httpServer = HttpServer.create(new InetSocketAddress(\"127.0.0.1\", 0), 0);\n+ httpsServer.setHttpsConfigurator(new HttpsConfigurator(sslContext));\n+ httpServer.createContext(\"/computeMetadata/v1/instance/service-accounts/default/token\", new AuthHandler());\n+\n+ httpsServer.createContext(\"/compute/v1/projects/testproject/zones/primaryzone/instances\", new InstanceHandler());\n+ httpsServer.start();\n+ httpServer.start();\n+ }\n+\n+ private static SSLContext getSSLContext() throws Exception {\n+ char[] passphrase = \"keypass\".toCharArray();\n+ KeyStore ks = KeyStore.getInstance(\"JKS\");\n+ try (InputStream stream = GceDiscoverTests.class.getResourceAsStream(\"/test-node.jks\")) {\n+ assertNotNull(\"can't find keystore file\", stream);\n+ ks.load(stream, passphrase);\n+ }\n+ KeyManagerFactory kmf = KeyManagerFactory.getInstance(\"SunX509\");\n+ kmf.init(ks, passphrase);\n+ TrustManagerFactory tmf = TrustManagerFactory.getInstance(\"SunX509\");\n+ tmf.init(ks);\n+ SSLContext ssl = SSLContext.getInstance(\"TLS\");\n+ ssl.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);\n+ return ssl;\n+ }\n+\n+ @AfterClass\n+ public static void stopHttpd() throws IOException {\n+ for (int i = 0; i < internalCluster().size(); i++) {\n+ // shut them all down otherwise we get spammed with connection refused exceptions\n+ internalCluster().stopRandomDataNode();\n+ }\n+ httpsServer.stop(0);\n+ httpServer.stop(0);\n+ httpsServer = null;\n+ httpServer = null;\n+ logDir = null;\n+ }\n+\n+ public void testJoin() throws ExecutionException, InterruptedException {\n+ // only wait for the cluster to form\n+ assertNoTimeout(client().admin().cluster().prepareHealth().setWaitForNodes(Integer.toString(2)).get());\n+ // add one more node and wait for it to join\n+ internalCluster().startDataOnlyNodeAsync().get();\n+ assertNoTimeout(client().admin().cluster().prepareHealth().setWaitForNodes(Integer.toString(3)).get());\n+ }\n+\n+ @SuppressForbidden(reason = \"use http server\")\n+ private static class InstanceHandler implements HttpHandler {\n+ @Override\n+ public void handle(HttpExchange s) throws IOException {\n+ Headers headers = 
s.getResponseHeaders();\n+ headers.add(\"Content-Type\", \"application/json; charset=UTF-8\");\n+ ESLogger logger = Loggers.getLogger(GceDiscoverTests.class);\n+ try {\n+ Path[] files = FileSystemUtils.files(logDir);\n+ StringBuilder builder = new StringBuilder(\"{\\\"id\\\": \\\"dummy\\\",\\\"items\\\":[\");\n+ int foundFiles = 0;\n+ for (int i = 0; i < files.length; i++) {\n+ Path resolve = files[i].resolve(\"transport.ports\");\n+ if (Files.exists(resolve)) {\n+ if (foundFiles++ > 0) {\n+ builder.append(\",\");\n+ }\n+ List<String> addressses = Files.readAllLines(resolve, StandardCharsets.UTF_8);\n+ Collections.shuffle(addressses, random());\n+ logger.debug(\"addresses for node: [{}] published addresses [{}]\", files[i].getFileName(), addressses);\n+ builder.append(\"{\\\"description\\\": \\\"ES Node \").append(files[i].getFileName())\n+ .append(\"\\\",\\\"networkInterfaces\\\": [ {\");\n+ builder.append(\"\\\"networkIP\\\": \\\"\").append(addressses.get(0)).append(\"\\\"}],\");\n+ builder.append(\"\\\"status\\\" : \\\"RUNNING\\\"}\");\n+ }\n+ }\n+ builder.append(\"]}\");\n+ String responseString = builder.toString();\n+ logger.warn(\"{}\", responseString);\n+ final byte[] responseAsBytes = responseString.getBytes(StandardCharsets.UTF_8);\n+ s.sendResponseHeaders(200, responseAsBytes.length);\n+ OutputStream responseBody = s.getResponseBody();\n+ responseBody.write(responseAsBytes);\n+ responseBody.close();\n+ } catch (Exception e) {\n+ //\n+ byte[] responseAsBytes = (\"{ \\\"error\\\" : {\\\"message\\\" : \\\"\" + e.toString() + \"\\\" } }\").getBytes(StandardCharsets.UTF_8);\n+ s.sendResponseHeaders(500, responseAsBytes.length);\n+ OutputStream responseBody = s.getResponseBody();\n+ responseBody.write(responseAsBytes);\n+ responseBody.close();\n+ }\n+\n+\n+ }\n+ }\n+\n+ @SuppressForbidden(reason = \"use http server\")\n+ private static class AuthHandler implements HttpHandler {\n+ @Override\n+ public void handle(HttpExchange s) throws IOException {\n+ String response = GceComputeServiceMock.readGoogleInternalJsonResponse(\n+ \"http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token\");\n+ byte[] responseAsBytes = response.getBytes(StandardCharsets.UTF_8);\n+ s.sendResponseHeaders(200, responseAsBytes.length);\n+ OutputStream responseBody = s.getResponseBody();\n+ responseBody.write(responseAsBytes);\n+ responseBody.close();\n+ }\n+ }\n+}", "filename": "plugins/cloud-gce/src/test/java/org/elasticsearch/discovery/gce/GceDiscoverTests.java", "status": "added" } ] }
{ "body": "Get complaints about java permissions but adding them doesn't seem to fix...\n\n[2016-02-05 11:39:42,346][WARN ][discovery.gce ] [homer] Exception caught during discovery: access denied (\"java.lang.RuntimePermission\" \"setFactory\") java.security.AccessControlException: access denied (\"java.lang.RuntimePermission\" \"setFactory\") at java.security.AccessControlContext.checkPermission(AccessControlContext.java:372) at java.security.AccessController.checkPermission(AccessController.java:559) at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) at java.lang.SecurityManager.checkSetFactory(SecurityManager.java:1625) at javax.net.ssl.HttpsURLConnection.setSSLSocketFactory(HttpsURLConnection.java:362) at com.google.api.client.http.javanet.NetHttpTransport.buildRequest(NetHttpTransport.java:145) at com.google.api.client.http.javanet.NetHttpTransport.buildRequest(NetHttpTransport.java:62) at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:863) at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419) at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352) at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469) at org.elasticsearch.cloud.gce.GceComputeServiceImpl$1$1.run(GceComputeServiceImpl.java:94) at org.elasticsearch.cloud.gce.GceComputeServiceImpl$1$1.run(GceComputeServiceImpl.java:90) at java.security.AccessController.doPrivileged(Native Method) \n\nGlad to provide any information needed to help troubleshoot.\n\nI answered 'y' (yes) to the prompt to add security options to java, also attempted to add them manually. Still no luck. Uninstalled and reinstalled the plugin...\n\nI rolled back to 2.1.1 and it works again...\n", "comments": [ { "body": "Same for me :+1: \n", "created_at": "2016-02-08T11:34:46Z" }, { "body": "The cloud-gce plugin works with elasticsearch 2.2 as long as no other plugins are installed. Then this is causing the following error : \n*\\* Exception caught during discovery: access denied (\"java.lang.RuntimePermission\" \"setFactory\")**\n", "created_at": "2016-02-13T00:33:28Z" }, { "body": "@jasontedor could you take a look at this please?\n", "created_at": "2016-02-13T12:20:29Z" }, { "body": "> Get complaints about java permissions but adding them doesn't seem to fix...\n\n@phutchins Where did you add the `setFactory` permission?\n", "created_at": "2016-02-13T12:24:16Z" }, { "body": "@jasontedor I tried the server.policy file and I believe one other place that I can't recall at the moment...\n", "created_at": "2016-02-13T15:41:20Z" }, { "body": "> I tried the server.policy file and I believe one other place that I can't recall at the moment...\n\n@phutchins Would you be willing to try adding it to `${path.plugins}/cloud-gce/plugin-security.policy` as I think that that is where you'll need to add it (depends on where your plugins are installed)? Can you add\n\n```\npermission java.lang.RuntimePermission \"setFactory\";\n```\n\nto the `grant` entries there? 
If you bump into any other permissions that are needed, add them there too (reading through the [code](https://github.com/google/google-http-java-client/blob/1.20.0/google-http-client/src/main/java/com/google/api/client/http/javanet/NetHttpTransport.java) and the [docs](https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/HttpsURLConnection.html), another one that _might_ come up is `permission javax.net.ssl.SSLPermission \"setHostnameVerifier\";`). Please note that adding any additional security permissions is at your own risk. You can read about the [`setFactory` runtime permission](http://download.java.net/jdk7/archive/b123/docs/api/java/lang/RuntimePermission.html) and the _possible_ [`setHostNameVerifier` SSL permission](https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/SSLPermission.html) but if you report them back here we will do a thorough investigation on them in advance of the next patch release.\n", "created_at": "2016-02-13T17:33:02Z" }, { "body": "@phutchins I'd like to make sure this can be addressed for a maintenance release. Have you had the opportunity to try the above settings?\n", "created_at": "2016-02-25T15:02:53Z" }, { "body": "Hey @jasontedor, I've not yet but will as soon as I get a chance. Hope to get back to you soon.\n", "created_at": "2016-02-26T18:09:55Z" } ], "number": 16485, "title": "Plugin does not work with ES 2.2.0" }
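One detail worth spelling out from the stack trace: the failing call already runs inside `AccessController.doPrivileged` (the `GceComputeServiceImpl$1$1.run` frames), which truncates the permission check at the plugin's own code, so the grant has to live in the plugin's `plugin-security.policy` rather than anywhere further up the stack. A bare-bones illustration of that pattern follows; the helper class is hypothetical and not part of the plugin.

```java
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

public class PrivilegedCall {
    /**
     * Run an action so that only the protection domain of this class (and the
     * action itself) is checked beyond the doPrivileged frame; the security
     * manager still requires that domain -- the plugin jar, via
     * plugin-security.policy -- to hold e.g. RuntimePermission "setFactory".
     */
    static <T> T run(PrivilegedExceptionAction<T> action) throws Exception {
        try {
            return AccessController.doPrivileged(action);
        } catch (PrivilegedActionException e) {
            throw e.getException();
        }
    }

    public static void main(String[] args) throws Exception {
        String home = run(() -> System.getProperty("java.home"));
        System.out.println(home);
    }
}
```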
{ "body": "This commit adds a missing permission and a simple test that\nensures we discover other nodes via a mock http endpoint.\n\nCloses #16485 \n\n@rjernst I had to make this integ test a ordinary testcase since it runs against an internal cluster. I didn't explore the test fixture path yet since it also requires a test plugin etc. I wonder if we can make this one test run against the internal cluster while others runs against external clusters? I am not super happy with the test but it's a major step forward since it really tests the integration of this plugins while everything else isn't really testing it. It also would have caught the security issue though\n", "number": 16860, "review_comments": [ { "body": "Why this import?\n", "created_at": "2016-02-29T14:46:54Z" } ], "title": "Add setFactory permission to GceDiscoveryPlugin" }
{ "commits": [ { "message": "Add setFactory permission to GceDiscoveryPlugin\n\nThis commit adds a missing permission and a simple test that\nensures we discover other nodes via a mock http endpoint.\n\nCloses #16485" }, { "message": "Move keystore creation to gradle - this prevents committing a keystore to the source repo" } ], "files": [ { "diff": "@@ -1,3 +1,4 @@\n+import org.elasticsearch.gradle.LoggedExec\n \n esplugin {\n description 'The Google Compute Engine (GCE) Discovery plugin allows to use GCE API for the unicast discovery mechanism.'\n@@ -21,6 +22,36 @@ dependencies {\n compile \"commons-codec:commons-codec:${versions.commonscodec}\"\n }\n \n+\n+// needed to be consistent with ssl host checking\n+String host = InetAddress.getLoopbackAddress().getHostAddress();\n+\n+// location of keystore and files to generate it\n+File keystore = new File(project.buildDir, 'keystore/test-node.jks')\n+\n+// generate the keystore\n+task createKey(type: LoggedExec) {\n+ doFirst {\n+ project.delete(keystore.parentFile)\n+ keystore.parentFile.mkdirs()\n+ }\n+ executable = 'keytool'\n+ standardInput = new ByteArrayInputStream('FirstName LastName\\nUnit\\nOrganization\\nCity\\nState\\nNL\\nyes\\n\\n'.getBytes('UTF-8'))\n+ args '-genkey',\n+ '-alias', 'test-node',\n+ '-keystore', keystore,\n+ '-keyalg', 'RSA',\n+ '-keysize', '2048',\n+ '-validity', '712',\n+ '-dname', 'CN=' + host,\n+ '-keypass', 'keypass',\n+ '-storepass', 'keypass'\n+}\n+\n+// add keystore to test classpath: it expects it there\n+sourceSets.test.resources.srcDir(keystore.parentFile)\n+processTestResources.dependsOn(createKey)\n+\n dependencyLicenses {\n mapping from: /google-.*/, to: 'google'\n }", "filename": "plugins/discovery-gce/build.gradle", "status": "modified" }, { "diff": "@@ -22,10 +22,8 @@\n import com.google.api.services.compute.model.Instance;\n import org.elasticsearch.common.component.LifecycleComponent;\n import org.elasticsearch.common.settings.Setting;\n-import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n \n-import java.util.Arrays;\n import java.io.IOException;\n import java.util.Collection;\n import java.util.Collections;", "filename": "plugins/discovery-gce/src/main/java/org/elasticsearch/cloud/gce/GceComputeService.java", "status": "modified" }, { "diff": "@@ -25,6 +25,7 @@\n import com.google.api.client.http.HttpHeaders;\n import com.google.api.client.http.HttpResponse;\n import com.google.api.client.http.HttpTransport;\n+import com.google.api.client.http.javanet.NetHttpTransport;\n import com.google.api.client.json.JsonFactory;\n import com.google.api.client.json.jackson2.JacksonFactory;\n import com.google.api.services.compute.Compute;\n@@ -36,12 +37,14 @@\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.network.NetworkService;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.discovery.gce.RetryHttpInitializerWrapper;\n \n import java.io.IOException;\n import java.net.URL;\n+import java.nio.file.Files;\n import java.security.AccessController;\n import java.security.GeneralSecurityException;\n import java.security.PrivilegedAction;\n@@ -51,18 +54,29 @@\n import java.util.Collection;\n import java.util.Collections;\n import java.util.List;\n+import java.util.function.Function;\n \n public class GceComputeServiceImpl extends 
AbstractLifecycleComponent<GceComputeService>\n implements GceComputeService {\n \n+ // all settings just used for testing - not registered by default\n+ public static final Setting<Boolean> GCE_VALIDATE_CERTIFICATES =\n+ Setting.boolSetting(\"cloud.gce.validate_certificates\", true, false, Setting.Scope.CLUSTER);\n+ public static final Setting<String> GCE_HOST =\n+ new Setting<>(\"cloud.gce.host\", \"http://metadata.google.internal\", Function.identity(), false, Setting.Scope.CLUSTER);\n+ public static final Setting<String> GCE_ROOT_URL =\n+ new Setting<>(\"cloud.gce.root_url\", \"https://www.googleapis.com\", Function.identity(), false, Setting.Scope.CLUSTER);\n+\n private final String project;\n private final List<String> zones;\n-\n // Forcing Google Token API URL as set in GCE SDK to\n // http://metadata/computeMetadata/v1/instance/service-accounts/default/token\n // See https://developers.google.com/compute/docs/metadata#metadataserver\n- public static final String GCE_METADATA_URL = \"http://metadata.google.internal/computeMetadata/v1/instance\";\n- public static final String TOKEN_SERVER_ENCODED_URL = GCE_METADATA_URL + \"/service-accounts/default/token\";\n+ private final String gceHost;\n+ private final String metaDataUrl;\n+ private final String tokenServerEncodedUrl;\n+ private String gceRootUrl;\n+\n \n @Override\n public Collection<Instance> instances() {\n@@ -85,7 +99,7 @@ public InstanceList run() throws Exception {\n // assist type inference\n return instanceList.isEmpty() ? Collections.<Instance>emptyList() : instanceList.getItems();\n } catch (PrivilegedActionException e) {\n- logger.warn(\"Problem fetching instance list for zone {}\", zoneId);\n+ logger.warn(\"Problem fetching instance list for zone {}\", e, zoneId);\n logger.debug(\"Full exception:\", e);\n // assist type inference\n return Collections.<Instance>emptyList();\n@@ -104,7 +118,7 @@ public InstanceList run() throws Exception {\n \n @Override\n public String metadata(String metadataPath) throws IOException {\n- String urlMetadataNetwork = GCE_METADATA_URL + \"/\" + metadataPath;\n+ String urlMetadataNetwork = this.metaDataUrl + \"/\" + metadataPath;\n logger.debug(\"get metadata from [{}]\", urlMetadataNetwork);\n final URL url = new URL(urlMetadataNetwork);\n HttpHeaders headers;\n@@ -153,17 +167,28 @@ public GenericUrl run() {\n /** Global instance of the JSON factory. 
*/\n private JsonFactory gceJsonFactory;\n \n+ private final boolean validateCerts;\n @Inject\n public GceComputeServiceImpl(Settings settings, NetworkService networkService) {\n super(settings);\n this.project = PROJECT_SETTING.get(settings);\n this.zones = ZONE_SETTING.get(settings);\n+ this.gceHost = GCE_HOST.get(settings);\n+ this.metaDataUrl = gceHost + \"/computeMetadata/v1/instance\";\n+ this.gceRootUrl = GCE_ROOT_URL.get(settings);\n+ tokenServerEncodedUrl = metaDataUrl + \"/service-accounts/default/token\";\n+ this.validateCerts = GCE_VALIDATE_CERTIFICATES.get(settings);\n networkService.addCustomNameResolver(new GceNameResolver(settings, this));\n }\n \n protected synchronized HttpTransport getGceHttpTransport() throws GeneralSecurityException, IOException {\n if (gceHttpTransport == null) {\n- gceHttpTransport = GoogleNetHttpTransport.newTrustedTransport();\n+ if (validateCerts) {\n+ gceHttpTransport = GoogleNetHttpTransport.newTrustedTransport();\n+ } else {\n+ // this is only used for testing - alternative we could use the defaul keystore but this requires special configs too..\n+ gceHttpTransport = new NetHttpTransport.Builder().doNotValidateCertificate().build();\n+ }\n }\n return gceHttpTransport;\n }\n@@ -183,7 +208,7 @@ public synchronized Compute client() {\n \n logger.info(\"starting GCE discovery service\");\n ComputeCredential credential = new ComputeCredential.Builder(getGceHttpTransport(), gceJsonFactory)\n- .setTokenServerEncodedUrl(TOKEN_SERVER_ENCODED_URL)\n+ .setTokenServerEncodedUrl(this.tokenServerEncodedUrl)\n .build();\n \n // hack around code messiness in GCE code\n@@ -205,7 +230,9 @@ public Void run() throws IOException {\n refreshInterval = TimeValue.timeValueSeconds(credential.getExpiresInSeconds() - 1);\n }\n \n- Compute.Builder builder = new Compute.Builder(getGceHttpTransport(), gceJsonFactory, null).setApplicationName(VERSION);\n+\n+ Compute.Builder builder = new Compute.Builder(getGceHttpTransport(), gceJsonFactory, null).setApplicationName(VERSION)\n+ .setRootUrl(gceRootUrl);\n \n if (RETRY_SETTING.exists(settings)) {\n TimeValue maxWait = MAX_WAIT_SETTING.get(settings);", "filename": "plugins/discovery-gce/src/main/java/org/elasticsearch/cloud/gce/GceComputeServiceImpl.java", "status": "modified" }, { "diff": "@@ -20,5 +20,6 @@\n grant {\n // needed because of problems in gce\n permission java.lang.RuntimePermission \"accessDeclaredMembers\";\n+ permission java.lang.RuntimePermission \"setFactory\";\n permission java.lang.reflect.ReflectPermission \"suppressAccessChecks\";\n };", "filename": "plugins/discovery-gce/src/main/plugin-metadata/plugin-security.policy", "status": "modified" }, { "diff": "@@ -55,6 +55,8 @@ protected HttpTransport getGceHttpTransport() throws GeneralSecurityException, I\n return this.mockHttpTransport;\n }\n \n+ public static final String GCE_METADATA_URL = \"http://metadata.google.internal/computeMetadata/v1/instance\";\n+\n protected HttpTransport configureMock() {\n return new MockHttpTransport() {\n @Override\n@@ -80,19 +82,18 @@ public LowLevelHttpResponse execute() throws IOException {\n };\n }\n \n- private String readGoogleInternalJsonResponse(String url) throws IOException {\n+ public static String readGoogleInternalJsonResponse(String url) throws IOException {\n return readJsonResponse(url, \"http://metadata.google.internal/\");\n }\n \n- private String readGoogleApiJsonResponse(String url) throws IOException {\n+ public static String readGoogleApiJsonResponse(String url) throws IOException {\n return 
readJsonResponse(url, \"https://www.googleapis.com/\");\n }\n \n- private String readJsonResponse(String url, String urlRoot) throws IOException {\n+ private static String readJsonResponse(String url, String urlRoot) throws IOException {\n // We extract from the url the mock file path we want to use\n String mockFileName = Strings.replace(url, urlRoot, \"\");\n \n- logger.debug(\"--> read mock file from [{}]\", mockFileName);\n URL resource = GceComputeServiceMock.class.getResource(mockFileName);\n if (resource == null) {\n throw new IOException(\"can't read [\" + url + \"] in src/test/resources/org/elasticsearch/discovery/gce\");\n@@ -106,7 +107,6 @@ public void handle(String s) {\n }\n });\n String response = sb.toString();\n- logger.trace(\"{}\", response);\n return response;\n }\n }", "filename": "plugins/discovery-gce/src/test/java/org/elasticsearch/discovery/gce/GceComputeServiceMock.java", "status": "modified" }, { "diff": "@@ -0,0 +1,215 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.discovery.gce;\n+\n+import com.sun.net.httpserver.Headers;\n+import com.sun.net.httpserver.HttpServer;\n+import com.sun.net.httpserver.HttpsConfigurator;\n+import com.sun.net.httpserver.HttpsServer;\n+import org.elasticsearch.cloud.gce.GceComputeServiceImpl;\n+import org.elasticsearch.common.SuppressForbidden;\n+import org.elasticsearch.common.io.FileSystemUtils;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.settings.SettingsModule;\n+import org.elasticsearch.plugin.discovery.gce.GceDiscoveryPlugin;\n+import org.elasticsearch.plugins.Plugin;\n+import org.elasticsearch.test.ESIntegTestCase;\n+import org.junit.AfterClass;\n+import org.junit.BeforeClass;\n+\n+import javax.net.ssl.KeyManagerFactory;\n+import javax.net.ssl.SSLContext;\n+import javax.net.ssl.TrustManagerFactory;\n+import java.io.IOException;\n+import java.io.InputStream;\n+import java.io.OutputStream;\n+import java.net.InetAddress;\n+import java.net.InetSocketAddress;\n+import java.nio.charset.StandardCharsets;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.security.KeyStore;\n+import java.util.Collection;\n+import java.util.Collections;\n+import java.util.List;\n+import java.util.concurrent.ExecutionException;\n+\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoTimeout;\n+\n+\n+@ESIntegTestCase.SuppressLocalMode\n+@ESIntegTestCase.ClusterScope(numDataNodes = 2, numClientNodes = 0)\n+@SuppressForbidden(reason = \"use http server\")\n+// TODO this should be a IT but currently all ITs in this project run against a real cluster\n+public class 
GceDiscoverTests extends ESIntegTestCase {\n+\n+ public static class TestPlugin extends Plugin {\n+\n+ @Override\n+ public String name() {\n+ return \"GceDiscoverTests\";\n+ }\n+\n+ @Override\n+ public String description() {\n+ return \"GceDiscoverTests\";\n+ }\n+\n+ public void onModule(SettingsModule module) {\n+ module.registerSetting(GceComputeServiceImpl.GCE_HOST);\n+ module.registerSetting(GceComputeServiceImpl.GCE_ROOT_URL);\n+ module.registerSetting(GceComputeServiceImpl.GCE_VALIDATE_CERTIFICATES);\n+ }\n+ }\n+\n+ private static HttpsServer httpsServer;\n+ private static HttpServer httpServer;\n+ private static Path logDir;\n+\n+ @Override\n+ protected Collection<Class<? extends Plugin>> nodePlugins() {\n+ return pluginList(GceDiscoveryPlugin.class, TestPlugin.class);\n+ }\n+\n+ @Override\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ Path resolve = logDir.resolve(Integer.toString(nodeOrdinal));\n+ try {\n+ Files.createDirectory(resolve);\n+ } catch (IOException e) {\n+ throw new RuntimeException(e);\n+ }\n+ return Settings.builder().put(super.nodeSettings(nodeOrdinal))\n+ .put(\"discovery.type\", \"gce\")\n+ .put(\"path.logs\", resolve)\n+ .put(\"transport.tcp.port\", 0)\n+ .put(\"node.portsfile\", \"true\")\n+ .put(\"cloud.gce.project_id\", \"testproject\")\n+ .put(\"cloud.gce.zone\", \"primaryzone\")\n+ .put(\"discovery.initial_state_timeout\", \"1s\")\n+ .put(\"cloud.gce.host\", \"http://\" + httpServer.getAddress().getHostName() + \":\" + httpServer.getAddress().getPort())\n+ .put(\"cloud.gce.root_url\", \"https://\" + httpsServer.getAddress().getHostName() +\n+ \":\" + httpsServer.getAddress().getPort())\n+ // this is annoying but by default the client pulls a static list of trusted CAs\n+ .put(\"cloud.gce.validate_certificates\", false)\n+ .build();\n+ }\n+\n+ @BeforeClass\n+ public static void startHttpd() throws Exception {\n+ logDir = createTempDir();\n+ SSLContext sslContext = getSSLContext();\n+ httpsServer = HttpsServer.create(new InetSocketAddress(InetAddress.getLoopbackAddress().getHostAddress(), 0), 0);\n+ httpServer = HttpServer.create(new InetSocketAddress(InetAddress.getLoopbackAddress().getHostAddress(), 0), 0);\n+ httpsServer.setHttpsConfigurator(new HttpsConfigurator(sslContext));\n+ httpServer.createContext(\"/computeMetadata/v1/instance/service-accounts/default/token\", (s) -> {\n+ String response = GceComputeServiceMock.readGoogleInternalJsonResponse(\n+ \"http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token\");\n+ byte[] responseAsBytes = response.getBytes(StandardCharsets.UTF_8);\n+ s.sendResponseHeaders(200, responseAsBytes.length);\n+ OutputStream responseBody = s.getResponseBody();\n+ responseBody.write(responseAsBytes);\n+ responseBody.close();\n+ });\n+\n+ httpsServer.createContext(\"/compute/v1/projects/testproject/zones/primaryzone/instances\", (s) -> {\n+ Headers headers = s.getResponseHeaders();\n+ headers.add(\"Content-Type\", \"application/json; charset=UTF-8\");\n+ ESLogger logger = Loggers.getLogger(GceDiscoverTests.class);\n+ try {\n+ Path[] files = FileSystemUtils.files(logDir);\n+ StringBuilder builder = new StringBuilder(\"{\\\"id\\\": \\\"dummy\\\",\\\"items\\\":[\");\n+ int foundFiles = 0;\n+ for (int i = 0; i < files.length; i++) {\n+ Path resolve = files[i].resolve(\"transport.ports\");\n+ if (Files.exists(resolve)) {\n+ if (foundFiles++ > 0) {\n+ builder.append(\",\");\n+ }\n+ List<String> addressses = Files.readAllLines(resolve);\n+ Collections.shuffle(addressses, random());\n+ 
logger.debug(\"addresses for node: [{}] published addresses [{}]\", files[i].getFileName(), addressses);\n+ builder.append(\"{\\\"description\\\": \\\"ES Node \").append(files[i].getFileName())\n+ .append(\"\\\",\\\"networkInterfaces\\\": [ {\");\n+ builder.append(\"\\\"networkIP\\\": \\\"\").append(addressses.get(0)).append(\"\\\"}],\");\n+ builder.append(\"\\\"status\\\" : \\\"RUNNING\\\"}\");\n+ }\n+ }\n+ builder.append(\"]}\");\n+ String responseString = builder.toString();\n+ final byte[] responseAsBytes = responseString.getBytes(StandardCharsets.UTF_8);\n+ s.sendResponseHeaders(200, responseAsBytes.length);\n+ OutputStream responseBody = s.getResponseBody();\n+ responseBody.write(responseAsBytes);\n+ responseBody.close();\n+ } catch (Exception e) {\n+ //\n+ byte[] responseAsBytes = (\"{ \\\"error\\\" : {\\\"message\\\" : \\\"\" + e.toString() + \"\\\" } }\").getBytes(StandardCharsets.UTF_8);\n+ s.sendResponseHeaders(500, responseAsBytes.length);\n+ OutputStream responseBody = s.getResponseBody();\n+ responseBody.write(responseAsBytes);\n+ responseBody.close();\n+ }\n+\n+\n+ });\n+ httpsServer.start();\n+ httpServer.start();\n+ }\n+\n+ private static SSLContext getSSLContext() throws Exception{\n+ char[] passphrase = \"keypass\".toCharArray();\n+ KeyStore ks = KeyStore.getInstance(\"JKS\");\n+ try (InputStream stream = GceDiscoverTests.class.getResourceAsStream(\"/test-node.jks\")) {\n+ assertNotNull(\"can't find keystore file\", stream);\n+ ks.load(stream, passphrase);\n+ }\n+ KeyManagerFactory kmf = KeyManagerFactory.getInstance(\"SunX509\");\n+ kmf.init(ks, passphrase);\n+ TrustManagerFactory tmf = TrustManagerFactory.getInstance(\"SunX509\");\n+ tmf.init(ks);\n+ SSLContext ssl = SSLContext.getInstance(\"TLS\");\n+ ssl.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);\n+ return ssl;\n+ }\n+\n+ @AfterClass\n+ public static void stopHttpd() throws IOException {\n+ for (int i = 0; i < internalCluster().size(); i++) {\n+ // shut them all down otherwise we get spammed with connection refused exceptions\n+ internalCluster().stopRandomDataNode();\n+ }\n+ httpsServer.stop(0);\n+ httpServer.stop(0);\n+ httpsServer = null;\n+ httpServer = null;\n+ logDir = null;\n+ }\n+\n+ public void testJoin() throws ExecutionException, InterruptedException {\n+ // only wait for the cluster to form\n+ assertNoTimeout(client().admin().cluster().prepareHealth().setWaitForNodes(Integer.toString(2)).get());\n+ // add one more node and wait for it to join\n+ internalCluster().startDataOnlyNodeAsync().get();\n+ assertNoTimeout(client().admin().cluster().prepareHealth().setWaitForNodes(Integer.toString(3)).get());\n+ }\n+}", "filename": "plugins/discovery-gce/src/test/java/org/elasticsearch/discovery/gce/GceDiscoverTests.java", "status": "added" } ] }
{ "body": "Currently, a data node deletes indices by evaluating the cluster state. If a new cluster state comes in it is compared to the last known cluster state, and if the new state does not contain an index that the node has in its last cluster state, then this index is deleted.\n\nThis could cause data to be deleted if the data folder of all master nodes was lost (#8823): \n\nAll master nodes of a cluster go down at the same time and their data folders cannot be recovered. \nA new master is brought up but it does not have any indices in its cluster state because the data was lost.\nBecause all other node are data nodes it cannot get the cluster state from them too and therefore sends a cluster state without any indices in it to the data nodes. The data nodes then delete all their data. \n\nOn the master branch we prevent this now by checking if the current cluster state comes from a different master than the previous one and if so, we keep the indices and import them as dangling (see #9952, ClusterChangedEvent).\n\nWhile this prevents the deletion, it also means that we might in other cases not delete indices although we should.\n\nExample:\n1. two masters eligible nodes, m1 is master, one data node (d).\n2. m1, m2 and d are on cluster state version 1 that contains and index\n3. The index is deleted through the API, causing m1 to send cluster state 2 which does not contain the index to m2 and d that should trigger the actual index deletion.\n4. m1 goes down\n5. m2 receives the new cluster state but d does not (network issues etc)\n6. m2 is elected master and sends cluster state 3 to d which again does not contain the index\n7. d will not delete the index because the state comes from a different master than cluster state 1 (the last one it knows of) and will therefore not delete the index and instead import it back into the cluster \n\nCurrently there is no way for a data node to decide if an index should actually be deleted or not if the cluster state that triggers the delete comes from a new master. We chose between: (1) deleting all data in case a node receives an empty cluster state or (2) run the risk to keep indices around that should actually be deleted.\n\nWe decided for (2) in #9952. Just opening this issue so that this behavior is documented.\n", "comments": [ { "body": "@brwe what about making the delete index request wait for responses from the data nodes? then the request can report success/failure?\n", "created_at": "2015-06-15T10:00:43Z" }, { "body": "@clintongormley the delete index API does wait for data nodes to confirm the deletion. The above scenario will trigger the call to time out (it waits for an ack from the data node that will not come). If people then check the CS, they will see that the index was deleted. However, at a later stage, once the data rejoins the cluster and the new master, the index will be reimported.\n", "created_at": "2015-06-15T11:39:50Z" }, { "body": "Ok understood. +1\n", "created_at": "2015-06-15T11:43:19Z" }, { "body": "@bleskes is this still an issue?\n", "created_at": "2016-01-18T20:28:13Z" }, { "body": "Sadly it is. However, thinking about it again I realized that we can easily detect the “new empty” master danger by comparing cluster uuid - a new master will generate a new one. Agreed with marking as adopt me. Although it sounds scary it’s quite an easy fix and is a good entry point to the cluster state universe. 
If anyone wants to pick this up, please ping me :)\n", "created_at": "2016-01-19T10:54:08Z" } ], "number": 11665, "title": "Concurrent deletion of indices and master failure can cause indices to be reimported" }
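A minimal sketch of the cluster-UUID check proposed in the last comment, assuming the real `ClusterState#metaData()` and `MetaData#clusterUUID()` accessors (the class and method names below are illustrative): a master started without the old data folder generates a fresh cluster UUID, so a UUID mismatch identifies the "reset master" case.

```java
import org.elasticsearch.cluster.ClusterState;

// Illustrative helper, not actual Elasticsearch code: returns true when the new
// cluster state comes from a master that generated a fresh cluster UUID,
// i.e. a master started without the old data folder.
public class ResetMasterCheckSketch {
    static boolean cameFromResetMaster(ClusterState previousState, ClusterState currentState) {
        String previousUUID = previousState.metaData().clusterUUID();
        String currentUUID = currentState.metaData().clusterUUID();
        return previousUUID.equals(currentUUID) == false;
    }
}
```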
{ "body": "If a node was isolated from the cluster while a delete was happening, the node will ignore the deleted operation when rejoining as we couldn't detect whether the new master genuinely deleted the indices or it is a new fresh \"reset\" master that was started without the old data folder. We can now be smarter and detect these reset masters and actually delete the indices on the node if its not the case of a reset master.\n\nNote that this new protection doesn't hold if the node was shut down. In that case it's indices will still be imported as dangling indices.\n\nCloses #11665\n", "number": 16825, "review_comments": [ { "body": "Please don't remove this method here, it would make this a breaking change in that it would break any plugins that relied on watching for ClusterChangeEvents where the blocks have changed\n", "created_at": "2016-02-26T18:11:32Z" }, { "body": "same here, why did you remove `nodesAdded` and `nodesChanged`?\n", "created_at": "2016-02-26T18:12:09Z" }, { "body": "Good point, I only saw they weren't used in ES core so removed them to cleanup, but I'll put them back in (or verify plugins don't use them either, but it is probably not worth doing as part of this commit).\n", "created_at": "2016-02-26T18:15:29Z" }, { "body": "can we add a note saying this is an reference level equality and not a true equal.\n", "created_at": "2016-02-29T13:42:31Z" }, { "body": "same here re type of equality.\n", "created_at": "2016-02-29T13:42:56Z" }, { "body": "I think we can fold this test into one test and rarely change the cluster uuid?\n", "created_at": "2016-02-29T13:49:06Z" }, { "body": "can we test both node addition and removal (in one cluster state)?\n", "created_at": "2016-02-29T13:49:44Z" }, { "body": "Done, I also added that note to every other reference equality check in the class \n", "created_at": "2016-02-29T15:21:30Z" }, { "body": "Done, this method actually does also check node addition as well, I just didn't name it well because the `ClusterChangedEvent` only contains a `nodesRemoved` method, not a `nodesAdded` method. So I changed the test method's name. \n", "created_at": "2016-02-29T15:24:11Z" }, { "body": "Does it checks the case where nodes are added and removed in one cluster state update? \n\nAlso I'm confused - there is a `org.elasticsearch.cluster.ClusterChangedEvent#nodesAdded` ?\n", "created_at": "2016-02-29T15:30:37Z" }, { "body": "@bleskes I'm not clear on what is meant here? The `testMetaDataChangesOnNoMasterChange` and `testMetaDataChangesOnNewClusterUUID` tests each use this helper method as the test is essentially the same except for whether we change the cluster UUID or not. I figured we needed a test to check metadata changes when the cluster UUID is changed and one when it is not.\n", "created_at": "2016-02-29T15:32:32Z" }, { "body": "@bleskes Sorry you are correct, there is a `nodesAdded`. What happened was, I originally took it out because I saw it was not used anywhere in core, until @dakrone pointed out that they could be used by plugins and shouldn't be removed. So I will add a test in here for `nodesAdded` as well (all part of the same test) and include a scenario where nodes are both added and removed in the same cluster state update.\n", "created_at": "2016-02-29T15:34:47Z" }, { "body": "Also, I'll include a test for the `nodesChanged` method (as part of the same test).\n", "created_at": "2016-02-29T15:38:23Z" } ], "title": "Index deletes not applied when cluster UUID has changed" }
{ "commits": [ { "message": "Index deletes not applied when cluster UUID has changed\n\nIf a node was isolated from the cluster while a delete was happening,\nthe node will ignore the deleted operation when rejoining as we couldn't\ndetect whether the new master genuinely deleted the indices or it is a\nnew fresh \"reset\" master that was started without the old data folder.\nWe can now be smarter and detect these reset masters and actually delete\nthe indices on the node if its not the case of a reset master.\n\nNote that this new protection doesn't hold if the node was shut down. In\nthat case it's indices will still be imported as dangling indices.\n\nCloses #11665" } ], "files": [ { "diff": "@@ -25,12 +25,12 @@\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n \n import java.util.ArrayList;\n-import java.util.Arrays;\n import java.util.Collections;\n import java.util.List;\n+import java.util.Objects;\n \n /**\n- *\n+ * An event received by the local node, signaling that the cluster state has changed.\n */\n public class ClusterChangedEvent {\n \n@@ -43,6 +43,9 @@ public class ClusterChangedEvent {\n private final DiscoveryNodes.Delta nodesDelta;\n \n public ClusterChangedEvent(String source, ClusterState state, ClusterState previousState) {\n+ Objects.requireNonNull(source, \"source must not be null\");\n+ Objects.requireNonNull(state, \"state must not be null\");\n+ Objects.requireNonNull(previousState, \"previousState must not be null\");\n this.source = source;\n this.state = state;\n this.previousState = previousState;\n@@ -56,19 +59,35 @@ public String source() {\n return this.source;\n }\n \n+ /**\n+ * The new cluster state that caused this change event.\n+ */\n public ClusterState state() {\n return this.state;\n }\n \n+ /**\n+ * The previous cluster state for this change event.\n+ */\n public ClusterState previousState() {\n return this.previousState;\n }\n \n+ /**\n+ * Returns <code>true</code> iff the routing tables (for all indices) have\n+ * changed between the previous cluster state and the current cluster state.\n+ * Note that this is an object reference equality test, not an equals test.\n+ */\n public boolean routingTableChanged() {\n return state.routingTable() != previousState.routingTable();\n }\n \n+ /**\n+ * Returns <code>true</code> iff the routing table has changed for the given index.\n+ * Note that this is an object reference equality test, not an equals test.\n+ */\n public boolean indexRoutingTableChanged(String index) {\n+ Objects.requireNonNull(index, \"index must not be null\");\n if (!state.routingTable().hasIndex(index) && !previousState.routingTable().hasIndex(index)) {\n return false;\n }\n@@ -82,9 +101,6 @@ public boolean indexRoutingTableChanged(String index) {\n * Returns the indices created in this event\n */\n public List<String> indicesCreated() {\n- if (previousState == null) {\n- return Arrays.asList(state.metaData().indices().keys().toArray(String.class));\n- }\n if (!metaDataChanged()) {\n return Collections.emptyList();\n }\n@@ -105,20 +121,14 @@ public List<String> indicesCreated() {\n * Returns the indices deleted in this event\n */\n public List<String> indicesDeleted() {\n-\n- // if the new cluster state has a new master then we cannot know if an index which is not in the cluster state\n- // is actually supposed to be deleted or imported as dangling instead. 
for example a new master might not have\n- // the index in its cluster state because it was started with an empty data folder and in this case we want to\n- // import as dangling. we check here for new master too to be on the safe side in this case.\n- // This means that under certain conditions deleted indices might be reimported if a master fails while the deletion\n- // request is issued and a node receives the cluster state that would trigger the deletion from the new master.\n- // See test MetaDataWriteDataNodesTests.testIndicesDeleted()\n+ // If the new cluster state has a new cluster UUID, the likely scenario is that a node was elected\n+ // master that has had its data directory wiped out, in which case we don't want to delete the indices and lose data;\n+ // rather we want to import them as dangling indices instead. So we check here if the cluster UUID differs from the previous\n+ // cluster UUID, in which case, we don't want to delete indices that the master erroneously believes shouldn't exist.\n+ // See test DiscoveryWithServiceDisruptionsIT.testIndicesDeleted()\n // See discussion on https://github.com/elastic/elasticsearch/pull/9952 and\n // https://github.com/elastic/elasticsearch/issues/11665\n- if (hasNewMaster() || previousState == null) {\n- return Collections.emptyList();\n- }\n- if (!metaDataChanged()) {\n+ if (metaDataChanged() == false || isNewCluster()) {\n return Collections.emptyList();\n }\n List<String> deleted = null;\n@@ -134,10 +144,20 @@ public List<String> indicesDeleted() {\n return deleted == null ? Collections.<String>emptyList() : deleted;\n }\n \n+ /**\n+ * Returns <code>true</code> iff the metadata for the cluster has changed between\n+ * the previous cluster state and the new cluster state. Note that this is an object\n+ * reference equality test, not an equals test.\n+ */\n public boolean metaDataChanged() {\n return state.metaData() != previousState.metaData();\n }\n \n+ /**\n+ * Returns <code>true</code> iff the {@link IndexMetaData} for a given index\n+ * has changed between the previous cluster state and the new cluster state.\n+ * Note that this is an object reference equality test, not an equals test.\n+ */\n public boolean indexMetaDataChanged(IndexMetaData current) {\n MetaData previousMetaData = previousState.metaData();\n if (previousMetaData == null) {\n@@ -152,46 +172,56 @@ public boolean indexMetaDataChanged(IndexMetaData current) {\n return true;\n }\n \n+ /**\n+ * Returns <code>true</code> iff the cluster level blocks have changed between cluster states.\n+ * Note that this is an object reference equality test, not an equals test.\n+ */\n public boolean blocksChanged() {\n return state.blocks() != previousState.blocks();\n }\n \n+ /**\n+ * Returns <code>true</code> iff the local node is the mater node of the cluster.\n+ */\n public boolean localNodeMaster() {\n return state.nodes().localNodeMaster();\n }\n \n+ /**\n+ * Returns the {@link org.elasticsearch.cluster.node.DiscoveryNodes.Delta} between\n+ * the previous cluster state and the new cluster state.\n+ */\n public DiscoveryNodes.Delta nodesDelta() {\n return this.nodesDelta;\n }\n \n+ /**\n+ * Returns <code>true</code> iff nodes have been removed from the cluster since the last cluster state.\n+ */\n public boolean nodesRemoved() {\n return nodesDelta.removed();\n }\n \n+ /**\n+ * Returns <code>true</code> iff nodes have been added from the cluster since the last cluster state.\n+ */\n public boolean nodesAdded() {\n return nodesDelta.added();\n }\n \n+ /**\n+ * Returns 
<code>true</code> iff nodes have been changed (added or removed) from the cluster since the last cluster state.\n+ */\n public boolean nodesChanged() {\n return nodesRemoved() || nodesAdded();\n }\n \n- /**\n- * Checks if this cluster state comes from a different master than the previous one.\n- * This is a workaround for the scenario where a node misses a cluster state that has either\n- * no master block or state not recovered flag set. In this case we must make sure that\n- * if an index is missing from the cluster state is not deleted immediately but instead imported\n- * as dangling. See discussion on https://github.com/elastic/elasticsearch/pull/9952\n- */\n- private boolean hasNewMaster() {\n- String oldMaster = previousState().getNodes().masterNodeId();\n- String newMaster = state().getNodes().masterNodeId();\n- if (oldMaster == null && newMaster == null) {\n- return false;\n- }\n- if (oldMaster == null && newMaster != null) {\n- return true;\n- }\n- return oldMaster.equals(newMaster) == false;\n+ // Determines whether or not the current cluster state represents an entirely\n+ // different cluster from the previous cluster state, which will happen when a\n+ // master node is elected that has never been part of the cluster before.\n+ private boolean isNewCluster() {\n+ final String prevClusterUUID = previousState.metaData().clusterUUID();\n+ final String currClusterUUID = state.metaData().clusterUUID();\n+ return prevClusterUUID.equals(currClusterUUID) == false;\n }\n-}\n\\ No newline at end of file\n+}", "filename": "core/src/main/java/org/elasticsearch/cluster/ClusterChangedEvent.java", "status": "modified" }, { "diff": "@@ -46,6 +46,11 @@\n */\n public class DiscoveryNode implements Streamable, ToXContent {\n \n+ public static final String DATA_ATTR = \"data\";\n+ public static final String MASTER_ATTR = \"master\";\n+ public static final String CLIENT_ATTR = \"client\";\n+ public static final String INGEST_ATTR = \"ingest\";\n+\n public static boolean localNode(Settings settings) {\n if (Node.NODE_LOCAL_SETTING.exists(settings)) {\n return Node.NODE_LOCAL_SETTING.get(settings);\n@@ -274,7 +279,7 @@ public ImmutableOpenMap<String, String> getAttributes() {\n * Should this node hold data (shards) or not.\n */\n public boolean dataNode() {\n- String data = attributes.get(\"data\");\n+ String data = attributes.get(DATA_ATTR);\n if (data == null) {\n return !clientNode();\n }\n@@ -292,7 +297,7 @@ public boolean isDataNode() {\n * Is the node a client node or not.\n */\n public boolean clientNode() {\n- String client = attributes.get(\"client\");\n+ String client = attributes.get(CLIENT_ATTR);\n return client != null && Booleans.parseBooleanExact(client);\n }\n \n@@ -304,7 +309,7 @@ public boolean isClientNode() {\n * Can this node become master or not.\n */\n public boolean masterNode() {\n- String master = attributes.get(\"master\");\n+ String master = attributes.get(MASTER_ATTR);\n if (master == null) {\n return !clientNode();\n }\n@@ -322,7 +327,7 @@ public boolean isMasterNode() {\n * Returns a boolean that tells whether this an ingest node or not\n */\n public boolean isIngestNode() {\n- String ingest = attributes.get(\"ingest\");\n+ String ingest = attributes.get(INGEST_ATTR);\n return ingest == null ? true : Booleans.parseBooleanExact(ingest);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java", "status": "modified" }, { "diff": "@@ -0,0 +1,375 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. 
See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster;\n+\n+import com.carrotsearch.hppc.cursors.ObjectCursor;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.cluster.routing.RoutingTable;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.collect.MapBuilder;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.transport.DummyTransportAddress;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.List;\n+import java.util.Map;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+\n+/**\n+ * Tests for the {@link ClusterChangedEvent} class.\n+ */\n+public class ClusterChangedEventTests extends ESTestCase {\n+\n+ private static final ClusterName TEST_CLUSTER_NAME = new ClusterName(\"test\");\n+ private static final int INDICES_CHANGE_NUM_TESTS = 5;\n+ private static final String NODE_ID_PREFIX = \"node_\";\n+ private static final String INITIAL_CLUSTER_ID = Strings.randomBase64UUID();\n+ // the initial indices which every cluster state test starts out with\n+ private static final List<String> initialIndices = Arrays.asList(\"idx1\", \"idx2\", \"idx3\");\n+ // index settings\n+ private static final Settings settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n+\n+ /**\n+ * Test basic properties of the ClusterChangedEvent class:\n+ * (1) make sure there are no null values for any of its properties\n+ * (2) make sure you can't create a ClusterChangedEvent with any null values\n+ */\n+ public void testBasicProperties() {\n+ ClusterState newState = createSimpleClusterState();\n+ ClusterState previousState = createSimpleClusterState();\n+ ClusterChangedEvent event = new ClusterChangedEvent(\"_na_\", newState, previousState);\n+ assertThat(event.source(), equalTo(\"_na_\"));\n+ assertThat(event.state(), equalTo(newState));\n+ assertThat(event.previousState(), equalTo(previousState));\n+ assertNotNull(\"nodesDelta should not be null\", event.nodesDelta());\n+\n+ // should not be able to create a ClusterChangedEvent with null values for any of the constructor args\n+ try {\n+ event = new ClusterChangedEvent(null, newState, previousState);\n+ fail(\"should not have created a ClusterChangedEvent from a null source: \" + event.source());\n+ } catch (NullPointerException e) {\n+ }\n+ try {\n+ event = new ClusterChangedEvent(\"_na_\", null, previousState);\n+ fail(\"should not have created a ClusterChangedEvent from a null state: \" + event.state());\n+ } catch 
(NullPointerException e) {\n+ }\n+ try {\n+ event = new ClusterChangedEvent(\"_na_\", newState, null);\n+ fail(\"should not have created a ClusterChangedEvent from a null previousState: \" + event.previousState());\n+ } catch (NullPointerException e) {\n+ }\n+ }\n+\n+ /**\n+ * Test whether the ClusterChangedEvent returns the correct value for whether the local node is master,\n+ * based on what was set on the cluster state.\n+ */\n+ public void testLocalNodeIsMaster() {\n+ final int numNodesInCluster = 3;\n+ ClusterState previousState = createSimpleClusterState();\n+ ClusterState newState = createState(numNodesInCluster, true, initialIndices);\n+ ClusterChangedEvent event = new ClusterChangedEvent(\"_na_\", newState, previousState);\n+ assertTrue(\"local node should be master\", event.localNodeMaster());\n+\n+ newState = createState(numNodesInCluster, false, initialIndices);\n+ event = new ClusterChangedEvent(\"_na_\", newState, previousState);\n+ assertFalse(\"local node should not be master\", event.localNodeMaster());\n+ }\n+\n+ /**\n+ * Test that the indices created and indices deleted lists between two cluster states\n+ * are correct when there is no change in the cluster UUID. Also tests metadata equality\n+ * between cluster states.\n+ */\n+ public void testMetaDataChangesOnNoMasterChange() {\n+ metaDataChangesCheck(false);\n+ }\n+\n+ /**\n+ * Test that the indices created and indices deleted lists between two cluster states\n+ * are correct when there is a change in the cluster UUID. Also tests metadata equality\n+ * between cluster states.\n+ */\n+ public void testMetaDataChangesOnNewClusterUUID() {\n+ metaDataChangesCheck(true);\n+ }\n+\n+ /**\n+ * Test the index metadata change check.\n+ */\n+ public void testIndexMetaDataChange() {\n+ final int numNodesInCluster = 3;\n+ final ClusterState originalState = createState(numNodesInCluster, randomBoolean(), initialIndices);\n+ final ClusterState newState = originalState; // doesn't matter for this test, just need a non-null value\n+ final ClusterChangedEvent event = new ClusterChangedEvent(\"_na_\", originalState, newState);\n+\n+ // test when its not the same IndexMetaData\n+ final String indexId = initialIndices.get(0);\n+ final IndexMetaData originalIndexMeta = originalState.metaData().index(indexId);\n+ // make sure the metadata is actually on the cluster state\n+ assertNotNull(\"IndexMetaData for \" + indexId + \" should exist on the cluster state\", originalIndexMeta);\n+ IndexMetaData newIndexMeta = createIndexMetadata(indexId, originalIndexMeta.getVersion() + 1);\n+ assertTrue(\"IndexMetaData with different version numbers must be considered changed\", event.indexMetaDataChanged(newIndexMeta));\n+\n+ // test when it doesn't exist\n+ newIndexMeta = createIndexMetadata(\"doesntexist\");\n+ assertTrue(\"IndexMetaData that didn't previously exist should be considered changed\", event.indexMetaDataChanged(newIndexMeta));\n+\n+ // test when its the same IndexMetaData\n+ assertFalse(\"IndexMetaData should be the same\", event.indexMetaDataChanged(originalIndexMeta));\n+ }\n+\n+ /**\n+ * Test nodes added/removed/changed checks.\n+ */\n+ public void testNodesAddedAndRemovedAndChanged() {\n+ final int numNodesInCluster = 4;\n+ final ClusterState originalState = createState(numNodesInCluster, randomBoolean(), initialIndices);\n+\n+ // test when nodes have not been added or removed between cluster states\n+ ClusterState newState = createState(numNodesInCluster, randomBoolean(), initialIndices);\n+ ClusterChangedEvent event = new 
ClusterChangedEvent(\"_na_\", newState, originalState);\n+ assertFalse(\"Nodes should not have been added between cluster states\", event.nodesAdded());\n+ assertFalse(\"Nodes should not have been removed between cluster states\", event.nodesRemoved());\n+ assertFalse(\"Nodes should not have been changed between cluster states\", event.nodesChanged());\n+\n+ // test when nodes have been removed between cluster states\n+ newState = createState(numNodesInCluster - 1, randomBoolean(), initialIndices);\n+ event = new ClusterChangedEvent(\"_na_\", newState, originalState);\n+ assertTrue(\"Nodes should have been removed between cluster states\", event.nodesRemoved());\n+ assertFalse(\"Nodes should not have been added between cluster states\", event.nodesAdded());\n+ assertTrue(\"Nodes should have been changed between cluster states\", event.nodesChanged());\n+\n+ // test when nodes have been added between cluster states\n+ newState = createState(numNodesInCluster + 1, randomBoolean(), initialIndices);\n+ event = new ClusterChangedEvent(\"_na_\", newState, originalState);\n+ assertFalse(\"Nodes should not have been removed between cluster states\", event.nodesRemoved());\n+ assertTrue(\"Nodes should have been added between cluster states\", event.nodesAdded());\n+ assertTrue(\"Nodes should have been changed between cluster states\", event.nodesChanged());\n+\n+ // test when nodes both added and removed between cluster states\n+ // here we reuse the newState from the previous run which already added extra nodes\n+ newState = nextState(newState, randomBoolean(), Collections.emptyList(), Collections.emptyList(), 1);\n+ event = new ClusterChangedEvent(\"_na_\", newState, originalState);\n+ assertTrue(\"Nodes should have been removed between cluster states\", event.nodesRemoved());\n+ assertTrue(\"Nodes should have been added between cluster states\", event.nodesAdded());\n+ assertTrue(\"Nodes should have been changed between cluster states\", event.nodesChanged());\n+ }\n+\n+ /**\n+ * Test the routing table changes checks.\n+ */\n+ public void testRoutingTableChanges() {\n+ final int numNodesInCluster = 3;\n+ final ClusterState originalState = createState(numNodesInCluster, randomBoolean(), initialIndices);\n+\n+ // routing tables and index routing tables are same object\n+ ClusterState newState = ClusterState.builder(originalState).build();\n+ ClusterChangedEvent event = new ClusterChangedEvent(\"_na_\", originalState, newState);\n+ assertFalse(\"routing tables should be the same object\", event.routingTableChanged());\n+ assertFalse(\"index routing table should be the same object\", event.indexRoutingTableChanged(initialIndices.get(0)));\n+\n+ // routing tables and index routing tables aren't same object\n+ newState = createState(numNodesInCluster, randomBoolean(), initialIndices);\n+ event = new ClusterChangedEvent(\"_na_\", originalState, newState);\n+ assertTrue(\"routing tables should not be the same object\", event.routingTableChanged());\n+ assertTrue(\"index routing table should not be the same object\", event.indexRoutingTableChanged(initialIndices.get(0)));\n+\n+ // index routing tables are different because they don't exist\n+ newState = createState(numNodesInCluster, randomBoolean(), initialIndices.subList(1, initialIndices.size()));\n+ event = new ClusterChangedEvent(\"_na_\", originalState, newState);\n+ assertTrue(\"routing tables should not be the same object\", event.routingTableChanged());\n+ assertTrue(\"index routing table should not be the same object\", 
event.indexRoutingTableChanged(initialIndices.get(0)));\n+ }\n+\n+ // Tests that the indices change list is correct as well as metadata equality when the metadata has changed.\n+ private static void metaDataChangesCheck(final boolean changeClusterUUID) {\n+ final int numNodesInCluster = 3;\n+ for (int i = 0; i < INDICES_CHANGE_NUM_TESTS; i++) {\n+ final ClusterState previousState = createState(numNodesInCluster, randomBoolean(), initialIndices);\n+ final int numAdd = randomIntBetween(0, 5); // add random # of indices to the next cluster state\n+ final int numDel = randomIntBetween(0, initialIndices.size()); // delete random # of indices from the next cluster state\n+ final List<String> addedIndices = addIndices(numAdd);\n+ final List<String> delIndices = delIndices(numDel, initialIndices);\n+ final ClusterState newState = nextState(previousState, changeClusterUUID, addedIndices, delIndices, 0);\n+ final ClusterChangedEvent event = new ClusterChangedEvent(\"_na_\", newState, previousState);\n+ final List<String> addsFromEvent = event.indicesCreated();\n+ final List<String> delsFromEvent = event.indicesDeleted();\n+ Collections.sort(addsFromEvent);\n+ Collections.sort(delsFromEvent);\n+ assertThat(addsFromEvent, equalTo(addedIndices));\n+ assertThat(delsFromEvent, changeClusterUUID ? equalTo(Collections.emptyList()) : equalTo(delIndices));\n+ assertThat(event.metaDataChanged(), equalTo(changeClusterUUID || addedIndices.size() > 0 || delIndices.size() > 0));\n+ }\n+ }\n+\n+ private static ClusterState createSimpleClusterState() {\n+ return ClusterState.builder(TEST_CLUSTER_NAME).build();\n+ }\n+\n+ // Create a basic cluster state with a given set of indices\n+ private static ClusterState createState(final int numNodes, final boolean isLocalMaster, final List<String> indices) {\n+ final MetaData metaData = createMetaData(indices);\n+ return ClusterState.builder(TEST_CLUSTER_NAME)\n+ .nodes(createDiscoveryNodes(numNodes, isLocalMaster))\n+ .metaData(metaData)\n+ .routingTable(createRoutingTable(1, metaData))\n+ .build();\n+ }\n+\n+ // Create a modified cluster state from another one, but with some number of indices added and deleted.\n+ private static ClusterState nextState(final ClusterState previousState, final boolean changeClusterUUID,\n+ final List<String> addedIndices, final List<String> deletedIndices,\n+ final int numNodesToRemove) {\n+ final ClusterState.Builder builder = ClusterState.builder(previousState);\n+ builder.stateUUID(Strings.randomBase64UUID());\n+ final MetaData.Builder metaBuilder = MetaData.builder(previousState.metaData());\n+ if (changeClusterUUID || addedIndices.size() > 0 || deletedIndices.size() > 0) {\n+ // there is some change in metadata cluster state\n+ if (changeClusterUUID) {\n+ metaBuilder.clusterUUID(Strings.randomBase64UUID());\n+ }\n+ for (String index : addedIndices) {\n+ metaBuilder.put(createIndexMetadata(index), true);\n+ }\n+ for (String index : deletedIndices) {\n+ metaBuilder.remove(index);\n+ }\n+ builder.metaData(metaBuilder);\n+ }\n+ if (numNodesToRemove > 0) {\n+ final int discoveryNodesSize = previousState.getNodes().size();\n+ final DiscoveryNodes.Builder nodesBuilder = DiscoveryNodes.builder(previousState.getNodes());\n+ for (int i = 0; i < numNodesToRemove && i < discoveryNodesSize; i++) {\n+ nodesBuilder.remove(NODE_ID_PREFIX + i);\n+ }\n+ builder.nodes(nodesBuilder);\n+ }\n+ return builder.build();\n+ }\n+\n+ // Create the discovery nodes for a cluster state. 
For our testing purposes, we want\n+ // the first to be master, the second to be master eligible, the third to be a data node,\n+ // and the remainder can be any kinds of nodes (master eligible, data, or both).\n+ private static DiscoveryNodes createDiscoveryNodes(final int numNodes, final boolean isLocalMaster) {\n+ assert (numNodes >= 3) : \"the initial cluster state for event change tests should have a minimum of 3 nodes \" +\n+ \"so there are a minimum of 2 master nodes for testing master change events.\";\n+ final DiscoveryNodes.Builder builder = DiscoveryNodes.builder();\n+ final int localNodeIndex = isLocalMaster ? 0 : randomIntBetween(1, numNodes - 1); // randomly assign the local node if not master\n+ for (int i = 0; i < numNodes; i++) {\n+ final String nodeId = NODE_ID_PREFIX + i;\n+ boolean isMasterEligible = false;\n+ boolean isData = false;\n+ if (i == 0) {\n+ // the master node\n+ builder.masterNodeId(nodeId);\n+ isMasterEligible = true;\n+ } else if (i == 1) {\n+ // the alternate master node\n+ isMasterEligible = true;\n+ } else if (i == 2) {\n+ // we need at least one data node\n+ isData = true;\n+ } else {\n+ // remaining nodes can be anything (except for master)\n+ isMasterEligible = randomBoolean();\n+ isData = randomBoolean();\n+ }\n+ final DiscoveryNode node = newNode(nodeId, isMasterEligible, isData);\n+ builder.put(node);\n+ if (i == localNodeIndex) {\n+ builder.localNodeId(nodeId);\n+ }\n+ }\n+ return builder.build();\n+ }\n+\n+ // Create a new DiscoveryNode\n+ private static DiscoveryNode newNode(final String nodeId, boolean isMasterEligible, boolean isData) {\n+ final Map<String, String> attributes = MapBuilder.<String, String>newMapBuilder()\n+ .put(DiscoveryNode.MASTER_ATTR, isMasterEligible ? \"true\" : \"false\")\n+ .put(DiscoveryNode.DATA_ATTR, isData ? 
\"true\": \"false\")\n+ .immutableMap();\n+ return new DiscoveryNode(nodeId, nodeId, DummyTransportAddress.INSTANCE, attributes, Version.CURRENT);\n+ }\n+\n+ // Create the metadata for a cluster state.\n+ private static MetaData createMetaData(final List<String> indices) {\n+ final MetaData.Builder builder = MetaData.builder();\n+ builder.clusterUUID(INITIAL_CLUSTER_ID);\n+ for (String index : indices) {\n+ builder.put(createIndexMetadata(index), true);\n+ }\n+ return builder.build();\n+ }\n+\n+ // Create the index metadata for a given index.\n+ private static IndexMetaData createIndexMetadata(final String index) {\n+ return createIndexMetadata(index, 1);\n+ }\n+\n+ // Create the index metadata for a given index, with the specified version.\n+ private static IndexMetaData createIndexMetadata(final String index, final long version) {\n+ return IndexMetaData.builder(index)\n+ .settings(settings)\n+ .numberOfShards(1)\n+ .numberOfReplicas(0)\n+ .creationDate(System.currentTimeMillis())\n+ .version(version)\n+ .build();\n+ }\n+\n+ // Create the routing table for a cluster state.\n+ private static RoutingTable createRoutingTable(final long version, final MetaData metaData) {\n+ final RoutingTable.Builder builder = RoutingTable.builder().version(version);\n+ for (ObjectCursor<IndexMetaData> cursor : metaData.indices().values()) {\n+ builder.addAsNew(cursor.value);\n+ }\n+ return builder.build();\n+ }\n+\n+ // Create a list of indices to add\n+ private static List<String> addIndices(final int numIndices) {\n+ final List<String> list = new ArrayList<>();\n+ for (int i = 0; i < numIndices; i++) {\n+ list.add(\"newIdx_\" + i);\n+ }\n+ return list;\n+ }\n+\n+ // Create a list of indices to delete from a list that already belongs to a particular cluster state.\n+ private static List<String> delIndices(final int numIndices, final List<String> currIndices) {\n+ final List<String> list = new ArrayList<>();\n+ for (int i = 0; i < numIndices; i++) {\n+ list.add(currIndices.get(i));\n+ }\n+ return list;\n+ }\n+\n+}", "filename": "core/src/test/java/org/elasticsearch/cluster/ClusterChangedEventTests.java", "status": "added" }, { "diff": "@@ -581,8 +581,7 @@ public void testMasterNodeGCs() throws Exception {\n \n // restore GC\n masterNodeDisruption.stopDisrupting();\n- ensureStableCluster(3, new TimeValue(DISRUPTION_HEALING_OVERHEAD.millis() + masterNodeDisruption.expectedTimeToHeal().millis()), false,\n- oldNonMasterNodes.get(0));\n+ ensureStableCluster(3, new TimeValue(DISRUPTION_HEALING_OVERHEAD.millis() + masterNodeDisruption.expectedTimeToHeal().millis()), false, oldNonMasterNodes.get(0));\n \n // make sure all nodes agree on master\n String newMaster = internalCluster().getMasterName();\n@@ -1072,11 +1071,13 @@ public boolean clearData(String nodeName) {\n assertTrue(client().prepareGet(\"index\", \"doc\", \"1\").get().isExists());\n }\n \n- // tests if indices are really deleted even if a master transition inbetween\n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/11665\")\n+ /**\n+ * Tests that indices are properly deleted even if there is a master transition in between.\n+ * Test for https://github.com/elastic/elasticsearch/issues/11665\n+ */\n public void testIndicesDeleted() throws Exception {\n configureUnicastCluster(3, null, 2);\n- InternalTestCluster.Async<List<String>> masterNodes= internalCluster().startMasterOnlyNodesAsync(2);\n+ InternalTestCluster.Async<List<String>> masterNodes = internalCluster().startMasterOnlyNodesAsync(2);\n InternalTestCluster.Async<String> 
dataNode = internalCluster().startDataOnlyNodeAsync();\n dataNode.get();\n masterNodes.get();", "filename": "core/src/test/java/org/elasticsearch/discovery/DiscoveryWithServiceDisruptionsIT.java", "status": "modified" } ] }
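To see the effect of the guard from the consumer side, here is a hypothetical cluster-state listener; it is not the actual IndicesClusterStateService code, and `deleteIndexStore` is a placeholder. With the cluster-UUID check in place, `indicesDeleted()` stays empty when a reset master is detected, so a listener like this never deletes data that should instead be imported as dangling.

```java
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterStateListener;

// Hypothetical consumer sketch showing how the guarded indicesDeleted() list
// would typically be used; deleteIndexStore() is a placeholder, not a real API.
public class DeleteIndicesListenerSketch implements ClusterStateListener {
    @Override
    public void clusterChanged(ClusterChangedEvent event) {
        // Empty when the cluster UUID changed (reset master), so nothing is deleted.
        for (String index : event.indicesDeleted()) {
            deleteIndexStore(index);
        }
    }

    private void deleteIndexStore(String index) {
        // placeholder for the on-disk delete
    }
}
```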
{ "body": "With the following configuration\n\n```\nnode.name: ${prompt.text}\n```\n\nElasticsearch 1.6.0 prompts you twice. Once for \"node.name\" and \"name\". Ultimately it uses the second one for the value of the configuration item:\n\n```\ndjschny:elasticsearch-1.6.0 djschny$ ../startElastic.sh \nEnter value for [node.name]: foo\nEnter value for [name]: bar\n[2015-06-09 16:10:03,405][INFO ][node ] [bar] version[1.6.0], pid[4836], build[cdd3ac4/2015-06-09T13:36:34Z]\n[2015-06-09 16:10:03,405][INFO ][node ] [bar] initializing ...\n```\n", "comments": [ { "body": "@jaymode please could you take a look\n", "created_at": "2015-06-12T14:04:01Z" }, { "body": "Looks to be still a problem on 2.1. Regression here? @GlenRSmith and I can confirm this occurs with the 2.1 release archive.\n", "created_at": "2015-12-17T23:47:41Z" }, { "body": "It seems to be independent of which config item you want to prompt for:\n\n```\nelasticsearch-2.1.0  bin/elasticsearch\nEnter value for [cluster.name]: foo\nEnter value for [cluster.name]: bar\n[2015-12-18 10:48:50,219][INFO ][node ] [Wendell Vaughn] version[2.1.0], pid[21231], build[72cd1f1/2015-11-18T22:40:03Z]\n[2015-12-18 10:48:50,220][INFO ][node ] [Wendell Vaughn] initializing ...\n[2015-12-18 10:48:50,280][INFO ][plugins ] [Wendell Vaughn] loaded [], sites []\n[2015-12-18 10:48:50,304][INFO ][env ] [Wendell Vaughn] using [1] data paths, mounts [[/ (/dev/mapper/fedora_josh--xps13-root)]], net usable_space [79.1gb], net total_space [233.9gb], spins? [no], types [ext4]\n[2015-12-18 10:48:51,761][INFO ][node ] [Wendell Vaughn] initialized\n[2015-12-18 10:48:51,762][INFO ][node ] [Wendell Vaughn] starting ...\n[2015-12-18 10:48:51,902][INFO ][transport ] [Wendell Vaughn] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}\n[2015-12-18 10:48:51,912][INFO ][discovery ] [Wendell Vaughn] bar/azOFxYXMQ6muCpGOpqvxSw\n```\n", "created_at": "2015-12-17T23:49:53Z" }, { "body": "Just confirmed on 2.1.1.\n", "created_at": "2015-12-18T00:00:40Z" }, { "body": "This is different than the original issue. I think what is happening is we first initialize the settings/environment in bootstrap so that we can eg init logging, but then when bootstrap creates the node, it passes in a fresh Settings, and the Node constructor again initializes settings/environment.\n", "created_at": "2015-12-18T06:52:08Z" }, { "body": "@jaymode could you take a look at this please?\n", "created_at": "2016-01-18T20:16:45Z" }, { "body": "As @rjernst said, this is a different issue that the original. `BootstrapCLIParser` was added which extends CLITool. CLITool prepares the settings and environment which includes prompting since CLITools are usually run outside of the bootstrap process. This causes the first prompt that is ignored. The second prompt that is used, comes from `Bootstrap`, which is passed to the node.\n\nPart of the issue is that the `BootstrapCLIParser` sets properties that will change the value of settings. I think we can solve this a few different ways:\n1. Pass in empty settings/null environment for this CLITool. If we try to create a valid environment here then we have to prepare the settings to ensure we parse the paths from the settings for our directories\n2. Do not use the `CLITool` infrastructure\n3. Prepare the environment in bootstrap, pass to `BootstrapCLIParser`. 
Re-prepare the settings/environment, passing in the already prepared settings.\n\n@spinscale @rjernst any thoughts?\n", "created_at": "2016-01-19T15:07:27Z" }, { "body": "I would say preparing the environment once would be the best option; it will just take some refactoring to pass it through. Running prepare already creates Environment twice (the first time so it can try to load the config file, which might have other paths like plugin path or data path). Really, I think we should simply not allow paths to be configured in elasticsearch.yml; instead they should only be set through sysprops. It might be something that could simplify the settings/env prep, to make this a little easier.\n", "created_at": "2016-01-19T19:46:38Z" }, { "body": "I have a fix for this that will come as part of #16579.\n", "created_at": "2016-02-24T02:40:12Z" }, { "body": "Closed by #17024.\n", "created_at": "2016-03-14T00:07:24Z" } ], "number": 11564, "title": "${prompt.text} and ${prompt.secret} double prompting" }
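A toy illustration of the failure mode described in the comments, in entirely hypothetical code (this is not the Elasticsearch settings preparer): any placeholder resolution that runs once in the CLI parser and again in Bootstrap asks the terminal twice, and the second answer is the one that sticks.

```java
import java.util.Scanner;

// Hypothetical sketch: resolving ${prompt.text} prompts on every preparation
// pass, so preparing settings twice prompts twice and the first value is lost.
public class DoublePromptSketch {
    static String prepare(String rawValue, Scanner terminal) {
        if ("${prompt.text}".equals(rawValue)) {
            System.out.print("Enter value for [node.name]: ");
            return terminal.nextLine(); // a prompt happens on every pass
        }
        return rawValue;
    }

    public static void main(String[] args) {
        Scanner terminal = new Scanner(System.in);
        String raw = "${prompt.text}";
        String first = prepare(raw, terminal);  // prompt #1: issued by the CLI parser's prepare
        String second = prepare(raw, terminal); // prompt #2: issued again by Bootstrap; this value wins
        System.out.println("ignored: " + first + ", used: " + second);
    }
}
```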
{ "body": "The purpose of this commit is to simplify the parsing of command-line\noptions. With it, the following breaking changes are made\n- double-dash properties are no longer supported\n- setting properties should be done via -E\n- only the prefix \"es.\" for setting properties is supported now\n\nThis pull request also removes setting properties to pass information\nabout the command line arguments daemonize and pidfile from the CLI\nparser to the bootstrap process.\n\nAs an added win, the environment is not prepared twice.\n\nCloses #16579, closes #11564 \n", "number": 16791, "review_comments": [ { "body": "This double-negative is kind of confusing, should be `if (Strings.hasText(pidFile)) {` instead, which also correctly handles strings like `\" \"`\n", "created_at": "2016-02-24T19:04:28Z" }, { "body": "`daemonize == false`?\n", "created_at": "2016-02-24T19:04:49Z" }, { "body": "These non-immutable classes :(\n", "created_at": "2016-02-24T19:08:53Z" }, { "body": "can we also support `--version` to follow the usual *nix conventions?\n", "created_at": "2016-02-24T19:11:47Z" }, { "body": "This can be simplified to:\n\n``` sh\nif [ \"$COMMAND\" = \"start\" ] || [ \"$COMMAND\" = \"version\" ]; then\n shift\nelse\n COMMAND=\"start\"\nfi\n```\n", "created_at": "2016-02-24T19:15:48Z" }, { "body": "Addressed in 73b12ac1915e3315c01acea49a8ef06b112e28b6.\n", "created_at": "2016-02-24T19:31:31Z" }, { "body": "Addressed in 8fcd9accbc4f38ffdb0783ba5e6b6ce211e745d2.\n", "created_at": "2016-02-24T19:31:36Z" }, { "body": "Good. Addressed in 00257f9f5d2611dbfbd11a0a9c543fe2cd57a60b.\n", "created_at": "2016-02-24T19:31:58Z" }, { "body": "@dakrone By the way, my preference would be to not be lenient here (i.e., assume `start` if no command is provided but I doubt that I will be able to convince others of that).\n", "created_at": "2016-02-24T19:38:28Z" }, { "body": "@dakrone Is there really a convention here? `ant` doesn't support `--version`, for example. I'm not sure I see the need for it, and from a technical perspective I just don't want to maintain two ways to do the same thing.\n", "created_at": "2016-02-24T19:41:17Z" }, { "body": "I think `ant` is a bad example (all of the Java tools are non-unix-y), but I'm totally +0 on adding it, if it's not too much trouble. I think it helps the \"principal of least surprise\":\n\n```\n~/es/elasticsearch:master λ ls --version\nls (GNU coreutils) 8.24\nCopyright (C) 2015 Free Software Foundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law.\n\nWritten by Richard M. Stallman and David MacKenzie.\n\n~/es/elasticsearch:pr/16791 λ gdb --version\nGNU gdb (GDB) Fedora 7.10.1-30.fc23\nCopyright (C) 2015 Free Software Foundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law. 
Type \"show copying\"\nand \"show warranty\" for details.\nThis GDB was configured as \"x86_64-redhat-linux-gnu\".\nType \"show configuration\" for configuration details.\nFor bug reporting instructions, please see:\n<http://www.gnu.org/software/gdb/bugs/>.\nFind the GDB manual and other documentation resources online at:\n<http://www.gnu.org/software/gdb/documentation/>.\nFor help, type \"help\".\nType \"apropos word\" to search for commands related to \"word\".\n\n~/es/elasticsearch:pr/16791 λ cat --version\ncat (GNU coreutils) 8.24\nCopyright (C) 2015 Free Software Foundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law.\n\nWritten by Torbjörn Granlund and Richard M. Stallman.\n\n~/es/elasticsearch:pr/16791 λ emacs --versio\nGNU Emacs 25.0.90.1\nCopyright (C) 2016 Free Software Foundation, Inc.\nGNU Emacs comes with ABSOLUTELY NO WARRANTY.\nYou may redistribute copies of GNU Emacs\nunder the terms of the GNU General Public License.\nFor more information about these matters, see the file named COPYING.\n\n~/es/elasticsearch:pr/16791 λ less --version\nless 481 (POSIX regular expressions)\nCopyright (C) 1984-2015 Mark Nudelman\n\nless comes with NO WARRANTY, to the extent permitted by law.\nFor information about the terms of redistribution,\nsee the file named README in the less distribution.\nHomepage: http://www.greenwoodsoftware.com/less\n\n~/es/elasticsearch:pr/16791 λ gcc --version\ngcc (GCC) 5.3.1 20151207 (Red Hat 5.3.1-2)\nCopyright (C) 2015 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\n~/es/elasticsearch:pr/16791 λ systemctl --version\nsystemd 222\n+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN\n\n~/es/elasticsearch:pr/16791 λ git --version\ngit version 2.5.0\n```\n", "created_at": "2016-02-25T20:46:43Z" }, { "body": "Uh wait, why are we adding another dependency!!??? Can we please not have 2 cli parsing utilities...\n", "created_at": "2016-02-25T22:36:01Z" }, { "body": "I agree we should just support `--version`. And I don't see why we need these \"commands\" here, the logic should be simple: `if (--version) printVersionAndExit();`\n", "created_at": "2016-02-25T22:42:55Z" }, { "body": "> Uh wait, why are we adding another dependency!!??? Can we please not have 2 cli parsing utilities...\n\n@rjernst My plan is to get rid of commons-cli, but I'm not biting everything off in one PR.\n", "created_at": "2016-02-25T22:52:21Z" } ], "title": "Simplify parsing of command-line arguments" }
{ "commits": [ { "message": "Simplify parsing of command-line arguments\n\nThe purpose of this commit is to simplify the parsing of command-line\noptions. With it, the following breaking changes are made\n - double-dash properties are no longer supported\n - setting properties should be done via -E\n - only the prefix \"es.\" for setting properties is supported now" }, { "message": "Simplify terminal setup in Bootstrap#initialSettings" }, { "message": "Simplify pidfile in Bootstrap#initialSettings" }, { "message": "Simplify command parsing in bin/elasticsearch" }, { "message": "Fix integration test startup" }, { "message": "Fix checkstyle violation in ECLPT.java" }, { "message": "Remove obsolete checkstyle suppressions" }, { "message": "Reword Elasticsearch properties to settings" }, { "message": "Avoid calling System#exit on successful start" }, { "message": "Hide path.home setting" } ], "files": [ { "diff": "@@ -407,6 +407,7 @@ class BuildPlugin implements Plugin<Project> {\n systemProperty 'jna.nosys', 'true'\n // default test sysprop values\n systemProperty 'tests.ifNoTests', 'fail'\n+ // TODO: remove setting logging level via system property\n systemProperty 'es.logger.level', 'WARN'\n for (Map.Entry<String, String> property : System.properties.entrySet()) {\n if (property.getKey().startsWith('tests.') ||", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy", "status": "modified" }, { "diff": "@@ -23,8 +23,6 @@ import org.gradle.api.Project\n import org.gradle.api.file.FileCollection\n import org.gradle.api.tasks.Input\n \n-import java.time.LocalDateTime\n-\n /** Configuration for an elasticsearch cluster, used for integration tests. */\n class ClusterConfiguration {\n \n@@ -64,6 +62,8 @@ class ClusterConfiguration {\n return tmpFile.exists()\n }\n \n+ Map<String, String> esSettings = new HashMap<>();\n+\n Map<String, String> systemProperties = new HashMap<>()\n \n Map<String, String> settings = new HashMap<>()\n@@ -77,6 +77,11 @@ class ClusterConfiguration {\n \n LinkedHashMap<String, Object[]> setupCommands = new LinkedHashMap<>()\n \n+ @Input\n+ void esSetting(String setting, String value) {\n+ esSettings.put(setting, value);\n+ }\n+\n @Input\n void systemProperty(String property, String value) {\n systemProperties.put(property, value)", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy", "status": "modified" }, { "diff": "@@ -129,14 +129,16 @@ class NodeInfo {\n 'JAVA_HOME' : project.javaHome,\n 'ES_GC_OPTS': config.jvmArgs // we pass these with the undocumented gc opts so the argline can set gc, etc\n ]\n- args.add(\"-Des.node.portsfile=true\")\n+ args.addAll(\"-E\", \"es.node.portsfile=true\")\n+ args.addAll(config.esSettings.collectMany { key, value -> [\"-E\", \"${key}=${value}\" ] })\n args.addAll(config.systemProperties.collect { key, value -> \"-D${key}=${value}\" })\n for (Map.Entry<String, String> property : System.properties.entrySet()) {\n if (property.getKey().startsWith('es.')) {\n- args.add(\"-D${property.getKey()}=${property.getValue()}\")\n+ args.add(\"-E\")\n+ args.add(\"${property.getKey()}=${property.getValue()}\")\n }\n }\n- args.add(\"-Des.path.conf=${confDir}\")\n+ args.addAll(\"-E\", \"es.path.conf=${confDir}\")\n if (Os.isFamily(Os.FAMILY_WINDOWS)) {\n args.add('\"') // end the entire command, quoted\n }", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy", "status": "modified" }, { "diff": "@@ -269,7 +269,6 @@\n <suppress 
files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]update[/\\\\]UpdateRequest.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]update[/\\\\]UpdateRequestBuilder.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]bootstrap[/\\\\]Bootstrap.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]bootstrap[/\\\\]BootstrapCLIParser.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]bootstrap[/\\\\]JNAKernel32Library.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]bootstrap[/\\\\]JNANatives.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]bootstrap[/\\\\]JVMCheck.java\" checks=\"LineLength\" />\n@@ -1613,7 +1612,6 @@\n <suppress files=\"plugins[/\\\\]repository-s3[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]cloud[/\\\\]aws[/\\\\]blobstore[/\\\\]MockDefaultS3OutputStream.java\" checks=\"LineLength\" />\n <suppress files=\"plugins[/\\\\]repository-s3[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]repositories[/\\\\]s3[/\\\\]AbstractS3SnapshotRestoreTest.java\" checks=\"LineLength\" />\n <suppress files=\"plugins[/\\\\]store-smb[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]apache[/\\\\]lucene[/\\\\]store[/\\\\]SmbDirectoryWrapper.java\" checks=\"LineLength\" />\n- <suppress files=\"qa[/\\\\]evil-tests[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]bootstrap[/\\\\]BootstrapCliParserTests.java\" checks=\"LineLength\" />\n <suppress files=\"qa[/\\\\]evil-tests[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]bootstrap[/\\\\]ESPolicyUnitTests.java\" checks=\"LineLength\" />\n <suppress files=\"qa[/\\\\]evil-tests[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]bootstrap[/\\\\]EvilSecurityTests.java\" checks=\"LineLength\" />\n <suppress files=\"qa[/\\\\]evil-tests[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]common[/\\\\]cli[/\\\\]CheckFileCommandTests.java\" checks=\"LineLength\" />", "filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml", "status": "modified" }, { "diff": "@@ -49,6 +49,7 @@ dependencies {\n \n // utilities\n compile 'commons-cli:commons-cli:1.3.1'\n+ compile 'args4j:args4j:2.33'\n compile 'com.carrotsearch:hppc:0.7.1'\n \n // time handling, remove with java 8 time", "filename": "core/build.gradle", "status": "modified" }, { "diff": "@@ -25,11 +25,10 @@\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.PidFile;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.SuppressForbidden;\n-import org.elasticsearch.common.cli.CliTool;\n import org.elasticsearch.common.cli.Terminal;\n import org.elasticsearch.common.inject.CreationException;\n-import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.logging.log4j.LogConfigurator;\n@@ -55,7 +54,7 @@\n import java.util.Set;\n import java.util.concurrent.CountDownLatch;\n \n-import 
static org.elasticsearch.common.settings.Settings.Builder.EMPTY_SETTINGS;\n+\n \n /**\n * Internal startup code.\n@@ -208,10 +207,14 @@ private static void setupLogging(Settings settings) {\n e.printStackTrace();\n }\n }\n-\n- private static Environment initialSettings(boolean foreground) {\n- Terminal terminal = foreground ? Terminal.DEFAULT : null;\n- return InternalSettingsPreparer.prepareEnvironment(EMPTY_SETTINGS, terminal);\n+ private static Environment initialSettings(boolean daemonize, String pathHome, String pidFile) {\n+ Terminal terminal = daemonize ? null : Terminal.DEFAULT;\n+ Settings.Builder builder = Settings.builder();\n+ builder.put(Environment.PATH_HOME_SETTING.getKey(), pathHome);\n+ if (Strings.hasLength(pidFile)) {\n+ builder.put(Environment.PIDFILE_SETTING.getKey(), pidFile);\n+ }\n+ return InternalSettingsPreparer.prepareEnvironment(builder.build(), terminal);\n }\n \n private void start() {\n@@ -238,22 +241,13 @@ static void initLoggerPrefix() {\n * This method is invoked by {@link Elasticsearch#main(String[])}\n * to startup elasticsearch.\n */\n- static void init(String[] args) throws Throwable {\n+ static void init(boolean daemonize, String pathHome, String pidFile) throws Throwable {\n // Set the system property before anything has a chance to trigger its use\n initLoggerPrefix();\n \n- BootstrapCLIParser bootstrapCLIParser = new BootstrapCLIParser();\n- CliTool.ExitStatus status = bootstrapCLIParser.execute(args);\n-\n- if (CliTool.ExitStatus.OK != status) {\n- exit(status.status());\n- }\n-\n INSTANCE = new Bootstrap();\n \n- boolean foreground = !\"false\".equals(System.getProperty(\"es.foreground\", System.getProperty(\"es-foreground\")));\n-\n- Environment environment = initialSettings(foreground);\n+ Environment environment = initialSettings(daemonize, pathHome, pidFile);\n Settings settings = environment.settings();\n setupLogging(settings);\n checkForCustomConfFile();\n@@ -269,7 +263,7 @@ static void init(String[] args) throws Throwable {\n }\n \n try {\n- if (!foreground) {\n+ if (daemonize) {\n Loggers.disableConsoleLogging();\n closeSystOut();\n }\n@@ -284,12 +278,12 @@ static void init(String[] args) throws Throwable {\n \n INSTANCE.start();\n \n- if (!foreground) {\n+ if (daemonize) {\n closeSysError();\n }\n } catch (Throwable e) {\n // disable console logging, so user does not see the exception twice (jvm will show it already)\n- if (foreground) {\n+ if (!daemonize) {\n Loggers.disableConsoleLogging();\n }\n ESLogger logger = Loggers.getLogger(Bootstrap.class);\n@@ -309,7 +303,7 @@ static void init(String[] args) throws Throwable {\n logger.error(\"Exception\", e);\n }\n // re-enable it if appropriate, so they can see any logging during the shutdown process\n- if (foreground) {\n+ if (!daemonize) {\n Loggers.enableConsoleLogging();\n }\n ", "filename": "core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java", "status": "modified" }, { "diff": "@@ -24,16 +24,16 @@\n import java.util.Dictionary;\n import java.util.Enumeration;\n \n-/** \n- * Exposes system startup information \n+/**\n+ * Exposes system startup information\n */\n @SuppressForbidden(reason = \"exposes read-only view of system properties\")\n public final class BootstrapInfo {\n \n /** no instantiation */\n private BootstrapInfo() {}\n- \n- /** \n+\n+ /**\n * Returns true if we successfully loaded native libraries.\n * <p>\n * If this returns false, then native operations such as locking\n@@ -42,14 +42,14 @@ private BootstrapInfo() {}\n public static boolean 
isNativesAvailable() {\n return Natives.JNA_AVAILABLE;\n }\n- \n- /** \n+\n+ /**\n * Returns true if we were able to lock the process's address space.\n */\n public static boolean isMemoryLocked() {\n return Natives.isMemoryLocked();\n }\n- \n+\n /**\n * Returns true if secure computing mode is enabled (supported systems only)\n */\n@@ -111,7 +111,7 @@ public Object remove(Object key) {\n }\n \n /**\n- * Returns a read-only view of all system properties\n+ * Returns a read-only view of all system settings\n */\n public static Dictionary<Object,Object> getSystemProperties() {\n SecurityManager sm = System.getSecurityManager();\n@@ -120,4 +120,5 @@ public static Dictionary<Object,Object> getSystemProperties() {\n }\n return SYSTEM_PROPERTIES;\n }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/bootstrap/BootstrapInfo.java", "status": "modified" }, { "diff": "@@ -19,7 +19,26 @@\n \n package org.elasticsearch.bootstrap;\n \n+import org.elasticsearch.Build;\n+import org.elasticsearch.common.SuppressForbidden;\n+import org.elasticsearch.common.cli.Terminal;\n+import org.elasticsearch.monitor.jvm.JvmInfo;\n+import org.kohsuke.args4j.Argument;\n+import org.kohsuke.args4j.CmdLineException;\n+import org.kohsuke.args4j.CmdLineParser;\n+import org.kohsuke.args4j.Option;\n+import org.kohsuke.args4j.OptionDef;\n+import org.kohsuke.args4j.spi.MapOptionHandler;\n+import org.kohsuke.args4j.spi.Setter;\n+import org.kohsuke.args4j.spi.SubCommand;\n+import org.kohsuke.args4j.spi.SubCommandHandler;\n+import org.kohsuke.args4j.spi.SubCommands;\n+\n+import java.io.ByteArrayOutputStream;\n import java.io.IOException;\n+import java.nio.charset.StandardCharsets;\n+import java.util.HashMap;\n+import java.util.Map;\n \n /**\n * This class starts elasticsearch.\n@@ -32,14 +51,62 @@ private Elasticsearch() {}\n /**\n * Main entry point for starting elasticsearch\n */\n- public static void main(String[] args) throws StartupError {\n+ public static void main(String[] args) {\n+ int status = main(args, Terminal.DEFAULT);\n+ if (status != 0) {\n+ exit(status);\n+ }\n+ }\n+\n+ // visible for testing\n+ static int main(String[] args, Terminal terminal) {\n+ Command command = parse(args);\n+ command.execute(terminal);\n+ return command.status;\n+ }\n+\n+ // visible for testing\n+ static Command parse(String[] args) {\n+ Elasticsearch elasticsearch = new Elasticsearch();\n+ CmdLineParser parser = new CmdLineParser(elasticsearch);\n+ boolean help;\n+ int status = 0;\n+ String message = \"\";\n try {\n- Bootstrap.init(args);\n- } catch (Throwable t) {\n- // format exceptions to the console in a special way\n- // to avoid 2MB stacktraces from guice, etc.\n- throw new StartupError(t);\n+ parser.parseArgument(args);\n+ help = elasticsearch.command.help;\n+ } catch (CmdLineException e) {\n+ help = true;\n+ status = 1;\n+ message = e.getMessage() + \"\\n\";\n }\n+\n+ if (help) {\n+ if (\"version\".equals(args[0])) {\n+ message += printHelp(new CmdLineParser(new VersionCommand()));\n+ } else if (\"start\".equals(args[0])) {\n+ message += printHelp(new CmdLineParser(new StartCommand()));\n+ } else {\n+ message += \"command must be one of \\\"start\\\" or \\\"version\\\" but was \\\"\" + args[0] + \"\\\"\";\n+ }\n+ HelpCommand command = new HelpCommand();\n+ command.message = message;\n+ command.status = status;\n+ return command;\n+ } else {\n+ return elasticsearch.command;\n+ }\n+ }\n+\n+ private static String printHelp(CmdLineParser parser) {\n+ ByteArrayOutputStream stream = new ByteArrayOutputStream();\n+ 
parser.printUsage(stream);\n+ return new String(stream.toByteArray(), StandardCharsets.UTF_8);\n+ }\n+\n+ @SuppressForbidden(reason = \"Allowed to exit explicitly in bootstrap phase\")\n+ private static void exit(int status) {\n+ System.exit(status);\n }\n \n /**\n@@ -53,4 +120,97 @@ public static void main(String[] args) throws StartupError {\n static void close(String[] args) throws IOException {\n Bootstrap.stop();\n }\n+\n+ @Argument(handler = SubCommandHandler.class, metaVar = \"command\")\n+ @SubCommands({\n+ @SubCommand(name = \"start\", impl = StartCommand.class),\n+ @SubCommand(name = \"version\", impl = VersionCommand.class)\n+ })\n+ Command command;\n+\n+ static abstract class Command {\n+\n+ @Option(name = \"-h\", aliases = \"--help\", usage = \"print this message\")\n+ boolean help;\n+\n+ int status;\n+\n+ abstract void execute(Terminal terminal);\n+ }\n+\n+ public static class StartCommand extends Command {\n+\n+ @Option(name = \"--path.home\")\n+ String pathHome;\n+\n+ @Option(name = \"-d\", aliases = \"--daemonize\", usage = \"daemonize Elasticsearch\")\n+ boolean daemonize;\n+\n+ @Option(name = \"-p\", aliases = \"--pidfile\", usage = \"pid file location\")\n+ String pidFile;\n+\n+ @Option(name = \"-E\", handler = EsMapOptionHandler.class, usage = \"configure an Elasticsearch setting\")\n+ Map<String, String> settings = new HashMap<>();\n+\n+ @Override\n+ void execute(Terminal terminal) {\n+ try {\n+ Bootstrap.init(daemonize, pathHome, pidFile);\n+ } catch (Throwable t) {\n+ // format exceptions to the console in a special way\n+ // to avoid 2MB stacktraces from guice, etc.\n+ throw new StartupError(t);\n+ }\n+ }\n+\n+ public static class EsMapOptionHandler extends MapOptionHandler {\n+\n+ public EsMapOptionHandler(CmdLineParser parser, OptionDef option, Setter<? 
super Map<?, ?>> setter) {\n+ super(parser, option, setter);\n+ }\n+\n+ @Override\n+ protected void addToMap(String argument, Map m) throws CmdLineException {\n+ try {\n+ super.addToMap(argument, m);\n+ } catch (CmdLineException e) {\n+ throw new CmdLineException(this.owner, e.getMessage() + \" for parameter [\" + argument + \"]\", e);\n+ }\n+ }\n+\n+ @SuppressForbidden(reason = \"Sets system properties passed as CLI parameters\")\n+ @Override\n+ protected void addToMap(Map m, String key, String value) {\n+ if (!key.startsWith(\"es.\")) {\n+ throw new IllegalArgumentException(\"Elasticsearch settings must be prefixed with \\\"es.\\\" but was \\\"\" + key + \"\\\"\");\n+ }\n+ System.setProperty(key, value);\n+ super.addToMap(m, key, value);\n+ }\n+\n+ }\n+\n+ }\n+\n+ public static class VersionCommand extends Command {\n+\n+ @Override\n+ void execute(Terminal terminal) {\n+ terminal.println(\"Version: \" + org.elasticsearch.Version.CURRENT\n+ + \", Build: \" + Build.CURRENT.shortHash() + \"/\" + Build.CURRENT.date()\n+ + \", JVM: \" + JvmInfo.jvmInfo().version());\n+ }\n+\n+ }\n+\n+ public static class HelpCommand extends Command {\n+\n+ String message;\n+\n+ @Override\n+ void execute(Terminal terminal) {\n+ terminal.println(message);\n+ }\n+\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/bootstrap/Elasticsearch.java", "status": "modified" }, { "diff": "@@ -106,9 +106,7 @@ public static void configure(Settings settings, boolean resolveConfig) {\n if (resolveConfig) {\n resolveConfig(environment, settingsBuilder);\n }\n- settingsBuilder\n- .putProperties(\"elasticsearch.\", BootstrapInfo.getSystemProperties())\n- .putProperties(\"es.\", BootstrapInfo.getSystemProperties());\n+ settingsBuilder.putProperties(\"es.\", BootstrapInfo.getSystemProperties());\n // add custom settings after config was added so that they are not overwritten by config\n settingsBuilder.put(settings);\n settingsBuilder.replacePropertyPlaceholders();", "filename": "core/src/main/java/org/elasticsearch/common/logging/log4j/LogConfigurator.java", "status": "modified" }, { "diff": "@@ -1136,10 +1136,10 @@ public Builder loadFromStream(String resourceName, InputStream is) throws Settin\n * @param properties The properties to put\n * @return The builder\n */\n- public Builder putProperties(String prefix, Dictionary<Object,Object> properties) {\n- for (Object key1 : Collections.list(properties.keys())) {\n- String key = Objects.toString(key1);\n- String value = Objects.toString(properties.get(key));\n+ public Builder putProperties(String prefix, Dictionary<Object, Object> properties) {\n+ for (Object property : Collections.list(properties.keys())) {\n+ String key = Objects.toString(property);\n+ String value = Objects.toString(properties.get(property));\n if (key.startsWith(prefix)) {\n map.put(key.substring(prefix.length()), value);\n }\n@@ -1154,19 +1154,12 @@ public Builder putProperties(String prefix, Dictionary<Object,Object> properties\n * @param properties The properties to put\n * @return The builder\n */\n- public Builder putProperties(String prefix, Dictionary<Object,Object> properties, String[] ignorePrefixes) {\n- for (Object key1 : Collections.list(properties.keys())) {\n- String key = Objects.toString(key1);\n- String value = Objects.toString(properties.get(key));\n+ public Builder putProperties(String prefix, Dictionary<Object, Object> properties, String ignorePrefix) {\n+ for (Object property : Collections.list(properties.keys())) {\n+ String key = Objects.toString(property);\n+ String value = 
Objects.toString(properties.get(property));\n if (key.startsWith(prefix)) {\n- boolean ignore = false;\n- for (String ignorePrefix : ignorePrefixes) {\n- if (key.startsWith(ignorePrefix)) {\n- ignore = true;\n- break;\n- }\n- }\n- if (!ignore) {\n+ if (!key.startsWith(ignorePrefix)) {\n map.put(key.substring(prefix.length()), value);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/common/settings/Settings.java", "status": "modified" }, { "diff": "@@ -38,10 +38,12 @@\n import java.nio.file.Files;\n import java.nio.file.Path;\n import java.util.ArrayList;\n+import java.util.HashMap;\n import java.util.HashSet;\n import java.util.List;\n import java.util.Map;\n import java.util.Set;\n+import java.util.stream.Collectors;\n \n import static org.elasticsearch.common.Strings.cleanPath;\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n@@ -52,8 +54,8 @@\n public class InternalSettingsPreparer {\n \n private static final String[] ALLOWED_SUFFIXES = {\".yml\", \".yaml\", \".json\", \".properties\"};\n- static final String[] PROPERTY_PREFIXES = {\"es.\", \"elasticsearch.\"};\n- static final String[] PROPERTY_DEFAULTS_PREFIXES = {\"es.default.\", \"elasticsearch.default.\"};\n+ static final String PROPERTY_PREFIX = \"es.\";\n+ static final String PROPERTY_DEFAULTS_PREFIX = \"es.default.\";\n \n public static final String SECRET_PROMPT_VALUE = \"${prompt.secret}\";\n public static final String TEXT_PROMPT_VALUE = \"${prompt.text}\";\n@@ -124,13 +126,9 @@ private static void initializeSettings(Settings.Builder output, Settings input,\n output.put(input);\n if (useSystemProperties(input)) {\n if (loadDefaults) {\n- for (String prefix : PROPERTY_DEFAULTS_PREFIXES) {\n- output.putProperties(prefix, BootstrapInfo.getSystemProperties());\n- }\n- }\n- for (String prefix : PROPERTY_PREFIXES) {\n- output.putProperties(prefix, BootstrapInfo.getSystemProperties(), PROPERTY_DEFAULTS_PREFIXES);\n+ output.putProperties(PROPERTY_DEFAULTS_PREFIX, BootstrapInfo.getSystemProperties());\n }\n+ output.putProperties(PROPERTY_PREFIX, BootstrapInfo.getSystemProperties(), PROPERTY_DEFAULTS_PREFIX);\n }\n output.replacePropertyPlaceholders();\n }", "filename": "core/src/main/java/org/elasticsearch/node/internal/InternalSettingsPreparer.java", "status": "modified" }, { "diff": "@@ -0,0 +1,2 @@\n+\"-h\" is not a valid option\n+command must be one of \"start\" or \"version\" but was \"-h\"", "filename": "core/src/main/resources/org/elasticsearch/bootstrap/elasticsearch-h.help", "status": "added" }, { "diff": "@@ -1,28 +1,4 @@\n-NAME\n-\n- start - Start Elasticsearch\n-\n-SYNOPSIS\n-\n- elasticsearch start\n-\n-DESCRIPTION\n-\n- This command starts Elasticsearch. 
You can configure it to run in the foreground, write a pid file\n- and configure arbitrary options that override file-based configuration.\n-\n-OPTIONS\n-\n- -h,--help Shows this message\n-\n- -p,--pidfile <pidfile> Creates a pid file in the specified path on start\n-\n- -d,--daemonize Starts Elasticsearch in the background\n-\n- -Dproperty=value Configures an Elasticsearch specific property, like -Dnetwork.host=127.0.0.1\n-\n- --property=value Configures an elasticsearch specific property, like --network.host 127.0.0.1\n- --property value\n-\n- NOTE: The -d, -p, and -D arguments must appear before any --property arguments.\n-\n+ -E : configure an Elasticsearch setting (default: {})\n+ -d (--daemonize) : daemonize Elasticsearch (default: false)\n+ -h (--help) : print this message (default: false)\n+ -p (--pidfile) VAL : pid file location", "filename": "core/src/main/resources/org/elasticsearch/bootstrap/elasticsearch-start.help", "status": "modified" }, { "diff": "@@ -1,16 +1 @@\n-NAME\n-\n- version - Show version information and exit\n-\n-SYNOPSIS\n-\n- elasticsearch version\n-\n-DESCRIPTION\n-\n- This command shows Elasticsearch version, timestamp and build information as well as JVM info\n-\n-OPTIONS\n-\n- -h,--help Shows this message\n-\n+ -h (--help) : print this message (default: false)", "filename": "core/src/main/resources/org/elasticsearch/bootstrap/elasticsearch-version.help", "status": "modified" }, { "diff": "@@ -1,22 +1,2 @@\n-NAME\n-\n- elasticsearch - Manages elasticsearch\n-\n-SYNOPSIS\n-\n- elasticsearch <command>\n-\n-DESCRIPTION\n-\n- Start an elasticsearch node\n-\n-COMMANDS\n-\n- start Start elasticsearch\n-\n- version Show version information and exit\n-\n-NOTES\n-\n- [*] For usage help on specific commands please type \"elasticsearch <command> -h\"\n-\n+\"--help\" is not a valid option\n+command must be one of \"start\" or \"version\" but was \"--help\"", "filename": "core/src/main/resources/org/elasticsearch/bootstrap/elasticsearch.help", "status": "modified" }, { "diff": "@@ -0,0 +1 @@\n+bd87a75374a6d6523de82fef51fc3cfe9baf9fc9\n\\ No newline at end of file", "filename": "distribution/licenses/args4j-2.33.jar.sha1", "status": "added" }, { "diff": "@@ -0,0 +1,22 @@\n+Copyright (c) 2003, Kohsuke Kawaguchi\n+\n+Permission is hereby granted, free of charge, to any person\n+obtaining a copy of this software and associated documentation\n+files (the \"Software\"), to deal in the Software without\n+restriction, including without limitation the rights to use,\n+copy, modify, merge, publish, distribute, sublicense, and/or sell\n+copies of the Software, and to permit persons to whom the\n+Software is furnished to do so, subject to the following\n+conditions:\n+\n+The above copyright notice and this permission notice shall be\n+included in all copies or substantial portions of the Software.\n+\n+THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES\n+OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND\n+NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n+HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n+WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n+FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR\n+OTHER DEALINGS IN THE SOFTWARE.", "filename": "distribution/licenses/args4j-LICENSE.txt", "status": "added" }, { "diff": "", "filename": "distribution/licenses/args4j-NOTICE.txt", "status": "added" }, { "diff": "@@ -56,6 +56,13 @@ fi\n CDPATH=\"\"\n SCRIPT=\"$0\"\n \n+COMMAND=\"$1\"\n+if [ \"$COMMAND\" = \"start\" ] || [ \"$COMMAND\" = \"version\" ]; then\n+ shift\n+else\n+ COMMAND=\"start\"\n+fi\n+\n # SCRIPT may be an arbitrarily deep series of symlinks. Loop until we have the concrete path.\n while [ -h \"$SCRIPT\" ] ; do\n ls=`ls -ld \"$SCRIPT\"`\n@@ -131,24 +138,29 @@ esac\n HOSTNAME=`hostname | cut -d. -f1`\n export HOSTNAME\n \n-# manual parsing to find out, if process should be detached\n-daemonized=`echo $* | egrep -- '(^-d |-d$| -d |--daemonize$|--daemonize )'`\n-if [ -z \"$daemonized\" ] ; then\n- exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Des.path.home=\"$ES_HOME\" -cp \"$ES_CLASSPATH\" \\\n- org.elasticsearch.bootstrap.Elasticsearch start \"$@\"\n-else\n- exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Des.path.home=\"$ES_HOME\" -cp \"$ES_CLASSPATH\" \\\n- org.elasticsearch.bootstrap.Elasticsearch start \"$@\" <&- &\n- retval=$?\n- pid=$!\n- [ $retval -eq 0 ] || exit $retval\n- if [ ! -z \"$ES_STARTUP_SLEEP_TIME\" ]; then\n- sleep $ES_STARTUP_SLEEP_TIME\n- fi\n- if ! ps -p $pid > /dev/null ; then\n- exit 1\n+if [ \"$COMMAND\" = \"start\" ]; then\n+ # manual parsing to find out, if process should be detached\n+ daemonized=`echo $* | egrep -- '(^-d |-d$| -d |--daemonize$|--daemonize )'`\n+ if [ -z \"$daemonized\" ]; then\n+ exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -cp \"$ES_CLASSPATH\" \\\n+ org.elasticsearch.bootstrap.Elasticsearch start --path.home \"$ES_HOME\" \"$@\"\n+ else\n+ exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -cp \"$ES_CLASSPATH\" \\\n+ org.elasticsearch.bootstrap.Elasticsearch start --path.home \"$ES_HOME\" \"$@\" <&- &\n+ retval=$?\n+ pid=$!\n+ [ $retval -eq 0 ] || exit $retval\n+ if [ ! -z \"$ES_STARTUP_SLEEP_TIME\" ]; then\n+ sleep $ES_STARTUP_SLEEP_TIME\n+ fi\n+ if ! ps -p $pid > /dev/null ; then\n+ exit 1\n+ fi\n+ exit 0\n fi\n- exit 0\n+else\n+ exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -cp \"$ES_CLASSPATH\" \\\n+ org.elasticsearch.bootstrap.Elasticsearch version \"$@\"\n fi\n \n exit $?", "filename": "distribution/src/main/resources/bin/elasticsearch", "status": "modified" }, { "diff": "@@ -163,7 +163,7 @@ As mentioned previously, we can override either the cluster or node name. This c\n \n [source,sh]\n --------------------------------------------------\n-./elasticsearch --cluster.name my_cluster_name --node.name my_node_name\n+./elasticsearch -E es.cluster.name=my_cluster_name -E es.node.name=my_node_name\n --------------------------------------------------\n \n Also note the line marked http with information about the HTTP address (`192.168.8.112`) and port (`9200`) that our node is reachable from. By default, Elasticsearch uses port `9200` to provide access to its REST API. 
This port is configurable if necessary.", "filename": "docs/reference/getting-started.asciidoc", "status": "modified" }, { "diff": "@@ -14,7 +14,7 @@ attribute as follows:\n \n [source,sh]\n ------------------------\n-bin/elasticsearch --node.rack rack1 --node.size big <1>\n+bin/elasticsearch -E es.node.rack=rack1 -E es.node.size=big <1>\n ------------------------\n <1> These attribute settings can also be specified in the `elasticsearch.yml` config file.\n ", "filename": "docs/reference/index-modules/allocation/filtering.asciidoc", "status": "modified" }, { "diff": "@@ -330,6 +330,22 @@ control the queue implementation used in the cluster service and the\n handling of ping responses during discovery. This was an undocumented\n setting and has been removed.\n \n+==== Using system properties to configure Elasticsearch\n+\n+Elasticsearch can be configured by setting system properties on the\n+command line via `-Des.name.of.property=value.of.property`. This will be\n+removed in a future version of Elasticsearch. Instead, use\n+`-E es.name.of.setting=value.of.setting`. Note that in all cases the\n+name of the setting must be prefixed with `es.`.\n+\n+==== Removed using double-dashes to configure Elasticsearch\n+\n+Elasticsearch could previously be configured on the command line by\n+setting settings via `--name.of.setting value.of.setting`. This feature\n+has been removed. Instead, use\n+`-E es.name.of.setting=value.of.setting`. Note that in all cases the\n+name of the setting must be prefixed with `es.`.\n+\n [[breaking_30_mapping_changes]]\n === Mapping changes\n ", "filename": "docs/reference/migration/migrate_3_0.asciidoc", "status": "modified" }, { "diff": "@@ -21,7 +21,7 @@ attribute called `rack_id` -- we could use any attribute name. For example:\n \n [source,sh]\n ----------------------\n-./bin/elasticsearch --node.rack_id rack_one <1>\n+./bin/elasticsearch -E es.node.rack_id=rack_one <1>\n ----------------------\n <1> This setting could also be specified in the `elasticsearch.yml` config file.\n ", "filename": "docs/reference/modules/cluster/allocation_awareness.asciidoc", "status": "modified" }, { "diff": "@@ -233,7 +233,7 @@ Like all node settings, it can also be specified on the command line as:\n \n [source,sh]\n -----------------------\n-./bin/elasticsearch --path.data /var/elasticsearch/data\n+./bin/elasticsearch -E es.path.data=/var/elasticsearch/data\n -----------------------\n \n TIP: When using the `.zip` or `.tar.gz` distributions, the `path.data` setting", "filename": "docs/reference/modules/node.asciidoc", "status": "modified" }, { "diff": "@@ -67,13 +67,13 @@ There are added features when using the `elasticsearch` shell script.\n The first, which was explained earlier, is the ability to easily run the\n process either in the foreground or the background.\n \n-Another feature is the ability to pass `-D` or getopt long style\n-configuration parameters directly to the script. When set, all override\n-anything set using either `JAVA_OPTS` or `ES_JAVA_OPTS`. For example:\n+Another feature is the ability to pass `-E` configuration parameters\n+directly to the script. When set, all override anything set using\n+either `JAVA_OPTS` or `ES_JAVA_OPTS`. 
For example:\n \n [source,sh]\n --------------------------------------------------\n-$ bin/elasticsearch -Des.index.refresh_interval=5s --node.name=my-node\n+$ bin/elasticsearch -E es.index.refresh_interval=5s -E node.name=my-node\n --------------------------------------------------\n *************************************************************************\n ", "filename": "docs/reference/setup.asciidoc", "status": "modified" }, { "diff": "@@ -252,7 +252,7 @@ command, for example:\n \n [source,sh]\n --------------------------------------------------\n-$ elasticsearch -Des.network.host=10.0.0.4\n+$ elasticsearch -E es.network.host=10.0.0.4\n --------------------------------------------------\n \n Another option is to set `es.default.` prefix instead of `es.` prefix,\n@@ -329,7 +329,7 @@ above can also be set as a \"collapsed\" setting, for example:\n \n [source,sh]\n --------------------------------------------------\n-$ elasticsearch -Des.index.refresh_interval=5s\n+$ elasticsearch -E es.index.refresh_interval=5s\n --------------------------------------------------\n \n All of the index level configuration can be found within each", "filename": "docs/reference/setup/configuration.asciidoc", "status": "modified" }, { "diff": "@@ -80,7 +80,7 @@ To upgrade using a zip or compressed tarball:\n overwrite the `config` or `data` directories.\n \n * Either copy the files in the `config` directory from your old installation\n- to your new installation, or use the `--path.conf` option on the command\n+ to your new installation, or use the `-E path.conf` option on the command\n line to point to an external config directory.\n \n * Either copy the files in the `data` directory from your old installation", "filename": "docs/reference/setup/rolling_upgrade.asciidoc", "status": "modified" }, { "diff": "@@ -28,8 +28,8 @@ dependencies {\n \n integTest {\n cluster {\n- systemProperty 'es.script.inline', 'true'\n- systemProperty 'es.script.indexed', 'true'\n+ esSetting 'es.script.inline', 'true'\n+ esSetting 'es.script.indexed', 'true'\n }\n }\n ", "filename": "modules/lang-groovy/build.gradle", "status": "modified" }, { "diff": "@@ -28,7 +28,7 @@ dependencies {\n \n integTest {\n cluster {\n- systemProperty 'es.script.inline', 'true'\n- systemProperty 'es.script.indexed', 'true'\n+ esSetting 'es.script.inline', 'true'\n+ esSetting 'es.script.indexed', 'true'\n }\n }", "filename": "modules/lang-mustache/build.gradle", "status": "modified" } ] }
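The new `Elasticsearch` entry point in the diff above parses arguments with args4j (`CmdLineParser`, `@Option`, `@Argument` plus sub-commands). The following is a self-contained sketch of the same `-E key=value` pattern using `MapOptionHandler`; the class name is invented and this is not the Elasticsearch source, just a minimal standalone example of the parsing style.

```java
import java.util.HashMap;
import java.util.Map;

import org.kohsuke.args4j.CmdLineException;
import org.kohsuke.args4j.CmdLineParser;
import org.kohsuke.args4j.Option;
import org.kohsuke.args4j.spi.MapOptionHandler;

// Standalone args4j sketch (not the Elasticsearch sources): mirrors the
// -d / -p / -E options declared on the new StartCommand in the diff above.
public class ArgsSketch {

    @Option(name = "-d", aliases = "--daemonize", usage = "run in the background")
    boolean daemonize;

    @Option(name = "-p", aliases = "--pidfile", usage = "pid file location")
    String pidFile;

    @Option(name = "-E", handler = MapOptionHandler.class, usage = "configure a setting as key=value")
    Map<String, String> settings = new HashMap<>();

    public static void main(String[] args) throws CmdLineException {
        ArgsSketch options = new ArgsSketch();
        new CmdLineParser(options).parseArgument(args);
        System.out.println("daemonize=" + options.daemonize
            + " pidfile=" + options.pidFile
            + " settings=" + options.settings);
    }
}
```

For example, `java ArgsSketch -d -E es.node.name=foo` would print `daemonize=true pidfile=null settings={es.node.name=foo}`.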
{ "body": "A standard GET by ID results in a single log entry in the audit log. Search (_search) results in two identical log entries. Every time. \n- Default ES 1.6.0 installation. \n- Added latest Shield. \n- Added \"search_admin\" user. \n\nEverything works great out of the box except I get _search audits duplicated in the log. ONLY \"SearchRequest\" audits. I haven't found any other api yet which results in this strange behavior.\n\n```\n[2015-06-16 18:37:24,705] [esdev-shieldpoc01] [transport] [access_granted] origin_type=[rest], origin_address=[/10.30.24.36:55308], principal=[search_admin], action=[indices:data/read/search], indices=[test], request=[SearchRequest]\n[2015-06-16 18:37:24,705] [esdev-shieldpoc01] [transport] [access_granted] origin_type=[rest], origin_address=[/10.30.24.36:55308], principal=[search_admin], action=[indices:data/read/search], indices=[test], request=[SearchRequest]\n```\n", "comments": [ { "body": "@jaymode please could you take a look?\n", "created_at": "2015-06-18T12:58:41Z" }, { "body": "@bobbyhubbard thanks for reporting this. This behavior isn't ideal and we'll look at if/how we can improve this. What you're seeing is a side effect of how auditing is implemented and how search requests are executed. Shield audits the individual actions that are executed by elasticsearch. \n\nThe `indices:data/read/search` action name corresponds to multiple actions for search. The reason that you are seeing the message twice, is the API executes a `TransportSearchAction`, which in turn executes a action corresponding to the search type; the default type `query_then_fetch` corresponds to the `TransportSearchQueryThenFetchAction` and the execution of this action is what causes the second log message.\n", "created_at": "2015-06-19T12:53:30Z" }, { "body": "This seems to have to do with the fact that the main `TransportSearchAction` uses an inner `TransportSearchTypeAction` (there is a different impl for each search type). Last time I checked I noticed some code that gets executed twice while it shouldn't (e.g. request validation), and a side effect is also the double audit log line. Maybe this inner action shouldn't be a transport action after all? The two (outer and inner) execute calls happen on the same node all the time, seems like also the transport handler registration that happens in `TransportSearchTypeAction` has no actual effect as it's already registered by `TransportSearchAction` with same name, so only used for audit logging.\n", "created_at": "2015-06-19T14:26:12Z" }, { "body": "This bit us again today when someone else in our org setup a new log drain from shield. It reported nearly double the number of rest requests as expected. Then I remembered this issue... The workaround is simple enough...to hash each message (fingerprint in logstash) and use the hash as the message id. But this WILL bite every single Shield customer who is measuring and auditing rest calls. (How many are reporting invalid results now because they dont even know about this bug like one team here almost did?)\n", "created_at": "2015-11-18T23:20:30Z" } ], "number": 11710, "title": "Shield duplicates _search api audits" }
{ "body": "TransportSearchTypeAction and subclasses are not actually transport actions, but just support classes useful for their inner async actions that can easily be extracted out so that we get rid of one too many level of abstraction.\n\nSame pattern can be applied to TransportSearchScrollQueryAndFetchAction & TransportSearchScrollQueryThenFetchAction which we could remove in favour of keeping only their inner classes named SearchScrollQueryAndFetchAsyncAction and SearchScrollQueryThenFetchAsyncAction.\n\nRemove org.elasticsearch.action.search.type package, collapsed remaining classes into existing org.elasticsearch.action.search package\n\nMake also ParsedScrollId ScrollIdForNode and TransportSearchHelper classes and their methods package private.\n\nCloses #11710\n", "number": 16758, "review_comments": [ { "body": "indentation looks off here\n", "created_at": "2016-02-26T12:35:46Z" } ], "title": "Cleanup search sub transport actions and collapse o.e.action.search.type package into o.e.action.search" }
{ "commits": [ { "message": "Cleanup search sub transport actions and collapse o.e.action.search.type package into o.e.action.search\n\nTransportSearchTypeAction and subclasses are not actually transport actions, but just support classes useful for their inner async actions that can easily be extracted out so that we get rid of one too many level of abstraction.\n\nSame pattern can be applied to TransportSearchScrollQueryAndFetchAction & TransportSearchScrollQueryThenFetchAction which we could remove in favour of keeping only their inner classes named SearchScrollQueryAndFetchAsyncAction and SearchScrollQueryThenFetchAsyncAction.\n\nRemove org.elasticsearch.action.search.type package, collapsed remaining classes into existing org.elasticsearch.action.search package\n\nMake also ParsedScrollId ScrollIdForNode and TransportSearchHelper classes and their methods package private.\n\nCloses #11710" } ], "files": [ { "diff": "@@ -207,16 +207,6 @@\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]ShardSearchFailure.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]TransportClearScrollAction.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]TransportMultiSearchAction.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]TransportSearchAction.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]TransportSearchScrollAction.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]type[/\\\\]TransportSearchDfsQueryAndFetchAction.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]type[/\\\\]TransportSearchDfsQueryThenFetchAction.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]type[/\\\\]TransportSearchHelper.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]type[/\\\\]TransportSearchQueryAndFetchAction.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]type[/\\\\]TransportSearchQueryThenFetchAction.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]type[/\\\\]TransportSearchScrollQueryAndFetchAction.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]type[/\\\\]TransportSearchScrollQueryThenFetchAction.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]type[/\\\\]TransportSearchTypeAction.java\" checks=\"LineLength\" />\n <suppress 
files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]suggest[/\\\\]SuggestResponse.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]suggest[/\\\\]TransportSuggestAction.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]support[/\\\\]ActionFilter.java\" checks=\"LineLength\" />", "filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml", "status": "modified" }, { "diff": "@@ -174,12 +174,6 @@\n import org.elasticsearch.action.search.TransportMultiSearchAction;\n import org.elasticsearch.action.search.TransportSearchAction;\n import org.elasticsearch.action.search.TransportSearchScrollAction;\n-import org.elasticsearch.action.search.type.TransportSearchDfsQueryAndFetchAction;\n-import org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction;\n-import org.elasticsearch.action.search.type.TransportSearchQueryAndFetchAction;\n-import org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction;\n-import org.elasticsearch.action.search.type.TransportSearchScrollQueryAndFetchAction;\n-import org.elasticsearch.action.search.type.TransportSearchScrollQueryThenFetchAction;\n import org.elasticsearch.action.suggest.SuggestAction;\n import org.elasticsearch.action.suggest.TransportSuggestAction;\n import org.elasticsearch.action.support.ActionFilter;\n@@ -333,16 +327,8 @@ protected void configure() {\n TransportShardMultiGetAction.class);\n registerAction(BulkAction.INSTANCE, TransportBulkAction.class,\n TransportShardBulkAction.class);\n- registerAction(SearchAction.INSTANCE, TransportSearchAction.class,\n- TransportSearchDfsQueryThenFetchAction.class,\n- TransportSearchQueryThenFetchAction.class,\n- TransportSearchDfsQueryAndFetchAction.class,\n- TransportSearchQueryAndFetchAction.class\n- );\n- registerAction(SearchScrollAction.INSTANCE, TransportSearchScrollAction.class,\n- TransportSearchScrollQueryThenFetchAction.class,\n- TransportSearchScrollQueryAndFetchAction.class\n- );\n+ registerAction(SearchAction.INSTANCE, TransportSearchAction.class);\n+ registerAction(SearchScrollAction.INSTANCE, TransportSearchScrollAction.class);\n registerAction(MultiSearchAction.INSTANCE, TransportMultiSearchAction.class);\n registerAction(PercolateAction.INSTANCE, TransportPercolateAction.class);\n registerAction(MultiPercolateAction.INSTANCE, TransportMultiPercolateAction.class, TransportShardMultiPercolateAction.class);", "filename": "core/src/main/java/org/elasticsearch/action/ActionModule.java", "status": "modified" }, { "diff": "@@ -0,0 +1,393 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.search;\n+\n+import com.carrotsearch.hppc.IntArrayList;\n+import org.apache.lucene.search.ScoreDoc;\n+import org.apache.lucene.search.TopDocs;\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.NoShardAvailableActionException;\n+import org.elasticsearch.action.support.TransportActions;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.cluster.routing.GroupShardsIterator;\n+import org.elasticsearch.cluster.routing.ShardIterator;\n+import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.search.SearchPhaseResult;\n+import org.elasticsearch.search.SearchShardTarget;\n+import org.elasticsearch.search.action.SearchServiceTransportAction;\n+import org.elasticsearch.search.controller.SearchPhaseController;\n+import org.elasticsearch.search.fetch.ShardFetchSearchRequest;\n+import org.elasticsearch.search.internal.InternalSearchResponse;\n+import org.elasticsearch.search.internal.ShardSearchTransportRequest;\n+import org.elasticsearch.search.query.QuerySearchResult;\n+import org.elasticsearch.search.query.QuerySearchResultProvider;\n+import org.elasticsearch.threadpool.ThreadPool;\n+\n+import java.util.List;\n+import java.util.Map;\n+import java.util.Set;\n+import java.util.concurrent.atomic.AtomicInteger;\n+\n+import static org.elasticsearch.action.search.TransportSearchHelper.internalSearchRequest;\n+\n+abstract class AbstractSearchAsyncAction<FirstResult extends SearchPhaseResult> extends AbstractAsyncAction {\n+\n+ protected final ESLogger logger;\n+ protected final SearchServiceTransportAction searchService;\n+ private final IndexNameExpressionResolver indexNameExpressionResolver;\n+ protected final SearchPhaseController searchPhaseController;\n+ protected final ThreadPool threadPool;\n+ protected final ActionListener<SearchResponse> listener;\n+ protected final GroupShardsIterator shardsIts;\n+ protected final SearchRequest request;\n+ protected final ClusterState clusterState;\n+ protected final DiscoveryNodes nodes;\n+ protected final int expectedSuccessfulOps;\n+ private final int expectedTotalOps;\n+ protected final AtomicInteger successfulOps = new AtomicInteger();\n+ private final AtomicInteger totalOps = new AtomicInteger();\n+ protected final AtomicArray<FirstResult> firstResults;\n+ private volatile AtomicArray<ShardSearchFailure> shardFailures;\n+ private final Object shardFailuresMutex = new Object();\n+ protected volatile ScoreDoc[] sortedShardList;\n+\n+ protected AbstractSearchAsyncAction(ESLogger logger, SearchServiceTransportAction searchService, ClusterService clusterService,\n+ IndexNameExpressionResolver indexNameExpressionResolver,\n+ SearchPhaseController searchPhaseController, ThreadPool threadPool, SearchRequest request,\n+ ActionListener<SearchResponse> listener) {\n+ this.logger = logger;\n+ this.searchService = searchService;\n+ this.indexNameExpressionResolver = 
indexNameExpressionResolver;\n+ this.searchPhaseController = searchPhaseController;\n+ this.threadPool = threadPool;\n+ this.request = request;\n+ this.listener = listener;\n+\n+ this.clusterState = clusterService.state();\n+ nodes = clusterState.nodes();\n+\n+ clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.READ);\n+\n+ // TODO: I think startTime() should become part of ActionRequest and that should be used both for index name\n+ // date math expressions and $now in scripts. This way all apis will deal with now in the same way instead\n+ // of just for the _search api\n+ String[] concreteIndices = indexNameExpressionResolver.concreteIndices(clusterState, request.indicesOptions(),\n+ startTime(), request.indices());\n+\n+ for (String index : concreteIndices) {\n+ clusterState.blocks().indexBlockedRaiseException(ClusterBlockLevel.READ, index);\n+ }\n+\n+ Map<String, Set<String>> routingMap = indexNameExpressionResolver.resolveSearchRouting(clusterState, request.routing(),\n+ request.indices());\n+\n+ shardsIts = clusterService.operationRouting().searchShards(clusterState, concreteIndices, routingMap, request.preference());\n+ expectedSuccessfulOps = shardsIts.size();\n+ // we need to add 1 for non active partition, since we count it in the total!\n+ expectedTotalOps = shardsIts.totalSizeWith1ForEmpty();\n+\n+ firstResults = new AtomicArray<>(shardsIts.size());\n+ }\n+\n+ public void start() {\n+ if (expectedSuccessfulOps == 0) {\n+ //no search shards to search on, bail with empty response\n+ //(it happens with search across _all with no indices around and consistent with broadcast operations)\n+ listener.onResponse(new SearchResponse(InternalSearchResponse.empty(), null, 0, 0, buildTookInMillis(),\n+ ShardSearchFailure.EMPTY_ARRAY));\n+ return;\n+ }\n+ int shardIndex = -1;\n+ for (final ShardIterator shardIt : shardsIts) {\n+ shardIndex++;\n+ final ShardRouting shard = shardIt.nextOrNull();\n+ if (shard != null) {\n+ performFirstPhase(shardIndex, shardIt, shard);\n+ } else {\n+ // really, no shards active in this group\n+ onFirstPhaseResult(shardIndex, null, null, shardIt, new NoShardAvailableActionException(shardIt.shardId()));\n+ }\n+ }\n+ }\n+\n+ void performFirstPhase(final int shardIndex, final ShardIterator shardIt, final ShardRouting shard) {\n+ if (shard == null) {\n+ // no more active shards... 
(we should not really get here, but just for safety)\n+ onFirstPhaseResult(shardIndex, null, null, shardIt, new NoShardAvailableActionException(shardIt.shardId()));\n+ } else {\n+ final DiscoveryNode node = nodes.get(shard.currentNodeId());\n+ if (node == null) {\n+ onFirstPhaseResult(shardIndex, shard, null, shardIt, new NoShardAvailableActionException(shardIt.shardId()));\n+ } else {\n+ String[] filteringAliases = indexNameExpressionResolver.filteringAliases(clusterState,\n+ shard.index().getName(), request.indices());\n+ sendExecuteFirstPhase(node, internalSearchRequest(shard, shardsIts.size(), request, filteringAliases,\n+ startTime()), new ActionListener<FirstResult>() {\n+ @Override\n+ public void onResponse(FirstResult result) {\n+ onFirstPhaseResult(shardIndex, shard, result, shardIt);\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ onFirstPhaseResult(shardIndex, shard, node.id(), shardIt, t);\n+ }\n+ });\n+ }\n+ }\n+ }\n+\n+ void onFirstPhaseResult(int shardIndex, ShardRouting shard, FirstResult result, ShardIterator shardIt) {\n+ result.shardTarget(new SearchShardTarget(shard.currentNodeId(), shard.index(), shard.id()));\n+ processFirstPhaseResult(shardIndex, result);\n+ // we need to increment successful ops first before we compare the exit condition otherwise if we\n+ // are fast we could concurrently update totalOps but then preempt one of the threads which can\n+ // cause the successor to read a wrong value from successfulOps if second phase is very fast ie. count etc.\n+ successfulOps.incrementAndGet();\n+ // increment all the \"future\" shards to update the total ops since we some may work and some may not...\n+ // and when that happens, we break on total ops, so we must maintain them\n+ final int xTotalOps = totalOps.addAndGet(shardIt.remaining() + 1);\n+ if (xTotalOps == expectedTotalOps) {\n+ try {\n+ innerMoveToSecondPhase();\n+ } catch (Throwable e) {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(shardIt.shardId() + \": Failed to execute [\" + request + \"] while moving to second phase\", e);\n+ }\n+ raiseEarlyFailure(new ReduceSearchPhaseException(firstPhaseName(), \"\", e, buildShardFailures()));\n+ }\n+ } else if (xTotalOps > expectedTotalOps) {\n+ raiseEarlyFailure(new IllegalStateException(\"unexpected higher total ops [\" + xTotalOps + \"] compared \" +\n+ \"to expected [\" + expectedTotalOps + \"]\"));\n+ }\n+ }\n+\n+ void onFirstPhaseResult(final int shardIndex, @Nullable ShardRouting shard, @Nullable String nodeId,\n+ final ShardIterator shardIt, Throwable t) {\n+ // we always add the shard failure for a specific shard instance\n+ // we do make sure to clean it on a successful response from a shard\n+ SearchShardTarget shardTarget = new SearchShardTarget(nodeId, shardIt.shardId().getIndex(), shardIt.shardId().getId());\n+ addShardFailure(shardIndex, shardTarget, t);\n+\n+ if (totalOps.incrementAndGet() == expectedTotalOps) {\n+ if (logger.isDebugEnabled()) {\n+ if (t != null && !TransportActions.isShardNotAvailableException(t)) {\n+ if (shard != null) {\n+ logger.debug(shard.shortSummary() + \": Failed to execute [\" + request + \"]\", t);\n+ } else {\n+ logger.debug(shardIt.shardId() + \": Failed to execute [\" + request + \"]\", t);\n+ }\n+ } else if (logger.isTraceEnabled()) {\n+ logger.trace(\"{}: Failed to execute [{}]\", t, shard, request);\n+ }\n+ }\n+ final ShardSearchFailure[] shardSearchFailures = buildShardFailures();\n+ if (successfulOps.get() == 0) {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"All shards failed for 
phase: [{}]\", t, firstPhaseName());\n+ }\n+\n+ // no successful ops, raise an exception\n+ raiseEarlyFailure(new SearchPhaseExecutionException(firstPhaseName(), \"all shards failed\", t, shardSearchFailures));\n+ } else {\n+ try {\n+ innerMoveToSecondPhase();\n+ } catch (Throwable e) {\n+ raiseEarlyFailure(new ReduceSearchPhaseException(firstPhaseName(), \"\", e, shardSearchFailures));\n+ }\n+ }\n+ } else {\n+ final ShardRouting nextShard = shardIt.nextOrNull();\n+ final boolean lastShard = nextShard == null;\n+ // trace log this exception\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(executionFailureMsg(shard, shardIt, request, lastShard), t);\n+ }\n+ if (!lastShard) {\n+ try {\n+ performFirstPhase(shardIndex, shardIt, nextShard);\n+ } catch (Throwable t1) {\n+ onFirstPhaseResult(shardIndex, shard, shard.currentNodeId(), shardIt, t1);\n+ }\n+ } else {\n+ // no more shards active, add a failure\n+ if (logger.isDebugEnabled() && !logger.isTraceEnabled()) { // do not double log this exception\n+ if (t != null && !TransportActions.isShardNotAvailableException(t)) {\n+ logger.debug(executionFailureMsg(shard, shardIt, request, lastShard), t);\n+ }\n+ }\n+ }\n+ }\n+ }\n+\n+ private String executionFailureMsg(@Nullable ShardRouting shard, final ShardIterator shardIt, SearchRequest request,\n+ boolean lastShard) {\n+ if (shard != null) {\n+ return shard.shortSummary() + \": Failed to execute [\" + request + \"] lastShard [\" + lastShard + \"]\";\n+ } else {\n+ return shardIt.shardId() + \": Failed to execute [\" + request + \"] lastShard [\" + lastShard + \"]\";\n+ }\n+ }\n+\n+ protected final ShardSearchFailure[] buildShardFailures() {\n+ AtomicArray<ShardSearchFailure> shardFailures = this.shardFailures;\n+ if (shardFailures == null) {\n+ return ShardSearchFailure.EMPTY_ARRAY;\n+ }\n+ List<AtomicArray.Entry<ShardSearchFailure>> entries = shardFailures.asList();\n+ ShardSearchFailure[] failures = new ShardSearchFailure[entries.size()];\n+ for (int i = 0; i < failures.length; i++) {\n+ failures[i] = entries.get(i).value;\n+ }\n+ return failures;\n+ }\n+\n+ protected final void addShardFailure(final int shardIndex, @Nullable SearchShardTarget shardTarget, Throwable t) {\n+ // we don't aggregate shard failures on non active shards (but do keep the header counts right)\n+ if (TransportActions.isShardNotAvailableException(t)) {\n+ return;\n+ }\n+\n+ // lazily create shard failures, so we can early build the empty shard failure list in most cases (no failures)\n+ if (shardFailures == null) {\n+ synchronized (shardFailuresMutex) {\n+ if (shardFailures == null) {\n+ shardFailures = new AtomicArray<>(shardsIts.size());\n+ }\n+ }\n+ }\n+ ShardSearchFailure failure = shardFailures.get(shardIndex);\n+ if (failure == null) {\n+ shardFailures.set(shardIndex, new ShardSearchFailure(t, shardTarget));\n+ } else {\n+ // the failure is already present, try and not override it with an exception that is less meaningless\n+ // for example, getting illegal shard state\n+ if (TransportActions.isReadOverrideException(t)) {\n+ shardFailures.set(shardIndex, new ShardSearchFailure(t, shardTarget));\n+ }\n+ }\n+ }\n+\n+ private void raiseEarlyFailure(Throwable t) {\n+ for (AtomicArray.Entry<FirstResult> entry : firstResults.asList()) {\n+ try {\n+ DiscoveryNode node = nodes.get(entry.value.shardTarget().nodeId());\n+ sendReleaseSearchContext(entry.value.id(), node);\n+ } catch (Throwable t1) {\n+ logger.trace(\"failed to release context\", t1);\n+ }\n+ }\n+ listener.onFailure(t);\n+ }\n+\n+ /**\n+ * Releases shard 
targets that are not used in the docsIdsToLoad.\n+ */\n+ protected void releaseIrrelevantSearchContexts(AtomicArray<? extends QuerySearchResultProvider> queryResults,\n+ AtomicArray<IntArrayList> docIdsToLoad) {\n+ if (docIdsToLoad == null) {\n+ return;\n+ }\n+ // we only release search context that we did not fetch from if we are not scrolling\n+ if (request.scroll() == null) {\n+ for (AtomicArray.Entry<? extends QuerySearchResultProvider> entry : queryResults.asList()) {\n+ final TopDocs topDocs = entry.value.queryResult().queryResult().topDocs();\n+ if (topDocs != null && topDocs.scoreDocs.length > 0 // the shard had matches\n+ && docIdsToLoad.get(entry.index) == null) { // but none of them made it to the global top docs\n+ try {\n+ DiscoveryNode node = nodes.get(entry.value.queryResult().shardTarget().nodeId());\n+ sendReleaseSearchContext(entry.value.queryResult().id(), node);\n+ } catch (Throwable t1) {\n+ logger.trace(\"failed to release context\", t1);\n+ }\n+ }\n+ }\n+ }\n+ }\n+\n+ protected void sendReleaseSearchContext(long contextId, DiscoveryNode node) {\n+ if (node != null) {\n+ searchService.sendFreeContext(node, contextId, request);\n+ }\n+ }\n+\n+ protected ShardFetchSearchRequest createFetchRequest(QuerySearchResult queryResult, AtomicArray.Entry<IntArrayList> entry,\n+ ScoreDoc[] lastEmittedDocPerShard) {\n+ if (lastEmittedDocPerShard != null) {\n+ ScoreDoc lastEmittedDoc = lastEmittedDocPerShard[entry.index];\n+ return new ShardFetchSearchRequest(request, queryResult.id(), entry.value, lastEmittedDoc);\n+ } else {\n+ return new ShardFetchSearchRequest(request, queryResult.id(), entry.value);\n+ }\n+ }\n+\n+ protected abstract void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request,\n+ ActionListener<FirstResult> listener);\n+\n+ protected final void processFirstPhaseResult(int shardIndex, FirstResult result) {\n+ firstResults.set(shardIndex, result);\n+\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(\"got first-phase result from {}\", result != null ? result.shardTarget() : null);\n+ }\n+\n+ // clean a previous error on this shard group (note, this code will be serialized on the same shardIndex value level\n+ // so its ok concurrency wise to miss potentially the shard failures being created because of another failure\n+ // in the #addShardFailure, because by definition, it will happen on *another* shardIndex\n+ AtomicArray<ShardSearchFailure> shardFailures = this.shardFailures;\n+ if (shardFailures != null) {\n+ shardFailures.set(shardIndex, null);\n+ }\n+ }\n+\n+ final void innerMoveToSecondPhase() throws Exception {\n+ if (logger.isTraceEnabled()) {\n+ StringBuilder sb = new StringBuilder();\n+ boolean hadOne = false;\n+ for (int i = 0; i < firstResults.length(); i++) {\n+ FirstResult result = firstResults.get(i);\n+ if (result == null) {\n+ continue; // failure\n+ }\n+ if (hadOne) {\n+ sb.append(\",\");\n+ } else {\n+ hadOne = true;\n+ }\n+ sb.append(result.shardTarget());\n+ }\n+\n+ logger.trace(\"Moving to second phase, based on results from: {} (cluster state version: {})\", sb, clusterState.version());\n+ }\n+ moveToSecondPhase();\n+ }\n+\n+ protected abstract void moveToSecondPhase() throws Exception;\n+\n+ protected abstract String firstPhaseName();\n+}", "filename": "core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java", "status": "added" }, { "diff": "@@ -0,0 +1,142 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. 
See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.search;\n+\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.ActionRunnable;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.search.action.SearchServiceTransportAction;\n+import org.elasticsearch.search.controller.SearchPhaseController;\n+import org.elasticsearch.search.dfs.AggregatedDfs;\n+import org.elasticsearch.search.dfs.DfsSearchResult;\n+import org.elasticsearch.search.fetch.QueryFetchSearchResult;\n+import org.elasticsearch.search.internal.InternalSearchResponse;\n+import org.elasticsearch.search.internal.ShardSearchTransportRequest;\n+import org.elasticsearch.search.query.QuerySearchRequest;\n+import org.elasticsearch.threadpool.ThreadPool;\n+\n+import java.io.IOException;\n+import java.util.concurrent.atomic.AtomicInteger;\n+\n+class SearchDfsQueryAndFetchAsyncAction extends AbstractSearchAsyncAction<DfsSearchResult> {\n+\n+ private final AtomicArray<QueryFetchSearchResult> queryFetchResults;\n+\n+ SearchDfsQueryAndFetchAsyncAction(ESLogger logger, SearchServiceTransportAction searchService,\n+ ClusterService clusterService, IndexNameExpressionResolver indexNameExpressionResolver,\n+ SearchPhaseController searchPhaseController, ThreadPool threadPool,\n+ SearchRequest request, ActionListener<SearchResponse> listener) {\n+ super(logger, searchService, clusterService, indexNameExpressionResolver, searchPhaseController, threadPool, request, listener);\n+ queryFetchResults = new AtomicArray<>(firstResults.length());\n+ }\n+\n+ @Override\n+ protected String firstPhaseName() {\n+ return \"dfs\";\n+ }\n+\n+ @Override\n+ protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request,\n+ ActionListener<DfsSearchResult> listener) {\n+ searchService.sendExecuteDfs(node, request, listener);\n+ }\n+\n+ @Override\n+ protected void moveToSecondPhase() {\n+ final AggregatedDfs dfs = searchPhaseController.aggregateDfs(firstResults);\n+ final AtomicInteger counter = new AtomicInteger(firstResults.asList().size());\n+\n+ for (final AtomicArray.Entry<DfsSearchResult> entry : firstResults.asList()) {\n+ DfsSearchResult dfsResult = entry.value;\n+ DiscoveryNode node = nodes.get(dfsResult.shardTarget().nodeId());\n+ QuerySearchRequest querySearchRequest = new QuerySearchRequest(request, dfsResult.id(), dfs);\n+ executeSecondPhase(entry.index, dfsResult, counter, node, querySearchRequest);\n+ }\n+ }\n+\n+ void executeSecondPhase(final int shardIndex, final DfsSearchResult dfsResult, final AtomicInteger counter,\n+ 
final DiscoveryNode node, final QuerySearchRequest querySearchRequest) {\n+ searchService.sendExecuteFetch(node, querySearchRequest, new ActionListener<QueryFetchSearchResult>() {\n+ @Override\n+ public void onResponse(QueryFetchSearchResult result) {\n+ result.shardTarget(dfsResult.shardTarget());\n+ queryFetchResults.set(shardIndex, result);\n+ if (counter.decrementAndGet() == 0) {\n+ finishHim();\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ try {\n+ onSecondPhaseFailure(t, querySearchRequest, shardIndex, dfsResult, counter);\n+ } finally {\n+ // the query might not have been executed at all (for example because thread pool rejected execution)\n+ // and the search context that was created in dfs phase might not be released.\n+ // release it again to be in the safe side\n+ sendReleaseSearchContext(querySearchRequest.id(), node);\n+ }\n+ }\n+ });\n+ }\n+\n+ void onSecondPhaseFailure(Throwable t, QuerySearchRequest querySearchRequest, int shardIndex, DfsSearchResult dfsResult,\n+ AtomicInteger counter) {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"[{}] Failed to execute query phase\", t, querySearchRequest.id());\n+ }\n+ this.addShardFailure(shardIndex, dfsResult.shardTarget(), t);\n+ successfulOps.decrementAndGet();\n+ if (counter.decrementAndGet() == 0) {\n+ finishHim();\n+ }\n+ }\n+\n+ private void finishHim() {\n+ threadPool.executor(ThreadPool.Names.SEARCH).execute(new ActionRunnable<SearchResponse>(listener) {\n+ @Override\n+ public void doRun() throws IOException {\n+ sortedShardList = searchPhaseController.sortDocs(true, queryFetchResults);\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryFetchResults,\n+ queryFetchResults);\n+ String scrollId = null;\n+ if (request.scroll() != null) {\n+ scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);\n+ }\n+ listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(),\n+ buildTookInMillis(), buildShardFailures()));\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ ReduceSearchPhaseException failure = new ReduceSearchPhaseException(\"query_fetch\", \"\", t, buildShardFailures());\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"failed to reduce search\", failure);\n+ }\n+ super.onFailure(t);\n+ }\n+ });\n+\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryAndFetchAsyncAction.java", "status": "added" }, { "diff": "@@ -0,0 +1,223 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.search;\n+\n+import com.carrotsearch.hppc.IntArrayList;\n+import org.apache.lucene.search.ScoreDoc;\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.ActionRunnable;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.search.SearchShardTarget;\n+import org.elasticsearch.search.action.SearchServiceTransportAction;\n+import org.elasticsearch.search.controller.SearchPhaseController;\n+import org.elasticsearch.search.dfs.AggregatedDfs;\n+import org.elasticsearch.search.dfs.DfsSearchResult;\n+import org.elasticsearch.search.fetch.FetchSearchResult;\n+import org.elasticsearch.search.fetch.ShardFetchSearchRequest;\n+import org.elasticsearch.search.internal.InternalSearchResponse;\n+import org.elasticsearch.search.internal.ShardSearchTransportRequest;\n+import org.elasticsearch.search.query.QuerySearchRequest;\n+import org.elasticsearch.search.query.QuerySearchResult;\n+import org.elasticsearch.threadpool.ThreadPool;\n+\n+import java.io.IOException;\n+import java.util.concurrent.atomic.AtomicInteger;\n+\n+class SearchDfsQueryThenFetchAsyncAction extends AbstractSearchAsyncAction<DfsSearchResult> {\n+\n+ final AtomicArray<QuerySearchResult> queryResults;\n+ final AtomicArray<FetchSearchResult> fetchResults;\n+ final AtomicArray<IntArrayList> docIdsToLoad;\n+\n+ SearchDfsQueryThenFetchAsyncAction(ESLogger logger, SearchServiceTransportAction searchService,\n+ ClusterService clusterService, IndexNameExpressionResolver indexNameExpressionResolver,\n+ SearchPhaseController searchPhaseController, ThreadPool threadPool,\n+ SearchRequest request, ActionListener<SearchResponse> listener) {\n+ super(logger, searchService, clusterService, indexNameExpressionResolver, searchPhaseController, threadPool, request, listener);\n+ queryResults = new AtomicArray<>(firstResults.length());\n+ fetchResults = new AtomicArray<>(firstResults.length());\n+ docIdsToLoad = new AtomicArray<>(firstResults.length());\n+ }\n+\n+ @Override\n+ protected String firstPhaseName() {\n+ return \"dfs\";\n+ }\n+\n+ @Override\n+ protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request,\n+ ActionListener<DfsSearchResult> listener) {\n+ searchService.sendExecuteDfs(node, request, listener);\n+ }\n+\n+ @Override\n+ protected void moveToSecondPhase() {\n+ final AggregatedDfs dfs = searchPhaseController.aggregateDfs(firstResults);\n+ final AtomicInteger counter = new AtomicInteger(firstResults.asList().size());\n+ for (final AtomicArray.Entry<DfsSearchResult> entry : firstResults.asList()) {\n+ DfsSearchResult dfsResult = entry.value;\n+ DiscoveryNode node = nodes.get(dfsResult.shardTarget().nodeId());\n+ QuerySearchRequest querySearchRequest = new QuerySearchRequest(request, dfsResult.id(), dfs);\n+ executeQuery(entry.index, dfsResult, counter, querySearchRequest, node);\n+ }\n+ }\n+\n+ void executeQuery(final int shardIndex, final DfsSearchResult dfsResult, final AtomicInteger counter,\n+ final QuerySearchRequest querySearchRequest, final DiscoveryNode node) {\n+ searchService.sendExecuteQuery(node, querySearchRequest, new ActionListener<QuerySearchResult>() 
{\n+ @Override\n+ public void onResponse(QuerySearchResult result) {\n+ result.shardTarget(dfsResult.shardTarget());\n+ queryResults.set(shardIndex, result);\n+ if (counter.decrementAndGet() == 0) {\n+ executeFetchPhase();\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ try {\n+ onQueryFailure(t, querySearchRequest, shardIndex, dfsResult, counter);\n+ } finally {\n+ // the query might not have been executed at all (for example because thread pool rejected\n+ // execution) and the search context that was created in dfs phase might not be released.\n+ // release it again to be in the safe side\n+ sendReleaseSearchContext(querySearchRequest.id(), node);\n+ }\n+ }\n+ });\n+ }\n+\n+ void onQueryFailure(Throwable t, QuerySearchRequest querySearchRequest, int shardIndex, DfsSearchResult dfsResult,\n+ AtomicInteger counter) {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"[{}] Failed to execute query phase\", t, querySearchRequest.id());\n+ }\n+ this.addShardFailure(shardIndex, dfsResult.shardTarget(), t);\n+ successfulOps.decrementAndGet();\n+ if (counter.decrementAndGet() == 0) {\n+ if (successfulOps.get() == 0) {\n+ listener.onFailure(new SearchPhaseExecutionException(\"query\", \"all shards failed\", buildShardFailures()));\n+ } else {\n+ executeFetchPhase();\n+ }\n+ }\n+ }\n+\n+ void executeFetchPhase() {\n+ try {\n+ innerExecuteFetchPhase();\n+ } catch (Throwable e) {\n+ listener.onFailure(new ReduceSearchPhaseException(\"query\", \"\", e, buildShardFailures()));\n+ }\n+ }\n+\n+ void innerExecuteFetchPhase() throws Exception {\n+ boolean useScroll = request.scroll() != null;\n+ sortedShardList = searchPhaseController.sortDocs(useScroll, queryResults);\n+ searchPhaseController.fillDocIdsToLoad(docIdsToLoad, sortedShardList);\n+\n+ if (docIdsToLoad.asList().isEmpty()) {\n+ finishHim();\n+ return;\n+ }\n+\n+ final ScoreDoc[] lastEmittedDocPerShard = searchPhaseController.getLastEmittedDocPerShard(\n+ request, sortedShardList, firstResults.length()\n+ );\n+ final AtomicInteger counter = new AtomicInteger(docIdsToLoad.asList().size());\n+ for (final AtomicArray.Entry<IntArrayList> entry : docIdsToLoad.asList()) {\n+ QuerySearchResult queryResult = queryResults.get(entry.index);\n+ DiscoveryNode node = nodes.get(queryResult.shardTarget().nodeId());\n+ ShardFetchSearchRequest fetchSearchRequest = createFetchRequest(queryResult, entry, lastEmittedDocPerShard);\n+ executeFetch(entry.index, queryResult.shardTarget(), counter, fetchSearchRequest, node);\n+ }\n+ }\n+\n+ void executeFetch(final int shardIndex, final SearchShardTarget shardTarget, final AtomicInteger counter,\n+ final ShardFetchSearchRequest fetchSearchRequest, DiscoveryNode node) {\n+ searchService.sendExecuteFetch(node, fetchSearchRequest, new ActionListener<FetchSearchResult>() {\n+ @Override\n+ public void onResponse(FetchSearchResult result) {\n+ result.shardTarget(shardTarget);\n+ fetchResults.set(shardIndex, result);\n+ if (counter.decrementAndGet() == 0) {\n+ finishHim();\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ // the search context might not be cleared on the node where the fetch was executed for example\n+ // because the action was rejected by the thread pool. in this case we need to send a dedicated\n+ // request to clear the search context. 
by setting docIdsToLoad to null, the context will be cleared\n+ // in TransportSearchTypeAction.releaseIrrelevantSearchContexts() after the search request is done.\n+ docIdsToLoad.set(shardIndex, null);\n+ onFetchFailure(t, fetchSearchRequest, shardIndex, shardTarget, counter);\n+ }\n+ });\n+ }\n+\n+ void onFetchFailure(Throwable t, ShardFetchSearchRequest fetchSearchRequest, int shardIndex,\n+ SearchShardTarget shardTarget, AtomicInteger counter) {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"[{}] Failed to execute fetch phase\", t, fetchSearchRequest.id());\n+ }\n+ this.addShardFailure(shardIndex, shardTarget, t);\n+ successfulOps.decrementAndGet();\n+ if (counter.decrementAndGet() == 0) {\n+ finishHim();\n+ }\n+ }\n+\n+ private void finishHim() {\n+ threadPool.executor(ThreadPool.Names.SEARCH).execute(new ActionRunnable<SearchResponse>(listener) {\n+ @Override\n+ public void doRun() throws IOException {\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryResults,\n+ fetchResults);\n+ String scrollId = null;\n+ if (request.scroll() != null) {\n+ scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);\n+ }\n+ listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(),\n+ buildTookInMillis(), buildShardFailures()));\n+ releaseIrrelevantSearchContexts(queryResults, docIdsToLoad);\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ try {\n+ ReduceSearchPhaseException failure = new ReduceSearchPhaseException(\"merge\", \"\", t, buildShardFailures());\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"failed to reduce search\", failure);\n+ }\n+ super.onFailure(failure);\n+ } finally {\n+ releaseIrrelevantSearchContexts(queryResults, docIdsToLoad);\n+ }\n+ }\n+ });\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java", "status": "added" }, { "diff": "@@ -0,0 +1,84 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.search;\n+\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.ActionRunnable;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.search.action.SearchServiceTransportAction;\n+import org.elasticsearch.search.controller.SearchPhaseController;\n+import org.elasticsearch.search.fetch.QueryFetchSearchResult;\n+import org.elasticsearch.search.internal.InternalSearchResponse;\n+import org.elasticsearch.search.internal.ShardSearchTransportRequest;\n+import org.elasticsearch.threadpool.ThreadPool;\n+\n+import java.io.IOException;\n+\n+class SearchQueryAndFetchAsyncAction extends AbstractSearchAsyncAction<QueryFetchSearchResult> {\n+\n+ SearchQueryAndFetchAsyncAction(ESLogger logger, SearchServiceTransportAction searchService,\n+ ClusterService clusterService, IndexNameExpressionResolver indexNameExpressionResolver,\n+ SearchPhaseController searchPhaseController, ThreadPool threadPool,\n+ SearchRequest request, ActionListener<SearchResponse> listener) {\n+ super(logger, searchService, clusterService, indexNameExpressionResolver, searchPhaseController, threadPool, request, listener);\n+ }\n+\n+ @Override\n+ protected String firstPhaseName() {\n+ return \"query_fetch\";\n+ }\n+\n+ @Override\n+ protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request,\n+ ActionListener<QueryFetchSearchResult> listener) {\n+ searchService.sendExecuteFetch(node, request, listener);\n+ }\n+\n+ @Override\n+ protected void moveToSecondPhase() throws Exception {\n+ threadPool.executor(ThreadPool.Names.SEARCH).execute(new ActionRunnable<SearchResponse>(listener) {\n+ @Override\n+ public void doRun() throws IOException {\n+ boolean useScroll = request.scroll() != null;\n+ sortedShardList = searchPhaseController.sortDocs(useScroll, firstResults);\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, firstResults,\n+ firstResults);\n+ String scrollId = null;\n+ if (request.scroll() != null) {\n+ scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);\n+ }\n+ listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(),\n+ buildTookInMillis(), buildShardFailures()));\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ ReduceSearchPhaseException failure = new ReduceSearchPhaseException(\"merge\", \"\", t, buildShardFailures());\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"failed to reduce search\", failure);\n+ }\n+ super.onFailure(failure);\n+ }\n+ });\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchQueryAndFetchAsyncAction.java", "status": "added" }, { "diff": "@@ -0,0 +1,157 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.search;\n+\n+import com.carrotsearch.hppc.IntArrayList;\n+import org.apache.lucene.search.ScoreDoc;\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.ActionRunnable;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.search.SearchShardTarget;\n+import org.elasticsearch.search.action.SearchServiceTransportAction;\n+import org.elasticsearch.search.controller.SearchPhaseController;\n+import org.elasticsearch.search.fetch.FetchSearchResult;\n+import org.elasticsearch.search.fetch.ShardFetchSearchRequest;\n+import org.elasticsearch.search.internal.InternalSearchResponse;\n+import org.elasticsearch.search.internal.ShardSearchTransportRequest;\n+import org.elasticsearch.search.query.QuerySearchResultProvider;\n+import org.elasticsearch.threadpool.ThreadPool;\n+\n+import java.io.IOException;\n+import java.util.concurrent.atomic.AtomicInteger;\n+\n+class SearchQueryThenFetchAsyncAction extends AbstractSearchAsyncAction<QuerySearchResultProvider> {\n+\n+ final AtomicArray<FetchSearchResult> fetchResults;\n+ final AtomicArray<IntArrayList> docIdsToLoad;\n+\n+ SearchQueryThenFetchAsyncAction(ESLogger logger, SearchServiceTransportAction searchService,\n+ ClusterService clusterService, IndexNameExpressionResolver indexNameExpressionResolver,\n+ SearchPhaseController searchPhaseController, ThreadPool threadPool,\n+ SearchRequest request, ActionListener<SearchResponse> listener) {\n+ super(logger, searchService, clusterService, indexNameExpressionResolver, searchPhaseController, threadPool, request, listener);\n+ fetchResults = new AtomicArray<>(firstResults.length());\n+ docIdsToLoad = new AtomicArray<>(firstResults.length());\n+ }\n+\n+ @Override\n+ protected String firstPhaseName() {\n+ return \"query\";\n+ }\n+\n+ @Override\n+ protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request,\n+ ActionListener<QuerySearchResultProvider> listener) {\n+ searchService.sendExecuteQuery(node, request, listener);\n+ }\n+\n+ @Override\n+ protected void moveToSecondPhase() throws Exception {\n+ boolean useScroll = request.scroll() != null;\n+ sortedShardList = searchPhaseController.sortDocs(useScroll, firstResults);\n+ searchPhaseController.fillDocIdsToLoad(docIdsToLoad, sortedShardList);\n+\n+ if (docIdsToLoad.asList().isEmpty()) {\n+ finishHim();\n+ return;\n+ }\n+\n+ final ScoreDoc[] lastEmittedDocPerShard = searchPhaseController.getLastEmittedDocPerShard(\n+ request, sortedShardList, firstResults.length()\n+ );\n+ final AtomicInteger counter = new AtomicInteger(docIdsToLoad.asList().size());\n+ for (AtomicArray.Entry<IntArrayList> 
entry : docIdsToLoad.asList()) {\n+ QuerySearchResultProvider queryResult = firstResults.get(entry.index);\n+ DiscoveryNode node = nodes.get(queryResult.shardTarget().nodeId());\n+ ShardFetchSearchRequest fetchSearchRequest = createFetchRequest(queryResult.queryResult(), entry, lastEmittedDocPerShard);\n+ executeFetch(entry.index, queryResult.shardTarget(), counter, fetchSearchRequest, node);\n+ }\n+ }\n+\n+ void executeFetch(final int shardIndex, final SearchShardTarget shardTarget, final AtomicInteger counter,\n+ final ShardFetchSearchRequest fetchSearchRequest, DiscoveryNode node) {\n+ searchService.sendExecuteFetch(node, fetchSearchRequest, new ActionListener<FetchSearchResult>() {\n+ @Override\n+ public void onResponse(FetchSearchResult result) {\n+ result.shardTarget(shardTarget);\n+ fetchResults.set(shardIndex, result);\n+ if (counter.decrementAndGet() == 0) {\n+ finishHim();\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ // the search context might not be cleared on the node where the fetch was executed for example\n+ // because the action was rejected by the thread pool. in this case we need to send a dedicated\n+ // request to clear the search context. by setting docIdsToLoad to null, the context will be cleared\n+ // in TransportSearchTypeAction.releaseIrrelevantSearchContexts() after the search request is done.\n+ docIdsToLoad.set(shardIndex, null);\n+ onFetchFailure(t, fetchSearchRequest, shardIndex, shardTarget, counter);\n+ }\n+ });\n+ }\n+\n+ void onFetchFailure(Throwable t, ShardFetchSearchRequest fetchSearchRequest, int shardIndex, SearchShardTarget shardTarget,\n+ AtomicInteger counter) {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"[{}] Failed to execute fetch phase\", t, fetchSearchRequest.id());\n+ }\n+ this.addShardFailure(shardIndex, shardTarget, t);\n+ successfulOps.decrementAndGet();\n+ if (counter.decrementAndGet() == 0) {\n+ finishHim();\n+ }\n+ }\n+\n+ private void finishHim() {\n+ threadPool.executor(ThreadPool.Names.SEARCH).execute(new ActionRunnable<SearchResponse>(listener) {\n+ @Override\n+ public void doRun() throws IOException {\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, firstResults,\n+ fetchResults);\n+ String scrollId = null;\n+ if (request.scroll() != null) {\n+ scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);\n+ }\n+ listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps,\n+ successfulOps.get(), buildTookInMillis(), buildShardFailures()));\n+ releaseIrrelevantSearchContexts(firstResults, docIdsToLoad);\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ try {\n+ ReduceSearchPhaseException failure = new ReduceSearchPhaseException(\"fetch\", \"\", t, buildShardFailures());\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"failed to reduce search\", failure);\n+ }\n+ super.onFailure(failure);\n+ } finally {\n+ releaseIrrelevantSearchContexts(firstResults, docIdsToLoad);\n+ }\n+ }\n+ });\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java", "status": "added" }, { "diff": "@@ -0,0 +1,181 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.search;\n+\n+import org.apache.lucene.search.ScoreDoc;\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.search.action.SearchServiceTransportAction;\n+import org.elasticsearch.search.controller.SearchPhaseController;\n+import org.elasticsearch.search.fetch.QueryFetchSearchResult;\n+import org.elasticsearch.search.fetch.ScrollQueryFetchSearchResult;\n+import org.elasticsearch.search.internal.InternalScrollSearchRequest;\n+import org.elasticsearch.search.internal.InternalSearchResponse;\n+\n+import java.util.List;\n+import java.util.concurrent.atomic.AtomicInteger;\n+\n+import static org.elasticsearch.action.search.TransportSearchHelper.internalScrollSearchRequest;\n+\n+class SearchScrollQueryAndFetchAsyncAction extends AbstractAsyncAction {\n+\n+ private final ESLogger logger;\n+ private final SearchPhaseController searchPhaseController;\n+ private final SearchServiceTransportAction searchService;\n+ private final SearchScrollRequest request;\n+ private final ActionListener<SearchResponse> listener;\n+ private final ParsedScrollId scrollId;\n+ private final DiscoveryNodes nodes;\n+ private volatile AtomicArray<ShardSearchFailure> shardFailures;\n+ private final AtomicArray<QueryFetchSearchResult> queryFetchResults;\n+ private final AtomicInteger successfulOps;\n+ private final AtomicInteger counter;\n+\n+ SearchScrollQueryAndFetchAsyncAction(ESLogger logger, ClusterService clusterService,\n+ SearchServiceTransportAction searchService, SearchPhaseController searchPhaseController,\n+ SearchScrollRequest request, ParsedScrollId scrollId, ActionListener<SearchResponse> listener) {\n+ this.logger = logger;\n+ this.searchPhaseController = searchPhaseController;\n+ this.searchService = searchService;\n+ this.request = request;\n+ this.listener = listener;\n+ this.scrollId = scrollId;\n+ this.nodes = clusterService.state().nodes();\n+ this.successfulOps = new AtomicInteger(scrollId.getContext().length);\n+ this.counter = new AtomicInteger(scrollId.getContext().length);\n+\n+ this.queryFetchResults = new AtomicArray<>(scrollId.getContext().length);\n+ }\n+\n+ protected final ShardSearchFailure[] buildShardFailures() {\n+ if (shardFailures == null) {\n+ return ShardSearchFailure.EMPTY_ARRAY;\n+ }\n+ List<AtomicArray.Entry<ShardSearchFailure>> entries = shardFailures.asList();\n+ ShardSearchFailure[] failures = new ShardSearchFailure[entries.size()];\n+ for (int i = 0; i < failures.length; i++) {\n+ failures[i] = entries.get(i).value;\n+ }\n+ return failures;\n+ }\n+\n+ // we do our best to return the shard failures, but its ok if its not fully concurrently 
safe\n+ // we simply try and return as much as possible\n+ protected final void addShardFailure(final int shardIndex, ShardSearchFailure failure) {\n+ if (shardFailures == null) {\n+ shardFailures = new AtomicArray<>(scrollId.getContext().length);\n+ }\n+ shardFailures.set(shardIndex, failure);\n+ }\n+\n+ public void start() {\n+ if (scrollId.getContext().length == 0) {\n+ listener.onFailure(new SearchPhaseExecutionException(\"query\", \"no nodes to search on\", ShardSearchFailure.EMPTY_ARRAY));\n+ return;\n+ }\n+\n+ ScrollIdForNode[] context = scrollId.getContext();\n+ for (int i = 0; i < context.length; i++) {\n+ ScrollIdForNode target = context[i];\n+ DiscoveryNode node = nodes.get(target.getNode());\n+ if (node != null) {\n+ executePhase(i, node, target.getScrollId());\n+ } else {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"Node [\" + target.getNode() + \"] not available for scroll request [\" + scrollId.getSource() + \"]\");\n+ }\n+ successfulOps.decrementAndGet();\n+ if (counter.decrementAndGet() == 0) {\n+ finishHim();\n+ }\n+ }\n+ }\n+\n+ for (ScrollIdForNode target : scrollId.getContext()) {\n+ DiscoveryNode node = nodes.get(target.getNode());\n+ if (node == null) {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"Node [\" + target.getNode() + \"] not available for scroll request [\" + scrollId.getSource() + \"]\");\n+ }\n+ successfulOps.decrementAndGet();\n+ if (counter.decrementAndGet() == 0) {\n+ finishHim();\n+ }\n+ }\n+ }\n+ }\n+\n+ void executePhase(final int shardIndex, DiscoveryNode node, final long searchId) {\n+ InternalScrollSearchRequest internalRequest = internalScrollSearchRequest(searchId, request);\n+ searchService.sendExecuteFetch(node, internalRequest, new ActionListener<ScrollQueryFetchSearchResult>() {\n+ @Override\n+ public void onResponse(ScrollQueryFetchSearchResult result) {\n+ queryFetchResults.set(shardIndex, result.result());\n+ if (counter.decrementAndGet() == 0) {\n+ finishHim();\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ onPhaseFailure(t, searchId, shardIndex);\n+ }\n+ });\n+ }\n+\n+ private void onPhaseFailure(Throwable t, long searchId, int shardIndex) {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"[{}] Failed to execute query phase\", t, searchId);\n+ }\n+ addShardFailure(shardIndex, new ShardSearchFailure(t));\n+ successfulOps.decrementAndGet();\n+ if (counter.decrementAndGet() == 0) {\n+ if (successfulOps.get() == 0) {\n+ listener.onFailure(new SearchPhaseExecutionException(\"query_fetch\", \"all shards failed\", t, buildShardFailures()));\n+ } else {\n+ finishHim();\n+ }\n+ }\n+ }\n+\n+ private void finishHim() {\n+ try {\n+ innerFinishHim();\n+ } catch (Throwable e) {\n+ listener.onFailure(new ReduceSearchPhaseException(\"fetch\", \"\", e, buildShardFailures()));\n+ }\n+ }\n+\n+ private void innerFinishHim() throws Exception {\n+ ScoreDoc[] sortedShardList = searchPhaseController.sortDocs(true, queryFetchResults);\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryFetchResults,\n+ queryFetchResults);\n+ String scrollId = null;\n+ if (request.scroll() != null) {\n+ scrollId = request.scrollId();\n+ }\n+ listener.onResponse(new SearchResponse(internalResponse, scrollId, this.scrollId.getContext().length, successfulOps.get(),\n+ buildTookInMillis(), buildShardFailures()));\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryAndFetchAsyncAction.java", "status": "added" }, { "diff": "@@ -0,0 +1,226 @@\n+/*\n+ * 
Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.search;\n+\n+import com.carrotsearch.hppc.IntArrayList;\n+import org.apache.lucene.search.ScoreDoc;\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.search.action.SearchServiceTransportAction;\n+import org.elasticsearch.search.controller.SearchPhaseController;\n+import org.elasticsearch.search.fetch.FetchSearchResult;\n+import org.elasticsearch.search.fetch.ShardFetchRequest;\n+import org.elasticsearch.search.internal.InternalScrollSearchRequest;\n+import org.elasticsearch.search.internal.InternalSearchResponse;\n+import org.elasticsearch.search.query.QuerySearchResult;\n+import org.elasticsearch.search.query.ScrollQuerySearchResult;\n+\n+import java.util.List;\n+import java.util.concurrent.atomic.AtomicInteger;\n+\n+import static org.elasticsearch.action.search.TransportSearchHelper.internalScrollSearchRequest;\n+\n+class SearchScrollQueryThenFetchAsyncAction extends AbstractAsyncAction {\n+\n+ private final ESLogger logger;\n+ private final SearchServiceTransportAction searchService;\n+ private final SearchPhaseController searchPhaseController;\n+ private final SearchScrollRequest request;\n+ private final ActionListener<SearchResponse> listener;\n+ private final ParsedScrollId scrollId;\n+ private final DiscoveryNodes nodes;\n+ private volatile AtomicArray<ShardSearchFailure> shardFailures;\n+ final AtomicArray<QuerySearchResult> queryResults;\n+ final AtomicArray<FetchSearchResult> fetchResults;\n+ private volatile ScoreDoc[] sortedShardList;\n+ private final AtomicInteger successfulOps;\n+\n+ SearchScrollQueryThenFetchAsyncAction(ESLogger logger, ClusterService clusterService,\n+ SearchServiceTransportAction searchService, SearchPhaseController searchPhaseController,\n+ SearchScrollRequest request, ParsedScrollId scrollId, ActionListener<SearchResponse> listener) {\n+ this.logger = logger;\n+ this.searchService = searchService;\n+ this.searchPhaseController = searchPhaseController;\n+ this.request = request;\n+ this.listener = listener;\n+ this.scrollId = scrollId;\n+ this.nodes = clusterService.state().nodes();\n+ this.successfulOps = new AtomicInteger(scrollId.getContext().length);\n+ this.queryResults = new AtomicArray<>(scrollId.getContext().length);\n+ this.fetchResults = new AtomicArray<>(scrollId.getContext().length);\n+ }\n+\n+ protected final ShardSearchFailure[] buildShardFailures() {\n+ if (shardFailures == null) {\n+ return 
ShardSearchFailure.EMPTY_ARRAY;\n+ }\n+ List<AtomicArray.Entry<ShardSearchFailure>> entries = shardFailures.asList();\n+ ShardSearchFailure[] failures = new ShardSearchFailure[entries.size()];\n+ for (int i = 0; i < failures.length; i++) {\n+ failures[i] = entries.get(i).value;\n+ }\n+ return failures;\n+ }\n+\n+ // we do our best to return the shard failures, but its ok if its not fully concurrently safe\n+ // we simply try and return as much as possible\n+ protected final void addShardFailure(final int shardIndex, ShardSearchFailure failure) {\n+ if (shardFailures == null) {\n+ shardFailures = new AtomicArray<>(scrollId.getContext().length);\n+ }\n+ shardFailures.set(shardIndex, failure);\n+ }\n+\n+ public void start() {\n+ if (scrollId.getContext().length == 0) {\n+ listener.onFailure(new SearchPhaseExecutionException(\"query\", \"no nodes to search on\", ShardSearchFailure.EMPTY_ARRAY));\n+ return;\n+ }\n+ final AtomicInteger counter = new AtomicInteger(scrollId.getContext().length);\n+\n+ ScrollIdForNode[] context = scrollId.getContext();\n+ for (int i = 0; i < context.length; i++) {\n+ ScrollIdForNode target = context[i];\n+ DiscoveryNode node = nodes.get(target.getNode());\n+ if (node != null) {\n+ executeQueryPhase(i, counter, node, target.getScrollId());\n+ } else {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"Node [\" + target.getNode() + \"] not available for scroll request [\" + scrollId.getSource() + \"]\");\n+ }\n+ successfulOps.decrementAndGet();\n+ if (counter.decrementAndGet() == 0) {\n+ try {\n+ executeFetchPhase();\n+ } catch (Throwable e) {\n+ listener.onFailure(new SearchPhaseExecutionException(\"query\", \"Fetch failed\", e, ShardSearchFailure.EMPTY_ARRAY));\n+ return;\n+ }\n+ }\n+ }\n+ }\n+ }\n+\n+ private void executeQueryPhase(final int shardIndex, final AtomicInteger counter, DiscoveryNode node, final long searchId) {\n+ InternalScrollSearchRequest internalRequest = internalScrollSearchRequest(searchId, request);\n+ searchService.sendExecuteQuery(node, internalRequest, new ActionListener<ScrollQuerySearchResult>() {\n+ @Override\n+ public void onResponse(ScrollQuerySearchResult result) {\n+ queryResults.set(shardIndex, result.queryResult());\n+ if (counter.decrementAndGet() == 0) {\n+ try {\n+ executeFetchPhase();\n+ } catch (Throwable e) {\n+ onFailure(e);\n+ }\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ onQueryPhaseFailure(shardIndex, counter, searchId, t);\n+ }\n+ });\n+ }\n+\n+ void onQueryPhaseFailure(final int shardIndex, final AtomicInteger counter, final long searchId, Throwable t) {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"[{}] Failed to execute query phase\", t, searchId);\n+ }\n+ addShardFailure(shardIndex, new ShardSearchFailure(t));\n+ successfulOps.decrementAndGet();\n+ if (counter.decrementAndGet() == 0) {\n+ if (successfulOps.get() == 0) {\n+ listener.onFailure(new SearchPhaseExecutionException(\"query\", \"all shards failed\", t, buildShardFailures()));\n+ } else {\n+ try {\n+ executeFetchPhase();\n+ } catch (Throwable e) {\n+ listener.onFailure(new SearchPhaseExecutionException(\"query\", \"Fetch failed\", e, ShardSearchFailure.EMPTY_ARRAY));\n+ }\n+ }\n+ }\n+ }\n+\n+ private void executeFetchPhase() throws Exception {\n+ sortedShardList = searchPhaseController.sortDocs(true, queryResults);\n+ AtomicArray<IntArrayList> docIdsToLoad = new AtomicArray<>(queryResults.length());\n+ searchPhaseController.fillDocIdsToLoad(docIdsToLoad, sortedShardList);\n+\n+ if (docIdsToLoad.asList().isEmpty()) {\n+ 
finishHim();\n+ return;\n+ }\n+\n+\n+ final ScoreDoc[] lastEmittedDocPerShard = searchPhaseController.getLastEmittedDocPerShard(sortedShardList, queryResults.length());\n+ final AtomicInteger counter = new AtomicInteger(docIdsToLoad.asList().size());\n+ for (final AtomicArray.Entry<IntArrayList> entry : docIdsToLoad.asList()) {\n+ IntArrayList docIds = entry.value;\n+ final QuerySearchResult querySearchResult = queryResults.get(entry.index);\n+ ScoreDoc lastEmittedDoc = lastEmittedDocPerShard[entry.index];\n+ ShardFetchRequest shardFetchRequest = new ShardFetchRequest(querySearchResult.id(), docIds, lastEmittedDoc);\n+ DiscoveryNode node = nodes.get(querySearchResult.shardTarget().nodeId());\n+ searchService.sendExecuteFetchScroll(node, shardFetchRequest, new ActionListener<FetchSearchResult>() {\n+ @Override\n+ public void onResponse(FetchSearchResult result) {\n+ result.shardTarget(querySearchResult.shardTarget());\n+ fetchResults.set(entry.index, result);\n+ if (counter.decrementAndGet() == 0) {\n+ finishHim();\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"Failed to execute fetch phase\", t);\n+ }\n+ successfulOps.decrementAndGet();\n+ if (counter.decrementAndGet() == 0) {\n+ finishHim();\n+ }\n+ }\n+ });\n+ }\n+ }\n+\n+ private void finishHim() {\n+ try {\n+ innerFinishHim();\n+ } catch (Throwable e) {\n+ listener.onFailure(new ReduceSearchPhaseException(\"fetch\", \"\", e, buildShardFailures()));\n+ }\n+ }\n+\n+ private void innerFinishHim() {\n+ InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryResults, fetchResults);\n+ String scrollId = null;\n+ if (request.scroll() != null) {\n+ scrollId = request.scrollId();\n+ }\n+ listener.onResponse(new SearchResponse(internalResponse, scrollId, this.scrollId.getContext().length, successfulOps.get(),\n+ buildTookInMillis(), buildShardFailures()));\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryThenFetchAsyncAction.java", "status": "added" }, { "diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.action.search;\n \n import org.elasticsearch.action.ActionListener;\n-import org.elasticsearch.action.search.type.ScrollIdForNode;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.HandledTransportAction;\n import org.elasticsearch.cluster.ClusterService;\n@@ -41,7 +40,7 @@\n import java.util.concurrent.atomic.AtomicInteger;\n import java.util.concurrent.atomic.AtomicReference;\n \n-import static org.elasticsearch.action.search.type.TransportSearchHelper.parseScrollId;\n+import static org.elasticsearch.action.search.TransportSearchHelper.parseScrollId;\n \n /**\n */", "filename": "core/src/main/java/org/elasticsearch/action/search/TransportClearScrollAction.java", "status": "modified" }, { "diff": "@@ -20,10 +20,6 @@\n package org.elasticsearch.action.search;\n \n import org.elasticsearch.action.ActionListener;\n-import org.elasticsearch.action.search.type.TransportSearchDfsQueryAndFetchAction;\n-import org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction;\n-import org.elasticsearch.action.search.type.TransportSearchQueryAndFetchAction;\n-import org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.HandledTransportAction;\n import org.elasticsearch.cluster.ClusterService;\n@@ -33,13 +29,14 @@\n import 
org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexNotFoundException;\n import org.elasticsearch.indices.IndexClosedException;\n+import org.elasticsearch.search.action.SearchServiceTransportAction;\n+import org.elasticsearch.search.controller.SearchPhaseController;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n import java.util.Map;\n import java.util.Set;\n \n-import static org.elasticsearch.action.search.SearchType.DFS_QUERY_THEN_FETCH;\n import static org.elasticsearch.action.search.SearchType.QUERY_AND_FETCH;\n \n /**\n@@ -48,25 +45,18 @@\n public class TransportSearchAction extends HandledTransportAction<SearchRequest, SearchResponse> {\n \n private final ClusterService clusterService;\n- private final TransportSearchDfsQueryThenFetchAction dfsQueryThenFetchAction;\n- private final TransportSearchQueryThenFetchAction queryThenFetchAction;\n- private final TransportSearchDfsQueryAndFetchAction dfsQueryAndFetchAction;\n- private final TransportSearchQueryAndFetchAction queryAndFetchAction;\n+ private final SearchServiceTransportAction searchService;\n+ private final SearchPhaseController searchPhaseController;\n \n @Inject\n- public TransportSearchAction(Settings settings, ThreadPool threadPool,\n- TransportService transportService, ClusterService clusterService,\n- TransportSearchDfsQueryThenFetchAction dfsQueryThenFetchAction,\n- TransportSearchQueryThenFetchAction queryThenFetchAction,\n- TransportSearchDfsQueryAndFetchAction dfsQueryAndFetchAction,\n- TransportSearchQueryAndFetchAction queryAndFetchAction,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n+ public TransportSearchAction(Settings settings, ThreadPool threadPool, SearchPhaseController searchPhaseController,\n+ TransportService transportService, SearchServiceTransportAction searchService,\n+ ClusterService clusterService, ActionFilters actionFilters, IndexNameExpressionResolver\n+ indexNameExpressionResolver) {\n super(settings, SearchAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, SearchRequest::new);\n+ this.searchPhaseController = searchPhaseController;\n+ this.searchService = searchService;\n this.clusterService = clusterService;\n- this.dfsQueryThenFetchAction = dfsQueryThenFetchAction;\n- this.queryThenFetchAction = queryThenFetchAction;\n- this.dfsQueryAndFetchAction = dfsQueryAndFetchAction;\n- this.queryAndFetchAction = queryAndFetchAction;\n }\n \n @Override\n@@ -75,7 +65,8 @@ protected void doExecute(SearchRequest searchRequest, ActionListener<SearchRespo\n try {\n ClusterState clusterState = clusterService.state();\n String[] concreteIndices = indexNameExpressionResolver.concreteIndices(clusterState, searchRequest);\n- Map<String, Set<String>> routingMap = indexNameExpressionResolver.resolveSearchRouting(clusterState, searchRequest.routing(), searchRequest.indices());\n+ Map<String, Set<String>> routingMap = indexNameExpressionResolver.resolveSearchRouting(clusterState,\n+ searchRequest.routing(), searchRequest.indices());\n int shardCount = clusterService.operationRouting().searchShardsCount(clusterState, concreteIndices, routingMap);\n if (shardCount == 1) {\n // if we only have one group, then we always want Q_A_F, no need for DFS, and no need to do THEN since we hit one shard\n@@ -86,16 +77,28 @@ protected void doExecute(SearchRequest searchRequest, ActionListener<SearchRespo\n } catch (Exception e) {\n logger.debug(\"failed to 
optimize search type, continue as normal\", e);\n }\n- if (searchRequest.searchType() == DFS_QUERY_THEN_FETCH) {\n- dfsQueryThenFetchAction.execute(searchRequest, listener);\n- } else if (searchRequest.searchType() == SearchType.QUERY_THEN_FETCH) {\n- queryThenFetchAction.execute(searchRequest, listener);\n- } else if (searchRequest.searchType() == SearchType.DFS_QUERY_AND_FETCH) {\n- dfsQueryAndFetchAction.execute(searchRequest, listener);\n- } else if (searchRequest.searchType() == SearchType.QUERY_AND_FETCH) {\n- queryAndFetchAction.execute(searchRequest, listener);\n- } else {\n- throw new IllegalStateException(\"Unknown search type: [\" + searchRequest.searchType() + \"]\");\n+\n+ AbstractSearchAsyncAction searchAsyncAction;\n+ switch(searchRequest.searchType()) {\n+ case DFS_QUERY_THEN_FETCH:\n+ searchAsyncAction = new SearchDfsQueryThenFetchAsyncAction(logger, searchService, clusterService,\n+ indexNameExpressionResolver, searchPhaseController, threadPool, searchRequest, listener);\n+ break;\n+ case QUERY_THEN_FETCH:\n+ searchAsyncAction = new SearchQueryThenFetchAsyncAction(logger, searchService, clusterService,\n+ indexNameExpressionResolver, searchPhaseController, threadPool, searchRequest, listener);\n+ break;\n+ case DFS_QUERY_AND_FETCH:\n+ searchAsyncAction = new SearchDfsQueryAndFetchAsyncAction(logger, searchService, clusterService,\n+ indexNameExpressionResolver, searchPhaseController, threadPool, searchRequest, listener);\n+ break;\n+ case QUERY_AND_FETCH:\n+ searchAsyncAction = new SearchQueryAndFetchAsyncAction(logger, searchService, clusterService,\n+ indexNameExpressionResolver, searchPhaseController, threadPool, searchRequest, listener);\n+ break;\n+ default:\n+ throw new IllegalStateException(\"Unknown search type: [\" + searchRequest.searchType() + \"]\");\n }\n+ searchAsyncAction.start();\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java", "status": "modified" }, { "diff": "@@ -20,51 +20,60 @@\n package org.elasticsearch.action.search;\n \n import org.elasticsearch.action.ActionListener;\n-import org.elasticsearch.action.search.type.ParsedScrollId;\n-import org.elasticsearch.action.search.type.TransportSearchScrollQueryAndFetchAction;\n-import org.elasticsearch.action.search.type.TransportSearchScrollQueryThenFetchAction;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.HandledTransportAction;\n+import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.search.action.SearchServiceTransportAction;\n+import org.elasticsearch.search.controller.SearchPhaseController;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n-import static org.elasticsearch.action.search.type.ParsedScrollId.QUERY_AND_FETCH_TYPE;\n-import static org.elasticsearch.action.search.type.ParsedScrollId.QUERY_THEN_FETCH_TYPE;\n-import static org.elasticsearch.action.search.type.TransportSearchHelper.parseScrollId;\n+import static org.elasticsearch.action.search.ParsedScrollId.QUERY_AND_FETCH_TYPE;\n+import static org.elasticsearch.action.search.ParsedScrollId.QUERY_THEN_FETCH_TYPE;\n+import static org.elasticsearch.action.search.TransportSearchHelper.parseScrollId;\n \n /**\n *\n */\n public class TransportSearchScrollAction extends 
HandledTransportAction<SearchScrollRequest, SearchResponse> {\n \n- private final TransportSearchScrollQueryThenFetchAction queryThenFetchAction;\n- private final TransportSearchScrollQueryAndFetchAction queryAndFetchAction;\n+ private final ClusterService clusterService;\n+ private final SearchServiceTransportAction searchService;\n+ private final SearchPhaseController searchPhaseController;\n \n @Inject\n public TransportSearchScrollAction(Settings settings, ThreadPool threadPool, TransportService transportService,\n- TransportSearchScrollQueryThenFetchAction queryThenFetchAction,\n- TransportSearchScrollQueryAndFetchAction queryAndFetchAction,\n- ActionFilters actionFilters,\n- IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, SearchScrollAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, SearchScrollRequest::new);\n- this.queryThenFetchAction = queryThenFetchAction;\n- this.queryAndFetchAction = queryAndFetchAction;\n+ ClusterService clusterService, SearchServiceTransportAction searchService,\n+ SearchPhaseController searchPhaseController,\n+ ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n+ super(settings, SearchScrollAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver,\n+ SearchScrollRequest::new);\n+ this.clusterService = clusterService;\n+ this.searchService = searchService;\n+ this.searchPhaseController = searchPhaseController;\n }\n \n @Override\n protected void doExecute(SearchScrollRequest request, ActionListener<SearchResponse> listener) {\n try {\n ParsedScrollId scrollId = parseScrollId(request.scrollId());\n- if (scrollId.getType().equals(QUERY_THEN_FETCH_TYPE)) {\n- queryThenFetchAction.execute(request, scrollId, listener);\n- } else if (scrollId.getType().equals(QUERY_AND_FETCH_TYPE)) {\n- queryAndFetchAction.execute(request, scrollId, listener);\n- } else {\n- throw new IllegalArgumentException(\"Scroll id type [\" + scrollId.getType() + \"] unrecognized\");\n+ AbstractAsyncAction action;\n+ switch (scrollId.getType()) {\n+ case QUERY_THEN_FETCH_TYPE:\n+ action = new SearchScrollQueryThenFetchAsyncAction(logger, clusterService, searchService,\n+ searchPhaseController, request, scrollId, listener);\n+ break;\n+ case QUERY_AND_FETCH_TYPE:\n+ action = new SearchScrollQueryAndFetchAsyncAction(logger, clusterService, searchService,\n+ searchPhaseController, request, scrollId, listener);\n+ break;\n+ default:\n+ throw new IllegalArgumentException(\"Scroll id type [\" + scrollId.getType() + \"] unrecognized\");\n }\n+ action.start();\n } catch (Throwable e) {\n listener.onFailure(e);\n }", "filename": "core/src/main/java/org/elasticsearch/action/search/TransportSearchScrollAction.java", "status": "modified" } ] }
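
The scroll fetch phase in the SearchScrollQueryThenFetchAsyncAction diff above fans one fetch request out per shard and uses an AtomicInteger countdown to decide when the last response or failure has arrived, at which point finishHim() merges and responds exactly once. A minimal standalone sketch of that pattern, using plain Java instead of the Elasticsearch listener types (shard count and messages are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

public class CountdownFanOut {
    public static void main(String[] args) throws InterruptedException {
        int shardCount = 3;                                    // illustrative number of per-shard fetch requests
        AtomicInteger counter = new AtomicInteger(shardCount); // one outstanding slot per shard

        Runnable finishHim = () -> System.out.println("all shard responses in -> merge and respond once");

        for (int shard = 0; shard < shardCount; shard++) {
            final int id = shard;
            // Stand-in for searchService.sendExecuteFetchScroll(...) and its ActionListener.
            CompletableFuture.runAsync(() -> System.out.println("shard " + id + " responded"))
                .whenComplete((ignored, failure) -> {
                    // Both the success and the failure path decrement, so the counter always
                    // reaches zero; only the call that takes it to zero runs the finish step.
                    if (counter.decrementAndGet() == 0) {
                        finishHim.run();
                    }
                });
        }

        Thread.sleep(500); // let the pool threads finish in this toy example
    }
}
```
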
{ "body": "When using Elasticsearch 2.2.0 and the cloud-azure plugin. When you use the following settings:\n\nhttps://www.elastic.co/guide/en/elasticsearch/plugins/2.2/cloud-azure-repository.html\n\n``` yaml\ncloud:\n azure:\n storage:\n my_account:\n account: your_azure_storage_account\n key: your_azure_storage_key\n```\n\nWhen you try to execute the following API you get Unknown [repository] type [azure]:\n\n``` sh\n[msimos@msi-gs60 elasticsearch-2.2.0]$ curl -XPUT http://localhost:9200/_snapshot/mybackup?pretty -d '{\n\"type\": \"azure\"\n}'\n```\n\n``` js\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"repository_exception\",\n \"reason\" : \"[mybackup] failed to create repository\"\n } ],\n \"type\" : \"repository_exception\",\n \"reason\" : \"[mybackup] failed to create repository\",\n \"caused_by\" : {\n \"type\" : \"illegal_argument_exception\",\n \"reason\" : \"Unknown [repository] type [azure]\"\n }\n },\n \"status\" : 500\n}\n```\n\n``` sh\n[msimos@msi-gs60 elasticsearch-2.2.0]$ curl -XGE http://localhost:9200/_cat/plugins?v\nname component version type url \nnode01 cloud-azure 2.2.0 j \nnode01 license 2.2.0 j \nnode01 shield 2.2.0 j \n```\n\nIn the elasticsearch log file you see this error:\n\n```\n[2016-02-19 10:54:47,357][DEBUG][cloud.azure ] [node01] starting azure services\n[2016-02-19 10:54:47,357][DEBUG][cloud.azure ] [node01] azure repository is not set using [repositories.azure.account] and [cloud.azure.storage.key] properties\n```\n", "comments": [ { "body": "Closed with 7ffd6aa (2.3.0) and 571f425 (2.2.1)\n", "created_at": "2016-02-25T12:49:59Z" } ], "number": 16734, "title": "Unknown [repository] type [azure] error with Elasticsearch 2.2 & cloud-azure plugin" }
{ "body": "This regression has been introduced in 2.2.0 by #13779 where we removed support for `cloud.azure.storage.account` and replaced by `cloud.azure.storage.my_account.account`.\nBut Azure plugin tries to detect when it starts if all needed settings to use an `azure` repository have been set. If not the case, the plugin does not expose `azure` as an available repository.\n\nSadly, this check has been badly updated in 2.2.0 so it can never find the expected settings to start correctly.\n\nThis gives the following effect:\n\n``` yaml\ncloud:\n azure:\n storage:\n my_account:\n account: your_azure_storage_account\n key: your_azure_storage_key\n```\n\nWhen you try to execute the following API you get Unknown [repository] type [azure]:\n\n``` sh\n[msimos@msi-gs60 elasticsearch-2.2.0]$ curl -XPUT http://localhost:9200/_snapshot/mybackup?pretty -d '{\n\"type\": \"azure\"\n}'\n```\n\n``` js\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"repository_exception\",\n \"reason\" : \"[mybackup] failed to create repository\"\n } ],\n \"type\" : \"repository_exception\",\n \"reason\" : \"[mybackup] failed to create repository\",\n \"caused_by\" : {\n \"type\" : \"illegal_argument_exception\",\n \"reason\" : \"Unknown [repository] type [azure]\"\n }\n },\n \"status\" : 500\n}\n```\n\n``` sh\n[msimos@msi-gs60 elasticsearch-2.2.0]$ curl -XGE http://localhost:9200/_cat/plugins?v\nname component version type url\nnode01 cloud-azure 2.2.0 j\nnode01 license 2.2.0 j\nnode01 shield 2.2.0 j\n```\n\nIn the elasticsearch log file you see this error:\n\n```\n[2016-02-19 10:54:47,357][DEBUG][cloud.azure ] [node01] starting azure services\n[2016-02-19 10:54:47,357][DEBUG][cloud.azure ] [node01] azure repository is not set using [repositories.azure.account] and [cloud.azure.storage.key] properties\n```\n\nCloses #16734\n\nNote that this does not happen in master branch because we split the azure plugin and we always define `azure` as a repository when the `repository-azure` plugin starts.\n", "number": 16747, "review_comments": [ { "body": "It looks like BASE_PATH can be removed as well. \n", "created_at": "2016-02-24T21:58:08Z" }, { "body": "Agreed. I'm still wondering if we should actually support a global `base_path` setting which was the original intention (so I think this is a bug).\nOr if we should only support `base_path` for individual repository.\n\nWDYT?\n", "created_at": "2016-02-24T22:19:07Z" }, { "body": "I think we should support it for individual repositories. Not really sure what would be a use case of supporting it globally.\n", "created_at": "2016-02-24T22:25:34Z" }, { "body": "It was just for integration tests actually. That way I was able to define this globally and run IT in their own random path name.\nLet's remove it than. Thanks!\n", "created_at": "2016-02-24T22:26:59Z" } ], "title": "Fix Unknown [repository] type [azure] error with 2.2.0" }
{ "commits": [ { "message": "Fix Unknown [repository] type [azure] error with 2.2.0\n\nThis regression has been introduced in 2.2.0 by #13779 where we removed support for `cloud.azure.storage.account` and replaced by `cloud.azure.storage.my_account.account`.\nBut Azure plugin tries to detect when it starts if all needed settings to use an `azure` repository have been set. If not the case, the plugin does not expose `azure` as an available repository.\n\nSadly, this check has been badly updated in 2.2.0 so it can never find the expected settings to start correctly.\n\nThis gives the following effect:\n\n```yaml\ncloud:\n azure:\n storage:\n my_account:\n account: your_azure_storage_account\n key: your_azure_storage_key\n```\n\nWhen you try to execute the following API you get Unknown [repository] type [azure]:\n\n```sh\n[msimos@msi-gs60 elasticsearch-2.2.0]$ curl -XPUT http://localhost:9200/_snapshot/mybackup?pretty -d '{\n\"type\": \"azure\"\n}'\n```\n\n```js\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"repository_exception\",\n \"reason\" : \"[mybackup] failed to create repository\"\n } ],\n \"type\" : \"repository_exception\",\n \"reason\" : \"[mybackup] failed to create repository\",\n \"caused_by\" : {\n \"type\" : \"illegal_argument_exception\",\n \"reason\" : \"Unknown [repository] type [azure]\"\n }\n },\n \"status\" : 500\n}\n```\n\n```sh\n[msimos@msi-gs60 elasticsearch-2.2.0]$ curl -XGE http://localhost:9200/_cat/plugins?v\nname component version type url\nnode01 cloud-azure 2.2.0 j\nnode01 license 2.2.0 j\nnode01 shield 2.2.0 j\n```\nIn the elasticsearch log file you see this error:\n\n```\n[2016-02-19 10:54:47,357][DEBUG][cloud.azure ] [node01] starting azure services\n[2016-02-19 10:54:47,357][DEBUG][cloud.azure ] [node01] azure repository is not set using [repositories.azure.account] and [cloud.azure.storage.key] properties\n```\n\nCloses #16734" } ], "files": [ { "diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.cloud.azure.storage.AzureStorageService;\n import org.elasticsearch.cloud.azure.storage.AzureStorageService.Storage;\n import org.elasticsearch.cloud.azure.storage.AzureStorageServiceImpl;\n+import org.elasticsearch.cloud.azure.storage.AzureStorageSettings;\n import org.elasticsearch.cloud.azure.storage.AzureStorageSettingsFilter;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.AbstractModule;\n@@ -145,11 +146,10 @@ public static boolean isSnapshotReady(Settings settings, ESLogger logger) {\n return false;\n }\n \n- if (isPropertyMissing(settings, Storage.ACCOUNT) ||\n- isPropertyMissing(settings, Storage.KEY)) {\n- logger.debug(\"azure repository is not set using [{}] and [{}] properties\",\n- Storage.ACCOUNT,\n- Storage.KEY);\n+ if (AzureStorageSettings.parse(settings).v1() == null) {\n+ logger.debug(\"azure repository is not set using [{}xxx.account] and [{}xxx.key] properties\",\n+ Storage.PREFIX,\n+ Storage.PREFIX);\n return false;\n }\n ", "filename": "plugins/cloud-azure/src/main/java/org/elasticsearch/cloud/azure/AzureModule.java", "status": "modified" }, { "diff": "@@ -44,11 +44,7 @@ final class Storage {\n \n public static final String TIMEOUT = \"cloud.azure.storage.timeout\";\n \n- public static final String ACCOUNT = \"repositories.azure.account\";\n- public static final String LOCATION_MODE = \"repositories.azure.location_mode\";\n- public static final String KEY = \"cloud.azure.storage.key\";\n public static final String CONTAINER = \"repositories.azure.container\";\n- public static final String 
BASE_PATH = \"repositories.azure.base_path\";\n public static final String CHUNK_SIZE = \"repositories.azure.chunk_size\";\n public static final String COMPRESS = \"repositories.azure.compress\";\n }", "filename": "plugins/cloud-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageService.java", "status": "modified" }, { "diff": "@@ -83,8 +83,8 @@ protected Settings nodeSettings(int nodeOrdinal) {\n .put(Storage.CONTAINER, \"snapshots\");\n \n // We use sometime deprecated settings in tests\n- builder.put(Storage.ACCOUNT, \"mock_azure_account\")\n- .put(Storage.KEY, \"mock_azure_key\");\n+ builder.put(Storage.ACCOUNT_DEPRECATED, \"mock_azure_account\")\n+ .put(Storage.KEY_DEPRECATED, \"mock_azure_key\");\n \n return builder.build();\n }", "filename": "plugins/cloud-azure/src/test/java/org/elasticsearch/cloud/azure/AbstractAzureRepositoryServiceTestCase.java", "status": "modified" }, { "diff": "@@ -65,8 +65,8 @@ public void testParseUniqueSettings() {\n \n public void testDeprecatedSettings() {\n Settings settings = Settings.builder()\n- .put(Storage.ACCOUNT, \"myaccount1\")\n- .put(Storage.KEY, \"mykey1\")\n+ .put(Storage.ACCOUNT_DEPRECATED, \"myaccount1\")\n+ .put(Storage.KEY_DEPRECATED, \"mykey1\")\n .build();\n \n Tuple<AzureStorageSettings, Map<String, AzureStorageSettings>> tuple = AzureStorageSettings.parse(settings);", "filename": "plugins/cloud-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSettingsParserTest.java", "status": "modified" } ] }
{ "body": "Hi,\n\nWith the `indices` cat API, I can list my indices, either _open_ or _close_.\n\n```\n→ curl http://127.0.0.1:9200/_cat/indices\n close partner_results-20150930\nyellow open partner_requests-20150929 1 1 17 0 230.3kb 230.3kb\nyellow open partner_results-20151102 5 1 4311 0 1.5mb 1.5mb\nyellow open search_requests-20150902 5 1 10 0 51.3kb 51.3kb\n```\n\nIf I filter my results, I no longer see _close_ indices : \n\n```\n→ curl http://127.0.0.1:9200/_cat/indices/\\*_results-\\*\nyellow open partner_results-20151102 5 1 4311 0 1.5mb 1.5mb\n```\n\nI didn't find the documentation about that so I don't know if it a bug or an undocumented behavior.\n\nI thought there might be an argument for `/_cat/indices` to show `all/open/close` indices, but I haven't found anything.\n", "comments": [ { "body": "I would like to start contributing by fixing this one, if this can be assigned to me!\n", "created_at": "2016-02-19T09:48:51Z" }, { "body": "> I would like to start contributing by fixing this one, if this can be assigned to me!\n\nGrumble grumble it looks like you can't assign non-\"elastic members\" to issues. I was wrong! Anyway, consider it claimed.\n", "created_at": "2016-02-19T15:12:02Z" }, { "body": "@nik9000 In such a case, I think we can remove the `adoptme` label. I just updated the issue.\n", "created_at": "2016-02-19T15:26:07Z" }, { "body": "> @nik9000 In such a case, I think we can remove the adoptme label. I just updated the issue.\n\nThanks!\n", "created_at": "2016-02-19T15:28:49Z" } ], "number": 16419, "title": "Indices cat API doesn't list closed indices when filtered" }
{ "body": "Fix for #16419\n", "number": 16731, "review_comments": [], "title": "Include closed indices as well for filter" }
{ "commits": [ { "message": "Include closed indices as well for filter" } ], "files": [ { "diff": "@@ -75,6 +75,7 @@ public void doRequest(final RestRequest request, final RestChannel channel, fina\n final String[] indices = Strings.splitStringByCommaToArray(request.param(\"index\"));\n final ClusterStateRequest clusterStateRequest = new ClusterStateRequest();\n clusterStateRequest.clear().indices(indices).metaData(true);\n+ clusterStateRequest.indicesOptions(IndicesOptions.strictExpand());\n clusterStateRequest.local(request.paramAsBoolean(\"local\", clusterStateRequest.local()));\n clusterStateRequest.masterNodeTimeout(request.paramAsTime(\"master_timeout\", clusterStateRequest.masterNodeTimeout()));\n ", "filename": "core/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java", "status": "modified" } ] }
{ "body": "In the debian init script, here:\nhttps://github.com/elastic/elasticsearch/blob/master/distribution/deb/src/main/packaging/init.d/elasticsearch#L174\n\nThe check does not work as expected and exits before the pid file is filled.\nxargs exit successfully because nothing is sent to it's stdin:\n\n```\n$ cat /dev/null | xargs kill -0 ; echo $?\n0\n```\n", "comments": [], "number": 16717, "title": "deb's init script not waiting for pid before exiting" }
{ "body": "Closes #16717\n", "number": 16718, "review_comments": [], "title": "Fix waiting for pidfile" }
{ "commits": [ { "message": "Fix waiting for pidfile\n\nCloses #16717" } ], "files": [ { "diff": "@@ -172,7 +172,7 @@ case \"$1\" in\n \t\ti=0\n \t\ttimeout=10\n \t\t# Wait for the process to be properly started before exiting\n-\t\tuntil { cat \"$PID_FILE\" | xargs kill -0; } >/dev/null 2>&1\n+\t\tuntil { kill -0 `cat \"$PID_FILE\"`; } >/dev/null 2>&1\n \t\tdo\n \t\t\tsleep 1\n \t\t\ti=$(($i + 1))", "filename": "distribution/deb/src/main/packaging/init.d/elasticsearch", "status": "modified" } ] }
{ "body": "A standard GET by ID results in a single log entry in the audit log. Search (_search) results in two identical log entries. Every time. \n- Default ES 1.6.0 installation. \n- Added latest Shield. \n- Added \"search_admin\" user. \n\nEverything works great out of the box except I get _search audits duplicated in the log. ONLY \"SearchRequest\" audits. I haven't found any other api yet which results in this strange behavior.\n\n```\n[2015-06-16 18:37:24,705] [esdev-shieldpoc01] [transport] [access_granted] origin_type=[rest], origin_address=[/10.30.24.36:55308], principal=[search_admin], action=[indices:data/read/search], indices=[test], request=[SearchRequest]\n[2015-06-16 18:37:24,705] [esdev-shieldpoc01] [transport] [access_granted] origin_type=[rest], origin_address=[/10.30.24.36:55308], principal=[search_admin], action=[indices:data/read/search], indices=[test], request=[SearchRequest]\n```\n", "comments": [ { "body": "@jaymode please could you take a look?\n", "created_at": "2015-06-18T12:58:41Z" }, { "body": "@bobbyhubbard thanks for reporting this. This behavior isn't ideal and we'll look at if/how we can improve this. What you're seeing is a side effect of how auditing is implemented and how search requests are executed. Shield audits the individual actions that are executed by elasticsearch. \n\nThe `indices:data/read/search` action name corresponds to multiple actions for search. The reason that you are seeing the message twice, is the API executes a `TransportSearchAction`, which in turn executes a action corresponding to the search type; the default type `query_then_fetch` corresponds to the `TransportSearchQueryThenFetchAction` and the execution of this action is what causes the second log message.\n", "created_at": "2015-06-19T12:53:30Z" }, { "body": "This seems to have to do with the fact that the main `TransportSearchAction` uses an inner `TransportSearchTypeAction` (there is a different impl for each search type). Last time I checked I noticed some code that gets executed twice while it shouldn't (e.g. request validation), and a side effect is also the double audit log line. Maybe this inner action shouldn't be a transport action after all? The two (outer and inner) execute calls happen on the same node all the time, seems like also the transport handler registration that happens in `TransportSearchTypeAction` has no actual effect as it's already registered by `TransportSearchAction` with same name, so only used for audit logging.\n", "created_at": "2015-06-19T14:26:12Z" }, { "body": "This bit us again today when someone else in our org setup a new log drain from shield. It reported nearly double the number of rest requests as expected. Then I remembered this issue... The workaround is simple enough...to hash each message (fingerprint in logstash) and use the hash as the message id. But this WILL bite every single Shield customer who is measuring and auditing rest calls. (How many are reporting invalid results now because they dont even know about this bug like one team here almost did?)\n", "created_at": "2015-11-18T23:20:30Z" } ], "number": 11710, "title": "Shield duplicates _search api audits" }
{ "body": "All the actions that extend TransportSearchTypeAction are subactions of the main TransportSearchAction. The main one is and should be a transport action, register request handlers, support request and response filtering etc. but the subactions shouldn't as that becomes just double work. At the moment each search request goes through validation and filters twice, one as part of the main action, and the second one as part of the subaction execution. The subactions don't need to extend TransportAction, but can be simple support classes, as they are always executed on the same node as their main action.\n\nThis commit modifies TransportSearchTypeAction to not extend TransportAction but simply AbstractComponent.\n\nCloses #11710\n", "number": 16701, "review_comments": [], "title": "Internal: TransportSearchTypeAction shouldn't extend TransportAction" }
{ "commits": [ { "message": "Internal: TransportSearchTypeAction shouldn't extend TransportAction\n\nAll the actions that extend TransportSearchTypeAction are subactions of the main TransportSearchAction. The main one is and should be a transport action, register request handlers, support request and response filtering etc. but the subactions shouldn't as that becomes just double work. At the moment each search request goes through validation and filters twice, one as part of the main action, and the second one as part of the subaction execution. The subactions don't need to extend TransportAction, but can be simple support classes, as they are always executed on the same node as their main action.\n\nThis commit modifies TransportSearchTypeAction to not extend TransportAction but simply AbstractComponent.\n\nCloses #11710" } ], "files": [ { "diff": "@@ -24,7 +24,6 @@\n import org.elasticsearch.action.search.ReduceSearchPhaseException;\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchResponse;\n-import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n@@ -52,12 +51,12 @@ public class TransportSearchDfsQueryAndFetchAction extends TransportSearchTypeAc\n @Inject\n public TransportSearchDfsQueryAndFetchAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n SearchServiceTransportAction searchService, SearchPhaseController searchPhaseController,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, threadPool, clusterService, searchService, searchPhaseController, actionFilters, indexNameExpressionResolver);\n+ IndexNameExpressionResolver indexNameExpressionResolver) {\n+ super(settings, threadPool, clusterService, searchService, searchPhaseController, indexNameExpressionResolver);\n }\n \n @Override\n- protected void doExecute(SearchRequest searchRequest, ActionListener<SearchResponse> listener) {\n+ public void execute(SearchRequest searchRequest, ActionListener<SearchResponse> listener) {\n new AsyncAction(searchRequest, listener).start();\n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/search/type/TransportSearchDfsQueryAndFetchAction.java", "status": "modified" }, { "diff": "@@ -27,7 +27,6 @@\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchResponse;\n-import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n@@ -58,12 +57,12 @@ public class TransportSearchDfsQueryThenFetchAction extends TransportSearchTypeA\n @Inject\n public TransportSearchDfsQueryThenFetchAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n SearchServiceTransportAction searchService, SearchPhaseController searchPhaseController,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, threadPool, clusterService, searchService, searchPhaseController, actionFilters, indexNameExpressionResolver);\n+ IndexNameExpressionResolver indexNameExpressionResolver) {\n+ super(settings, threadPool, clusterService, searchService, 
searchPhaseController, indexNameExpressionResolver);\n }\n \n @Override\n- protected void doExecute(SearchRequest searchRequest, ActionListener<SearchResponse> listener) {\n+ public void execute(SearchRequest searchRequest, ActionListener<SearchResponse> listener) {\n new AsyncAction(searchRequest, listener).start();\n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/search/type/TransportSearchDfsQueryThenFetchAction.java", "status": "modified" }, { "diff": "@@ -24,7 +24,6 @@\n import org.elasticsearch.action.search.ReduceSearchPhaseException;\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchResponse;\n-import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n@@ -49,12 +48,12 @@ public class TransportSearchQueryAndFetchAction extends TransportSearchTypeActio\n @Inject\n public TransportSearchQueryAndFetchAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n SearchServiceTransportAction searchService, SearchPhaseController searchPhaseController,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, threadPool, clusterService, searchService, searchPhaseController, actionFilters, indexNameExpressionResolver);\n+ IndexNameExpressionResolver indexNameExpressionResolver) {\n+ super(settings, threadPool, clusterService, searchService, searchPhaseController, indexNameExpressionResolver);\n }\n \n @Override\n- protected void doExecute(SearchRequest searchRequest, ActionListener<SearchResponse> listener) {\n+ public void execute(SearchRequest searchRequest, ActionListener<SearchResponse> listener) {\n new AsyncAction(searchRequest, listener).start();\n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/search/type/TransportSearchQueryAndFetchAction.java", "status": "modified" }, { "diff": "@@ -26,7 +26,6 @@\n import org.elasticsearch.action.search.ReduceSearchPhaseException;\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchResponse;\n-import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n@@ -54,12 +53,12 @@ public class TransportSearchQueryThenFetchAction extends TransportSearchTypeActi\n @Inject\n public TransportSearchQueryThenFetchAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n SearchServiceTransportAction searchService, SearchPhaseController searchPhaseController,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, threadPool, clusterService, searchService, searchPhaseController, actionFilters, indexNameExpressionResolver);\n+ IndexNameExpressionResolver indexNameExpressionResolver) {\n+ super(settings, threadPool, clusterService, searchService, searchPhaseController, indexNameExpressionResolver);\n }\n \n @Override\n- protected void doExecute(SearchRequest searchRequest, ActionListener<SearchResponse> listener) {\n+ public void execute(SearchRequest searchRequest, ActionListener<SearchResponse> listener) {\n new AsyncAction(searchRequest, listener).start();\n }\n ", "filename": 
"core/src/main/java/org/elasticsearch/action/search/type/TransportSearchQueryThenFetchAction.java", "status": "modified" }, { "diff": "@@ -25,13 +25,10 @@\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.NoShardAvailableActionException;\n import org.elasticsearch.action.search.ReduceSearchPhaseException;\n-import org.elasticsearch.action.search.SearchAction;\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.search.ShardSearchFailure;\n-import org.elasticsearch.action.support.ActionFilters;\n-import org.elasticsearch.action.support.TransportAction;\n import org.elasticsearch.action.support.TransportActions;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n@@ -43,6 +40,7 @@\n import org.elasticsearch.cluster.routing.ShardIterator;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.AtomicArray;\n import org.elasticsearch.search.SearchPhaseResult;\n@@ -54,9 +52,7 @@\n import org.elasticsearch.search.internal.ShardSearchTransportRequest;\n import org.elasticsearch.search.query.QuerySearchResult;\n import org.elasticsearch.search.query.QuerySearchResultProvider;\n-import org.elasticsearch.tasks.TaskManager;\n import org.elasticsearch.threadpool.ThreadPool;\n-import org.elasticsearch.transport.TransportService;\n \n import java.util.List;\n import java.util.Map;\n@@ -68,23 +64,32 @@\n /**\n *\n */\n-public abstract class TransportSearchTypeAction extends TransportAction<SearchRequest, SearchResponse> {\n+public abstract class TransportSearchTypeAction extends AbstractComponent {\n+\n+ protected final ThreadPool threadPool;\n \n protected final ClusterService clusterService;\n \n protected final SearchServiceTransportAction searchService;\n \n protected final SearchPhaseController searchPhaseController;\n \n+ protected final IndexNameExpressionResolver indexNameExpressionResolver;\n+\n public TransportSearchTypeAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n SearchServiceTransportAction searchService, SearchPhaseController searchPhaseController,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, SearchAction.NAME, threadPool, actionFilters, indexNameExpressionResolver, clusterService.getTaskManager());\n+ IndexNameExpressionResolver indexNameExpressionResolver) {\n+\n+ super(settings);\n+ this.threadPool = threadPool;\n this.clusterService = clusterService;\n this.searchService = searchService;\n this.searchPhaseController = searchPhaseController;\n+ this.indexNameExpressionResolver = indexNameExpressionResolver;\n }\n \n+ public abstract void execute(SearchRequest searchRequest, ActionListener<SearchResponse> listener);\n+\n protected abstract class BaseAsyncAction<FirstResult extends SearchPhaseResult> extends AbstractAsyncAction {\n \n protected final ActionListener<SearchResponse> listener;", "filename": "core/src/main/java/org/elasticsearch/action/search/type/TransportSearchTypeAction.java", "status": "modified" } ] }
{ "body": "Hi Team,\n\nI have got the following error when using simple_query_string with multiple fields containing numeric field.\nVersion: 2.2.0\nOS: Windows\n\n**Sample data:**\n`curl -XPUT http://localhost:9200/blog/post/1 -d '{\"foo\":123, \"bar\":\"xyzzy\"}'`\n`curl -XPUT http://localhost:9200/blog/post/2 -d '{\"foo\":456, \"bar\":\"xyzzy\"}'`\n\n**Use simple_query_string with multiple fields**\n`curl -XGET http://localhost:9200/blog/post/_search?pretty=1 -d '{\"query\":{\"simple_query_string\":{\"query\":\"123\",\"fields\":[\"foo\",\"bar\"]}}}'`\n\n**Error**\n`{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"illegal_argument_exception\",\n \"reason\": \"Illegal shift value, must be 0..63; got shift=2147483647\"\n }\n ],\n \"type\": \"search_phase_execution_exception\",\n \"reason\": \"all shards failed\",\n \"phase\": \"query\",\n \"grouped\": true,\n \"failed_shards\": [\n {\n \"shard\": 0,\n \"index\": \"blog\",\n \"node\": \"6jVhZCw3Tau3cP5VtEg8Tw\",\n \"reason\": {\n \"type\": \"illegal_argument_exception\",\n \"reason\": \"Illegal shift value, must be 0..63; got shift=2147483647\"\n }\n }\n ]\n },\n \"status\": 400\n}`\n\n**multi_match query works with the same fields**\n`curl -XGET http://localhost:9200/blog/post/_search?pretty=1 -d '{\"query\":{\"bool\":{\"must\":[{\"multi_match\":{\"query\":\"123\",\"type\":\"cross_fields\",\"fields\":[\"foo\",\"bar\"],\"operator\":\"and\"}}]}}}'` \n\nIssue similar to #15860\n", "comments": [ { "body": "I think what happens is the string value is handled by the hidden analyzer for long fields, which expects a Long. We should be using the field type for each field to call the `value` function, which in the case of `long` field type will call `Long.parseLong`. @dakrone thoughts?\n", "created_at": "2016-02-10T18:00:08Z" }, { "body": "@rjernst yeah, I think that is a good idea. I will look into fixing this.\n", "created_at": "2016-02-11T20:43:15Z" }, { "body": "Pushed a fix for this to the 2.2 branch that will be in 2.2.1, and it has been fixed in Lucene 5.5, which will be in the 2.3+ releases of ES\n", "created_at": "2016-02-18T23:27:43Z" } ], "number": 16577, "title": "simple_query_string gives java.lang.IllegalArgumentException: Illegal shift value, must be 0..63; got shift=2147483647" }
{ "body": "For Numeric types, if the query's text is passed to create a boolean\nquery, the 'Long' analyzer can return an exception similar to:\n\n```\nIllegalArgumentException: Illegal shift value, must be 0..63; got shift=2147483647\n```\n\nThis change looks up the `MappedFieldType` for the specified fields (if\navailable) and uses the `.termQuery` function to create the query from\nthe string, instead of analyzing it by creating a new boolean query by\ndefault.\n\nResolves #16577\n\nThis is fixed in 2.3+ by Lucene 5.5\n", "number": 16686, "review_comments": [], "title": "Use MappedFieldType.termQuery to generate simple_query_string queries" }
{ "commits": [ { "message": "Use MappedFieldType.termQuery to generate simple_query_string queries\n\nFor Numeric types, if the query's text is passed to create a boolean\nquery, the 'Long' analyzer can return an exception similar to:\n\n```\nIllegalArgumentException: Illegal shift value, must be 0..63; got shift=2147483647\n```\n\nThis change looks up the `MappedFieldType` for the specified fields (if\navailable) and uses the `.termQuery` function to create the query from\nthe string, instead of analyzing it by creating a new boolean query by\ndefault.\n\nResolves #16577\n\nThis is fixed in 2.3+ by Lucene 5.5" } ], "files": [ { "diff": "@@ -362,7 +362,7 @@ private void checkFieldUniqueness(String type, Collection<ObjectMapper> objectMa\n // Before 3.0 some metadata mappers are also registered under the root object mapper\n // So we avoid false positives by deduplicating mappers\n // given that we check exact equality, this would still catch the case that a mapper\n- // is defined under the root object \n+ // is defined under the root object\n Collection<FieldMapper> uniqueFieldMappers = Collections.newSetFromMap(new IdentityHashMap<FieldMapper, Boolean>());\n uniqueFieldMappers.addAll(fieldMappers);\n fieldMappers = uniqueFieldMappers;", "filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperService.java", "status": "modified" }, { "diff": "@@ -25,6 +25,7 @@\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.*;\n import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n \n import java.io.IOException;\n import java.util.Locale;\n@@ -37,11 +38,15 @@\n public class SimpleQueryParser extends org.apache.lucene.queryparser.simple.SimpleQueryParser {\n \n private final Settings settings;\n+ private final Map<String, MappedFieldType> fieldToType;\n \n /** Creates a new parser with custom flags used to enable/disable certain features. 
*/\n- public SimpleQueryParser(Analyzer analyzer, Map<String, Float> weights, int flags, Settings settings) {\n+ public SimpleQueryParser(Analyzer analyzer, Map<String, Float> weights,\n+ Map<String, MappedFieldType> fieldToType, int flags,\n+ Settings settings) {\n super(analyzer, weights, flags);\n this.settings = settings;\n+ this.fieldToType = fieldToType;\n }\n \n /**\n@@ -60,7 +65,15 @@ public Query newDefaultQuery(String text) {\n bq.setDisableCoord(true);\n for (Map.Entry<String,Float> entry : weights.entrySet()) {\n try {\n- Query q = createBooleanQuery(entry.getKey(), text, super.getDefaultOperator());\n+ Query q = null;\n+ MappedFieldType mpt = fieldToType.get(entry.getKey());\n+ if (mpt != null && mpt.isNumeric()) {\n+ // If the field is numeric, it needs to use a different query type instead of\n+ // trying to analyze a 'string' as a 'long\n+ q = mpt.termQuery(text, null);\n+ } else {\n+ q = createBooleanQuery(entry.getKey(), text, super.getDefaultOperator());\n+ }\n if (q != null) {\n q.setBoost(entry.getValue());\n bq.add(q, BooleanClause.Occur.SHOULD);", "filename": "core/src/main/java/org/elasticsearch/index/query/SimpleQueryParser.java", "status": "modified" }, { "diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.common.util.LocaleUtils;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.MapperService;\n \n import java.io.IOException;\n import java.util.Collections;\n@@ -88,7 +89,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n \n String currentFieldName = null;\n String queryBody = null;\n- float boost = 1.0f; \n+ float boost = 1.0f;\n String queryName = null;\n String minimumShouldMatch = null;\n Map<String, Float> fieldsAndWeights = null;\n@@ -203,7 +204,18 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n if (fieldsAndWeights == null) {\n fieldsAndWeights = Collections.singletonMap(parseContext.defaultField(), 1.0F);\n }\n- SimpleQueryParser sqp = new SimpleQueryParser(analyzer, fieldsAndWeights, flags, sqsSettings);\n+\n+ // Fetch each mapped type for the fields specified\n+ Map<String, MappedFieldType> fieldToType = new HashMap<>();\n+ MapperService ms = parseContext.mapperService();\n+ for (String fieldName : fieldsAndWeights.keySet()) {\n+ MappedFieldType mapping = ms.fullName(fieldName);\n+ if (mapping != null) {\n+ fieldToType.put(fieldName, mapping);\n+ }\n+ }\n+\n+ SimpleQueryParser sqp = new SimpleQueryParser(analyzer, fieldsAndWeights, fieldToType, flags, sqsSettings);\n \n if (defaultOperator != null) {\n sqp.setDefaultOperator(defaultOperator);", "filename": "core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringParser.java", "status": "modified" }, { "diff": "@@ -167,6 +167,19 @@ public void testSimpleQueryStringLowercasing() {\n assertHitCount(searchResponse, 0l);\n }\n \n+ // See: https://github.com/elastic/elasticsearch/issues/16577\n+ public void testSimpleQueryStringUsesFieldAnalyzer() throws Exception {\n+ client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"foo\", 123, \"bar\", \"abc\").get();\n+ client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"foo\", 234, \"bar\", \"bcd\").get();\n+\n+ refresh();\n+\n+ SearchResponse searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryStringQuery(\"123\").field(\"foo\").field(\"bar\")).get();\n+ assertHitCount(searchResponse, 1L);\n+ assertSearchHits(searchResponse, \"1\");\n+ }\n+\n 
@Test\n public void testQueryStringLocale() {\n createIndex(\"test\");", "filename": "core/src/test/java/org/elasticsearch/search/query/SimpleQueryStringIT.java", "status": "modified" } ] }
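
The added `testSimpleQueryStringUsesFieldAnalyzer` test translates directly into the REST reproduction from the issue. A hedged curl transcription of it, where `foo` is mapped as a numeric field and `bar` as a string (index and type names come from the test; responses are not captured from a run):

```sh
curl -XPUT 'http://localhost:9200/test/type1/1' -d '{"foo": 123, "bar": "abc"}'
curl -XPUT 'http://localhost:9200/test/type1/2' -d '{"foo": 234, "bar": "bcd"}'
curl -XPOST 'http://localhost:9200/test/_refresh'

# Before the fix this tripped the numeric analyzer ("Illegal shift value, must be
# 0..63..."); with MappedFieldType.termQuery used for the numeric field, the query
# parses and only document 1 is expected to match.
curl -XGET 'http://localhost:9200/test/_search?pretty' -d '{
  "query": {
    "simple_query_string": { "query": "123", "fields": ["foo", "bar"] }
  }
}'
```
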
{ "body": "http://build-us-00.elastic.co/job/es_core_master_metal/12355/\n\n```\nelastic/elasticsearch\nmaster\n\nSUMMARY\n\nPlease direct your attention to the attached stacktraces associated with some or all of these tests:\n\norg.elasticsearch.routing.SimpleRoutingIT testRequiredRoutingMapping\nFAILED: 1\nERROR: 0\nSKIPPED: 80\nTOTAL: 5644\n\nBUILD INFO\n\nBuild 20160212182431-BE4C2D89\nLog https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+periodic/406/console\nDuration 12m 33s (753008ms)\nStarted 2016-02-12T18:24:32.000Z\nEnded 2016-02-12T18:37:05.008Z\nExit Code 1\nHost slave-3740eaed (up 43 days)\nOS Ubuntu 15.04, Linux 3.19.0-42-generic\nSpecs 4 CPUs, 15.67GB RAM\njava.version 1.8.0_45-internal\njava.vm.name OpenJDK 64-Bit Server VM\njava.vm.version 25.45-b02\njava.runtime.version 1.8.0_45-internal-b14\njava.home /usr/lib/jvm/java-8-openjdk-amd64\n```\n\nFailure replicates for me.\n\nDue to https://github.com/elastic/elasticsearch/pull/10136, when a _routing is defined in the mapping, \"Broadcast Deletes\" are no longer allowed. The `testRequiredRoutingMapping` test was updated in the single-delete case so that `isExists(), equalTo(true)` each time through the loop, since the document should not be deleted.\n\nHowever, it looks like the Bulk Delete case wasn't changed, so the test is still trying to verify that the bulk delete \"broadcasted\" despite no _routing being specified:\n\n``` java\nfor (int i = 0; i < 5; i++) {\n try {\n client().prepareGet(indexOrAlias(), \"type1\", \"1\").execute().actionGet().isExists();\n fail();\n } catch (RoutingMissingException e) {\n assertThat(e.status(), equalTo(RestStatus.BAD_REQUEST));\n assertThat(e.getMessage(), equalTo(\"routing is required for [test]/[type1]/[1]\"));\n }\n assertThat(\n client().prepareGet(indexOrAlias(), \"type1\", \"1\")\n .setRouting(\"0\")\n .execute()\n .actionGet()\n .isExists(),\n equalTo(false)); // <-- Here\n}\n```\n\nI assumed this was just a test problem, so I updated it to match the single-delete case (`isExists(), equalTo(true))`. But that begins to fail under other seeds:\n\n```\ngradle :core:integTest -Dtests.seed=42425D480F1F72E8 -Dtests.class=org.elasticsearch.routing.SimpleRoutingIT -Dtests.method=\"testRequiredRoutingMapping\" -Dtests.locale=es-CL -Dtests.timezone=AET\n```\n\nSo either:\n1. The test was correct, Bulk broadcast deletes without _routing are allowed, and the document isn't being deleted for some reason\n2. Or the test is incorrect, bulk broadcast deletes _shouldn't_ be allowed, but they are working anyway under certain conditions.\n\n/cc @javanna any thoughts?\n", "comments": [ { "body": "> Or the test is incorrect, bulk broadcast deletes shouldn't be allowed, but they are working anyway under certain conditions.\n\nI'd say the test is incorrect. Could they be working when the custom routing just happens to map to the same shard as default routing would?\n", "created_at": "2016-02-13T11:56:56Z" }, { "body": "Wow, the change was made almost a year ago, yet I see there is something missing in bulk. Seems like bulk has always worked differently around routing required hence why the test was left unchanged. This looks like an actual bug, digging and coming up with a PR.\n", "created_at": "2016-02-15T09:43:45Z" }, { "body": "I did some digging, here are my findings. 
\n\nAs part of #10136 I should have removed broadcast delete from bulk as well, but I didn't realize it had a completely different code path for bulk, so it was left behind, which is why the test was still testing the broadcast delete.\n\nThe leftover broadcast delete for bulk is broken in many ways though:\n- one delete item becomes multiple delete requests, one per shard. but the response item is only one, so if one fails, you'll see the failure in the bulk response only if it was the last one returned from the shards.\n- the same delete request object was reused throughout the different shards. As a result, the version in the request was updated with the result of the last delete executed, which affects the following delete operations (if the shards are on the same node) by setting a version that is not `-3` (match_any) but rather `1`. This may cause stuff like `VersionConflictEngineException[[type1][1]: version conflict, current version [2] is higher or equal to the one provided [1]]` depending on the execution order and the number of shards.\n\nI have no idea why this test started failing only recently, but the reason why it failed was that in some cases the broadcast delete on the shard that contained the document to delete caused a version conflict, which may get returned or not as part of the bulk response (anyways the response wasn't checked in the test). I think the follow-up is simply to remove any leftover of broadcast delete and return a bulk item failure in case routing is required but it is not specified.\n", "created_at": "2016-02-15T15:22:36Z" }, { "body": "This code is what role ?Thanks.\n", "created_at": "2016-02-22T07:28:29Z" } ], "number": 16645, "title": "SimpleRoutingIT.testRequiredRoutingMapping fails because a doc remains undeleted" }
{ "body": "As part of #10136 we removed the transport action for broadcast deletes in case routing is required but not specified. Bulk api worked differently though and kept on doing the broadcast delete internally in that case. This commit makes sure that delete items are marked as failed in such cases. Also the check has been moved up in the code together with the existing check for the update api, and we now make sure that the exception is the same as the one thrown for single document apis (delete/update).\n\nNote that the failure for the update api contained the wrong optype (the type of the document rather than \"update\"), that's been fixed too and tested.\n\nCloses #16645\n", "number": 16675, "review_comments": [ { "body": "this looks scary, whin can it happen?\n", "created_at": "2016-02-26T15:35:50Z" }, { "body": "should we do `else throw new AssertionError();`?\n", "created_at": "2016-02-26T15:36:34Z" }, { "body": "oh I see, because we set the request to null in case of validation issues. Can you leave a comment about this?\n", "created_at": "2016-02-26T15:37:51Z" }, { "body": "it does happen, I only made it explicit and scary, before we were relying on the instanceof check to return false, which happened only when we set the request to false. I can add a comment, sure!\n", "created_at": "2016-02-26T15:48:15Z" }, { "body": "why not, sounds good to me, will do\n", "created_at": "2016-02-26T15:48:44Z" } ], "title": "Bulk api: fail deletes when routing is required but not specified" }
{ "commits": [ { "message": "Bulk api: fail deletes when routing is required but not specified\n\nAs part of #10136 we removed the transport action for broadcast deletes in case routing is required but not specified. Bulk api worked differently though and kept on doing the broadcast delete internally in that case. This commit makes sure that delete items are marked as failed in such cases. Also the check has been moved up in the code together with the existing check for the update api, and we now make sure that the exception is the same as the one thrown for single document apis (delete/update).\n\nNote that the failure for the update api contained the wrong optype (the type of the document rather than \"update\"), that's been fixed too and tested.\n\nCloses #16645" }, { "message": "remove boolean return type from resolveRequest and needless listener arg" }, { "message": "[TEST] remove needless index operation in SimpleRoutingIT" }, { "message": "Excpetion => Exception" } ], "files": [ { "diff": "@@ -30,10 +30,12 @@\n import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;\n import org.elasticsearch.action.admin.indices.create.TransportCreateIndexAction;\n import org.elasticsearch.action.delete.DeleteRequest;\n+import org.elasticsearch.action.delete.TransportDeleteAction;\n import org.elasticsearch.action.index.IndexRequest;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.AutoCreateIndex;\n import org.elasticsearch.action.support.HandledTransportAction;\n+import org.elasticsearch.action.update.TransportUpdateAction;\n import org.elasticsearch.action.update.UpdateRequest;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n@@ -42,12 +44,9 @@\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n-import org.elasticsearch.cluster.routing.GroupShardsIterator;\n-import org.elasticsearch.cluster.routing.ShardIterator;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.AtomicArray;\n-import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexNotFoundException;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndexAlreadyExistsException;\n@@ -197,10 +196,10 @@ private boolean setResponseFailureIfIndexMatches(AtomicArray<BulkItemResponse> r\n */\n public void executeBulk(final BulkRequest bulkRequest, final ActionListener<BulkResponse> listener) {\n final long startTime = System.currentTimeMillis();\n- executeBulk(bulkRequest, startTime, listener, new AtomicArray<BulkItemResponse>(bulkRequest.requests.size()));\n+ executeBulk(bulkRequest, startTime, listener, new AtomicArray<>(bulkRequest.requests.size()));\n }\n \n- private final long buildTookInMillis(long startTime) {\n+ private long buildTookInMillis(long startTime) {\n // protect ourselves against time going backwards\n return Math.max(1, System.currentTimeMillis() - startTime);\n }\n@@ -214,33 +213,53 @@ private void executeBulk(final BulkRequest bulkRequest, final long startTime, fi\n MetaData metaData = clusterState.metaData();\n for (int i = 0; i < bulkRequest.requests.size(); i++) {\n ActionRequest request = bulkRequest.requests.get(i);\n- if (request instanceof DocumentRequest) {\n- DocumentRequest req = (DocumentRequest) request;\n-\n- if 
(addFailureIfIndexIsUnavailable(req, bulkRequest, responses, i, concreteIndices, metaData)) {\n- continue;\n+ //the request can only be null because we set it to null in the previous step, so it gets ignored\n+ if (request == null) {\n+ continue;\n+ }\n+ DocumentRequest documentRequest = (DocumentRequest) request;\n+ if (addFailureIfIndexIsUnavailable(documentRequest, bulkRequest, responses, i, concreteIndices, metaData)) {\n+ continue;\n+ }\n+ String concreteIndex = concreteIndices.resolveIfAbsent(documentRequest);\n+ if (request instanceof IndexRequest) {\n+ IndexRequest indexRequest = (IndexRequest) request;\n+ MappingMetaData mappingMd = null;\n+ if (metaData.hasIndex(concreteIndex)) {\n+ mappingMd = metaData.index(concreteIndex).mappingOrDefault(indexRequest.type());\n+ }\n+ try {\n+ indexRequest.process(metaData, mappingMd, allowIdGeneration, concreteIndex);\n+ } catch (ElasticsearchParseException | RoutingMissingException e) {\n+ BulkItemResponse.Failure failure = new BulkItemResponse.Failure(concreteIndex, indexRequest.type(), indexRequest.id(), e);\n+ BulkItemResponse bulkItemResponse = new BulkItemResponse(i, \"index\", failure);\n+ responses.set(i, bulkItemResponse);\n+ // make sure the request gets never processed again\n+ bulkRequest.requests.set(i, null);\n+ }\n+ } else if (request instanceof DeleteRequest) {\n+ try {\n+ TransportDeleteAction.resolveAndValidateRouting(metaData, concreteIndex, (DeleteRequest)request);\n+ } catch(RoutingMissingException e) {\n+ BulkItemResponse.Failure failure = new BulkItemResponse.Failure(concreteIndex, documentRequest.type(), documentRequest.id(), e);\n+ BulkItemResponse bulkItemResponse = new BulkItemResponse(i, \"delete\", failure);\n+ responses.set(i, bulkItemResponse);\n+ // make sure the request gets never processed again\n+ bulkRequest.requests.set(i, null);\n }\n \n- String concreteIndex = concreteIndices.resolveIfAbsent(req);\n- if (request instanceof IndexRequest) {\n- IndexRequest indexRequest = (IndexRequest) request;\n- MappingMetaData mappingMd = null;\n- if (metaData.hasIndex(concreteIndex)) {\n- mappingMd = metaData.index(concreteIndex).mappingOrDefault(indexRequest.type());\n- }\n- try {\n- indexRequest.process(metaData, mappingMd, allowIdGeneration, concreteIndex);\n- } catch (ElasticsearchParseException | RoutingMissingException e) {\n- BulkItemResponse.Failure failure = new BulkItemResponse.Failure(concreteIndex, indexRequest.type(), indexRequest.id(), e);\n- BulkItemResponse bulkItemResponse = new BulkItemResponse(i, \"index\", failure);\n- responses.set(i, bulkItemResponse);\n- // make sure the request gets never processed again\n- bulkRequest.requests.set(i, null);\n- }\n- } else {\n- concreteIndices.resolveIfAbsent(req);\n- req.routing(clusterState.metaData().resolveIndexRouting(req.parent(), req.routing(), req.index()));\n+ } else if (request instanceof UpdateRequest) {\n+ try {\n+ TransportUpdateAction.resolveAndValidateRouting(metaData, concreteIndex, (UpdateRequest)request);\n+ } catch(RoutingMissingException e) {\n+ BulkItemResponse.Failure failure = new BulkItemResponse.Failure(concreteIndex, documentRequest.type(), documentRequest.id(), e);\n+ BulkItemResponse bulkItemResponse = new BulkItemResponse(i, \"update\", failure);\n+ responses.set(i, bulkItemResponse);\n+ // make sure the request gets never processed again\n+ bulkRequest.requests.set(i, null);\n }\n+ } else {\n+ throw new AssertionError(\"request type not supported: [\" + request.getClass().getName() + \"]\");\n }\n }\n \n@@ -262,37 +281,16 @@ private 
void executeBulk(final BulkRequest bulkRequest, final long startTime, fi\n } else if (request instanceof DeleteRequest) {\n DeleteRequest deleteRequest = (DeleteRequest) request;\n String concreteIndex = concreteIndices.getConcreteIndex(deleteRequest.index());\n- MappingMetaData mappingMd = clusterState.metaData().index(concreteIndex).mappingOrDefault(deleteRequest.type());\n- if (mappingMd != null && mappingMd.routing().required() && deleteRequest.routing() == null) {\n- // if routing is required, and no routing on the delete request, we need to broadcast it....\n- GroupShardsIterator groupShards = clusterService.operationRouting().broadcastDeleteShards(clusterState, concreteIndex);\n- for (ShardIterator shardIt : groupShards) {\n- List<BulkItemRequest> list = requestsByShard.get(shardIt.shardId());\n- if (list == null) {\n- list = new ArrayList<>();\n- requestsByShard.put(shardIt.shardId(), list);\n- }\n- list.add(new BulkItemRequest(i, deleteRequest));\n- }\n- } else {\n- ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, deleteRequest.type(), deleteRequest.id(), deleteRequest.routing()).shardId();\n- List<BulkItemRequest> list = requestsByShard.get(shardId);\n- if (list == null) {\n- list = new ArrayList<>();\n- requestsByShard.put(shardId, list);\n- }\n- list.add(new BulkItemRequest(i, request));\n+ ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, deleteRequest.type(), deleteRequest.id(), deleteRequest.routing()).shardId();\n+ List<BulkItemRequest> list = requestsByShard.get(shardId);\n+ if (list == null) {\n+ list = new ArrayList<>();\n+ requestsByShard.put(shardId, list);\n }\n+ list.add(new BulkItemRequest(i, request));\n } else if (request instanceof UpdateRequest) {\n UpdateRequest updateRequest = (UpdateRequest) request;\n String concreteIndex = concreteIndices.getConcreteIndex(updateRequest.index());\n- MappingMetaData mappingMd = clusterState.metaData().index(concreteIndex).mappingOrDefault(updateRequest.type());\n- if (mappingMd != null && mappingMd.routing().required() && updateRequest.routing() == null) {\n- BulkItemResponse.Failure failure = new BulkItemResponse.Failure(updateRequest.index(), updateRequest.type(),\n- updateRequest.id(), new IllegalArgumentException(\"routing is required for this item\"));\n- responses.set(i, new BulkItemResponse(i, updateRequest.type(), failure));\n- continue;\n- }\n ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, updateRequest.type(), updateRequest.id(), updateRequest.routing()).shardId();\n List<BulkItemRequest> list = requestsByShard.get(shardId);\n if (list == null) {", "filename": "core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java", "status": "modified" }, { "diff": "@@ -96,23 +96,27 @@ public void onFailure(Throwable e) {\n \n @Override\n protected void resolveRequest(final MetaData metaData, String concreteIndex, DeleteRequest request) {\n+ resolveAndValidateRouting(metaData, concreteIndex, request);\n+ ShardId shardId = clusterService.operationRouting().shardId(clusterService.state(), concreteIndex, request.id(), request.routing());\n+ request.setShardId(shardId);\n+ }\n+\n+ public static void resolveAndValidateRouting(final MetaData metaData, String concreteIndex, DeleteRequest request) {\n request.routing(metaData.resolveIndexRouting(request.parent(), request.routing(), request.index()));\n if (metaData.hasIndex(concreteIndex)) {\n- // check if routing is required, if so, do a 
broadcast delete\n+ // check if routing is required, if so, throw error if routing wasn't specified\n MappingMetaData mappingMd = metaData.index(concreteIndex).mappingOrDefault(request.type());\n if (mappingMd != null && mappingMd.routing().required()) {\n if (request.routing() == null) {\n if (request.versionType() != VersionType.INTERNAL) {\n // TODO: implement this feature\n throw new IllegalArgumentException(\"routing value is required for deleting documents of type [\" + request.type()\n- + \"] while using version_type [\" + request.versionType() + \"]\");\n+ + \"] while using version_type [\" + request.versionType() + \"]\");\n }\n throw new RoutingMissingException(concreteIndex, request.type(), request.id());\n }\n }\n }\n- ShardId shardId = clusterService.operationRouting().shardId(clusterService.state(), concreteIndex, request.id(), request.routing());\n- request.setShardId(shardId);\n }\n \n private void innerExecute(Task task, final DeleteRequest request, final ActionListener<DeleteResponse> listener) {", "filename": "core/src/main/java/org/elasticsearch/action/delete/TransportDeleteAction.java", "status": "modified" }, { "diff": "@@ -35,10 +35,8 @@\n import org.elasticsearch.cluster.routing.ShardIterator;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.common.Nullable;\n-import org.elasticsearch.common.logging.LoggerMessageFormat;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.node.NodeClosedException;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.BaseTransportResponseHandler;\n@@ -91,11 +89,11 @@ protected ClusterBlockException checkGlobalBlock(ClusterState state) {\n protected ClusterBlockException checkRequestBlock(ClusterState state, Request request) {\n return state.blocks().indexBlockedException(ClusterBlockLevel.WRITE, request.concreteIndex());\n }\n+\n /**\n- * Resolves the request. If the resolve means a different execution, then return false\n- * here to indicate not to continue and execute this request.\n+ * Resolves the request. 
Throws an exception if the request cannot be resolved.\n */\n- protected abstract boolean resolveRequest(ClusterState state, Request request, ActionListener<Response> listener);\n+ protected abstract void resolveRequest(ClusterState state, Request request);\n \n protected boolean retryOnFailure(Throwable e) {\n return false;\n@@ -141,11 +139,7 @@ protected void doStart() {\n }\n }\n request.concreteIndex(indexNameExpressionResolver.concreteSingleIndex(observer.observedState(), request));\n- // check if we need to execute, and if not, return\n- if (!resolveRequest(observer.observedState(), request, listener)) {\n- listener.onFailure(new IllegalStateException(LoggerMessageFormat.format(\"[{}][{}] request {} could not be resolved\",request.index, request.shardId, actionName)));\n- return;\n- }\n+ resolveRequest(observer.observedState(), request);\n blockException = checkRequestBlock(observer.observedState(), request);\n if (blockException != null) {\n if (blockException.retryable()) {", "filename": "core/src/main/java/org/elasticsearch/action/support/single/instance/TransportInstanceSingleOperationAction.java", "status": "modified" }, { "diff": "@@ -38,6 +38,7 @@\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.routing.PlainShardIterator;\n import org.elasticsearch.cluster.routing.ShardIterator;\n import org.elasticsearch.cluster.routing.ShardRouting;\n@@ -99,13 +100,16 @@ protected boolean retryOnFailure(Throwable e) {\n }\n \n @Override\n- protected boolean resolveRequest(ClusterState state, UpdateRequest request, ActionListener<UpdateResponse> listener) {\n- request.routing((state.metaData().resolveIndexRouting(request.parent(), request.routing(), request.index())));\n+ protected void resolveRequest(ClusterState state, UpdateRequest request) {\n+ resolveAndValidateRouting(state.metaData(), request.concreteIndex(), request);\n+ }\n+\n+ public static void resolveAndValidateRouting(MetaData metaData, String concreteIndex, UpdateRequest request) {\n+ request.routing((metaData.resolveIndexRouting(request.parent(), request.routing(), request.index())));\n // Fail fast on the node that received the request, rather than failing when translating on the index or delete request.\n- if (request.routing() == null && state.getMetaData().routingRequired(request.concreteIndex(), request.type())) {\n- throw new RoutingMissingException(request.concreteIndex(), request.type(), request.id());\n+ if (request.routing() == null && metaData.routingRequired(concreteIndex, request.type())) {\n+ throw new RoutingMissingException(concreteIndex, request.type(), request.id());\n }\n- return true;\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/update/TransportUpdateAction.java", "status": "modified" }, { "diff": "@@ -29,7 +29,6 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.math.MathUtils;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexNotFoundException;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.ShardNotFoundException;\n@@ -67,10 +66,6 @@ public ShardIterator getShards(ClusterState clusterState, String index, int shar\n return preferenceActiveShardIterator(indexShard, clusterState.nodes().localNodeId(), 
clusterState.nodes(), preference);\n }\n \n- public GroupShardsIterator broadcastDeleteShards(ClusterState clusterState, String index) {\n- return indexRoutingTable(clusterState, index).groupByShardsIt();\n- }\n-\n public int searchShardsCount(ClusterState clusterState, String[] concreteIndices, @Nullable Map<String, Set<String>> routing) {\n final Set<IndexShardRoutingTable> shards = computeTargetedShards(clusterState, concreteIndices, routing);\n return shards.size();", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/OperationRouting.java", "status": "modified" }, { "diff": "@@ -108,8 +108,7 @@ protected Response newResponse() {\n }\n \n @Override\n- protected boolean resolveRequest(ClusterState state, Request request, ActionListener<Response> listener) {\n- return true;\n+ protected void resolveRequest(ClusterState state, Request request) {\n }\n \n @Override\n@@ -230,7 +229,7 @@ public void testSuccessAfterRetryWithClusterStateUpdate() throws Exception {\n listener.get();\n }\n \n- public void testSuccessAfterRetryWithExcpetionFromTransport() throws Exception {\n+ public void testSuccessAfterRetryWithExceptionFromTransport() throws Exception {\n Request request = new Request().index(\"test\");\n request.shardId = 0;\n PlainActionFuture<Response> listener = new PlainActionFuture<>();\n@@ -290,13 +289,13 @@ public void testUnresolvableRequestDoesNotHang() throws InterruptedException, Ex\n Settings.EMPTY,\n \"indices:admin/test_unresolvable\",\n transportService,\n- new ActionFilters(new HashSet<ActionFilter>()),\n+ new ActionFilters(new HashSet<>()),\n new MyResolver(),\n Request::new\n ) {\n @Override\n- protected boolean resolveRequest(ClusterState state, Request request, ActionListener<Response> listener) {\n- return false;\n+ protected void resolveRequest(ClusterState state, Request request) {\n+ throw new IllegalStateException(\"request cannot be resolved\");\n }\n };\n Request request = new Request().index(\"test\");", "filename": "core/src/test/java/org/elasticsearch/action/support/single/instance/TransportInstanceSingleOperationActionTests.java", "status": "modified" }, { "diff": "@@ -20,24 +20,26 @@\n package org.elasticsearch.routing;\n \n import org.elasticsearch.ElasticsearchException;\n-import org.elasticsearch.Version;\n import org.elasticsearch.action.RoutingMissingException;\n import org.elasticsearch.action.admin.indices.alias.Alias;\n+import org.elasticsearch.action.bulk.BulkItemResponse;\n+import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.explain.ExplainResponse;\n+import org.elasticsearch.action.get.GetResponse;\n import org.elasticsearch.action.get.MultiGetRequest;\n import org.elasticsearch.action.get.MultiGetResponse;\n import org.elasticsearch.action.termvectors.MultiTermVectorsResponse;\n import org.elasticsearch.action.termvectors.TermVectorsRequest;\n import org.elasticsearch.action.termvectors.TermVectorsResponse;\n+import org.elasticsearch.action.update.UpdateRequest;\n import org.elasticsearch.action.update.UpdateResponse;\n import org.elasticsearch.client.Requests;\n-import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.xcontent.XContentFactory;\n-import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.test.ESIntegTestCase;\n \n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import 
static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.nullValue;\n@@ -156,8 +158,7 @@ public void testSimpleSearchRouting() {\n }\n }\n \n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/16645\")\n- public void testRequiredRoutingMapping() throws Exception {\n+ public void testRequiredRoutingCrudApis() throws Exception {\n client().admin().indices().prepareCreate(\"test\").addAlias(new Alias(\"alias\"))\n .addMapping(\"type1\", XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"_routing\").field(\"required\", true).endObject().endObject().endObject())\n .execute().actionGet();\n@@ -199,13 +200,31 @@ public void testRequiredRoutingMapping() throws Exception {\n assertThat(client().prepareGet(indexOrAlias(), \"type1\", \"1\").setRouting(\"0\").execute().actionGet().isExists(), equalTo(true));\n }\n \n- logger.info(\"--> indexing with id [1], and routing [0]\");\n- client().prepareIndex(indexOrAlias(), \"type1\", \"1\").setRouting(\"0\").setSource(\"field\", \"value1\").setRefresh(true).execute().actionGet();\n- logger.info(\"--> verifying get with no routing, should not find anything\");\n+ try {\n+ client().prepareUpdate(indexOrAlias(), \"type1\", \"1\").setDoc(\"field\", \"value2\").execute().actionGet();\n+ fail(\"update with missing routing when routing is required should fail\");\n+ } catch(ElasticsearchException e) {\n+ assertThat(e.unwrapCause(), instanceOf(RoutingMissingException.class));\n+ }\n \n- logger.info(\"--> bulk deleting with no routing, should broadcast the delete since _routing is required\");\n- client().prepareBulk().add(Requests.deleteRequest(indexOrAlias()).type(\"type1\").id(\"1\")).execute().actionGet();\n+ client().prepareUpdate(indexOrAlias(), \"type1\", \"1\").setRouting(\"0\").setDoc(\"field\", \"value2\").execute().actionGet();\n client().admin().indices().prepareRefresh().execute().actionGet();\n+\n+ for (int i = 0; i < 5; i++) {\n+ try {\n+ client().prepareGet(indexOrAlias(), \"type1\", \"1\").execute().actionGet().isExists();\n+ fail();\n+ } catch (RoutingMissingException e) {\n+ assertThat(e.status(), equalTo(RestStatus.BAD_REQUEST));\n+ assertThat(e.getMessage(), equalTo(\"routing is required for [test]/[type1]/[1]\"));\n+ }\n+ GetResponse getResponse = client().prepareGet(indexOrAlias(), \"type1\", \"1\").setRouting(\"0\").execute().actionGet();\n+ assertThat(getResponse.isExists(), equalTo(true));\n+ assertThat(getResponse.getSourceAsMap().get(\"field\"), equalTo(\"value2\"));\n+ }\n+\n+ client().prepareDelete(indexOrAlias(), \"type1\", \"1\").setRouting(\"0\").setRefresh(true).execute().actionGet();\n+\n for (int i = 0; i < 5; i++) {\n try {\n client().prepareGet(indexOrAlias(), \"type1\", \"1\").execute().actionGet().isExists();\n@@ -227,28 +246,72 @@ public void testRequiredRoutingBulk() throws Exception {\n .execute().actionGet();\n ensureGreen();\n \n- logger.info(\"--> indexing with id [1], and routing [0]\");\n- client().prepareBulk().add(\n- client().prepareIndex(indexOrAlias(), \"type1\", \"1\").setRouting(\"0\").setSource(\"field\", \"value1\")).execute().actionGet();\n- client().admin().indices().prepareRefresh().execute().actionGet();\n+ {\n+ BulkResponse bulkResponse = client().prepareBulk().add(Requests.indexRequest(indexOrAlias()).type(\"type1\").id(\"1\")\n+ .source(\"field\", \"value\")).execute().actionGet();\n+ assertThat(bulkResponse.getItems().length, equalTo(1));\n+ assertThat(bulkResponse.hasFailures(), equalTo(true));\n+\n+ for (BulkItemResponse 
bulkItemResponse : bulkResponse) {\n+ assertThat(bulkItemResponse.isFailed(), equalTo(true));\n+ assertThat(bulkItemResponse.getOpType(), equalTo(\"index\"));\n+ assertThat(bulkItemResponse.getFailure().getStatus(), equalTo(RestStatus.BAD_REQUEST));\n+ assertThat(bulkItemResponse.getFailure().getCause(), instanceOf(RoutingMissingException.class));\n+ assertThat(bulkItemResponse.getFailureMessage(), containsString(\"routing is required for [test]/[type1]/[1]\"));\n+ }\n+ }\n \n- logger.info(\"--> verifying get with no routing, should fail\");\n- for (int i = 0; i < 5; i++) {\n- try {\n- client().prepareGet(indexOrAlias(), \"type1\", \"1\").execute().actionGet().isExists();\n- fail();\n- } catch (RoutingMissingException e) {\n- assertThat(e.status(), equalTo(RestStatus.BAD_REQUEST));\n- assertThat(e.getMessage(), equalTo(\"routing is required for [test]/[type1]/[1]\"));\n+ {\n+ BulkResponse bulkResponse = client().prepareBulk().add(Requests.indexRequest(indexOrAlias()).type(\"type1\").id(\"1\").routing(\"0\")\n+ .source(\"field\", \"value\")).execute().actionGet();\n+ assertThat(bulkResponse.hasFailures(), equalTo(false));\n+ }\n+\n+ {\n+ BulkResponse bulkResponse = client().prepareBulk().add(new UpdateRequest(indexOrAlias(), \"type1\", \"1\").doc(\"field\", \"value2\"))\n+ .execute().actionGet();\n+ assertThat(bulkResponse.getItems().length, equalTo(1));\n+ assertThat(bulkResponse.hasFailures(), equalTo(true));\n+\n+ for (BulkItemResponse bulkItemResponse : bulkResponse) {\n+ assertThat(bulkItemResponse.isFailed(), equalTo(true));\n+ assertThat(bulkItemResponse.getOpType(), equalTo(\"update\"));\n+ assertThat(bulkItemResponse.getFailure().getStatus(), equalTo(RestStatus.BAD_REQUEST));\n+ assertThat(bulkItemResponse.getFailure().getCause(), instanceOf(RoutingMissingException.class));\n+ assertThat(bulkItemResponse.getFailureMessage(), containsString(\"routing is required for [test]/[type1]/[1]\"));\n }\n }\n- logger.info(\"--> verifying get with routing, should find\");\n- for (int i = 0; i < 5; i++) {\n- assertThat(client().prepareGet(indexOrAlias(), \"type1\", \"1\").setRouting(\"0\").execute().actionGet().isExists(), equalTo(true));\n+\n+ {\n+ BulkResponse bulkResponse = client().prepareBulk().add(new UpdateRequest(indexOrAlias(), \"type1\", \"1\").doc(\"field\", \"value2\")\n+ .routing(\"0\")).execute().actionGet();\n+ assertThat(bulkResponse.hasFailures(), equalTo(false));\n+ }\n+\n+ {\n+ BulkResponse bulkResponse = client().prepareBulk().add(Requests.deleteRequest(indexOrAlias()).type(\"type1\").id(\"1\"))\n+ .execute().actionGet();\n+ assertThat(bulkResponse.getItems().length, equalTo(1));\n+ assertThat(bulkResponse.hasFailures(), equalTo(true));\n+\n+ for (BulkItemResponse bulkItemResponse : bulkResponse) {\n+ assertThat(bulkItemResponse.isFailed(), equalTo(true));\n+ assertThat(bulkItemResponse.getOpType(), equalTo(\"delete\"));\n+ assertThat(bulkItemResponse.getFailure().getStatus(), equalTo(RestStatus.BAD_REQUEST));\n+ assertThat(bulkItemResponse.getFailure().getCause(), instanceOf(RoutingMissingException.class));\n+ assertThat(bulkItemResponse.getFailureMessage(), containsString(\"routing is required for [test]/[type1]/[1]\"));\n+ }\n+ }\n+\n+ {\n+ BulkResponse bulkResponse = client().prepareBulk().add(Requests.deleteRequest(indexOrAlias()).type(\"type1\").id(\"1\")\n+ .routing(\"0\")).execute().actionGet();\n+ assertThat(bulkResponse.getItems().length, equalTo(1));\n+ assertThat(bulkResponse.hasFailures(), equalTo(false));\n }\n }\n \n- public void 
testRequiredRoutingMapping_variousAPIs() throws Exception {\n+ public void testRequiredRoutingMappingVariousAPIs() throws Exception {\n client().admin().indices().prepareCreate(\"test\").addAlias(new Alias(\"alias\"))\n .addMapping(\"type1\", XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"_routing\").field(\"required\", true).endObject().endObject().endObject())\n .execute().actionGet();", "filename": "core/src/test/java/org/elasticsearch/routing/SimpleRoutingIT.java", "status": "modified" } ] }
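The practical upshot of the change above, condensed from the updated SimpleRoutingIT test: bulk deletes against an index whose mapping declares `_routing` as required no longer fall back to a broadcast delete; the item simply fails unless routing is supplied. A minimal usage sketch (assuming an already-constructed 2.x `Client` named `client`, and the `test`/`type1` index set up as in the test):

```
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.Requests;

class RequiredRoutingBulkDeleteSketch {
    // Without routing the delete item fails (RoutingMissingException, status 400);
    // with routing it is sent to exactly one shard and succeeds.
    static void deleteWithRequiredRouting(Client client) {
        BulkResponse withoutRouting = client.prepareBulk()
                .add(Requests.deleteRequest("test").type("type1").id("1"))
                .execute().actionGet();
        assert withoutRouting.hasFailures();

        BulkResponse withRouting = client.prepareBulk()
                .add(Requests.deleteRequest("test").type("type1").id("1").routing("0"))
                .execute().actionGet();
        assert withRouting.hasFailures() == false;
    }
}
```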
{ "body": "Hi Team,\n\nI have got the following error when using simple_query_string with multiple fields containing numeric field.\nVersion: 2.2.0\nOS: Windows\n\n**Sample data:**\n`curl -XPUT http://localhost:9200/blog/post/1 -d '{\"foo\":123, \"bar\":\"xyzzy\"}'`\n`curl -XPUT http://localhost:9200/blog/post/2 -d '{\"foo\":456, \"bar\":\"xyzzy\"}'`\n\n**Use simple_query_string with multiple fields**\n`curl -XGET http://localhost:9200/blog/post/_search?pretty=1 -d '{\"query\":{\"simple_query_string\":{\"query\":\"123\",\"fields\":[\"foo\",\"bar\"]}}}'`\n\n**Error**\n`{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"illegal_argument_exception\",\n \"reason\": \"Illegal shift value, must be 0..63; got shift=2147483647\"\n }\n ],\n \"type\": \"search_phase_execution_exception\",\n \"reason\": \"all shards failed\",\n \"phase\": \"query\",\n \"grouped\": true,\n \"failed_shards\": [\n {\n \"shard\": 0,\n \"index\": \"blog\",\n \"node\": \"6jVhZCw3Tau3cP5VtEg8Tw\",\n \"reason\": {\n \"type\": \"illegal_argument_exception\",\n \"reason\": \"Illegal shift value, must be 0..63; got shift=2147483647\"\n }\n }\n ]\n },\n \"status\": 400\n}`\n\n**multi_match query works with the same fields**\n`curl -XGET http://localhost:9200/blog/post/_search?pretty=1 -d '{\"query\":{\"bool\":{\"must\":[{\"multi_match\":{\"query\":\"123\",\"type\":\"cross_fields\",\"fields\":[\"foo\",\"bar\"],\"operator\":\"and\"}}]}}}'` \n\nIssue similar to #15860\n", "comments": [ { "body": "I think what happens is the string value is handled by the hidden analyzer for long fields, which expects a Long. We should be using the field type for each field to call the `value` function, which in the case of `long` field type will call `Long.parseLong`. @dakrone thoughts?\n", "created_at": "2016-02-10T18:00:08Z" }, { "body": "@rjernst yeah, I think that is a good idea. I will look into fixing this.\n", "created_at": "2016-02-11T20:43:15Z" }, { "body": "Pushed a fix for this to the 2.2 branch that will be in 2.2.1, and it has been fixed in Lucene 5.5, which will be in the 2.3+ releases of ES\n", "created_at": "2016-02-18T23:27:43Z" } ], "number": 16577, "title": "simple_query_string gives java.lang.IllegalArgumentException: Illegal shift value, must be 0..63; got shift=2147483647" }
{ "body": "For Numeric types, if the query's text is passed to create a boolean\nquery, the 'Long' analyzer can return an exception similar to:\n\n```\nIllegalArgumentException: Illegal shift value, must be 0..63; got shift=2147483647\n```\n\nThis change looks up the `MappedFieldType` for the specified fields (if\navailable) and uses the `.termQuery` function to create the query from\nthe string, instead of analyzing it by creating a new boolean query by\ndefault.\n\nResolves #16577\n", "number": 16643, "review_comments": [], "title": "Use MappedFieldType.termQuery to generate simple_query_string queries" }
{ "commits": [ { "message": "Use MappedFieldType.termQuery to generate simple_query_string queries\n\nFor Numeric types, if the query's text is passed to create a boolean\nquery, the 'Long' analyzer can return an exception similar to:\n\n```\nIllegalArgumentException: Illegal shift value, must be 0..63; got shift=2147483647\n```\n\nThis change looks up the `MappedFieldType` for the specified fields (if\navailable) and uses the `.termQuery` function to create the query from\nthe string, instead of analyzing it by creating a new boolean query by\ndefault.\n\nResolves #16577" } ], "files": [ { "diff": "@@ -458,6 +458,17 @@ public MappedFieldType fullName(String fullName) {\n return fieldTypes.get(fullName);\n }\n \n+ /**\n+ * Returns the {@link MappedFieldType} for the give fullName.\n+ *\n+ * If multiple types have fields with the same full name, the first is returned.\n+ *\n+ * This is an alias to make {@code fullName} easier to find\n+ */\n+ public MappedFieldType getFieldForName(String fullName) {\n+ return this.fullName(fullName);\n+ }\n+\n /**\n * Returns all the fields that match the given pattern. If the pattern is prefixed with a type\n * then the fields will be returned with a type prefix.", "filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperService.java", "status": "modified" }, { "diff": "@@ -30,6 +30,7 @@\n import org.apache.lucene.search.PrefixQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n \n import java.io.IOException;\n import java.util.Locale;\n@@ -43,11 +44,14 @@\n public class SimpleQueryParser extends org.apache.lucene.queryparser.simple.SimpleQueryParser {\n \n private final Settings settings;\n+ private final Map<String, MappedFieldType> fieldToType;\n \n /** Creates a new parser with custom flags used to enable/disable certain features. 
*/\n- public SimpleQueryParser(Analyzer analyzer, Map<String, Float> weights, int flags, Settings settings) {\n+ public SimpleQueryParser(Analyzer analyzer, Map<String, Float> weights, Map<String, MappedFieldType> fieldToType,\n+ int flags, Settings settings) {\n super(analyzer, weights, flags);\n this.settings = settings;\n+ this.fieldToType = fieldToType;\n }\n \n /**\n@@ -66,7 +70,15 @@ public Query newDefaultQuery(String text) {\n bq.setDisableCoord(true);\n for (Map.Entry<String,Float> entry : weights.entrySet()) {\n try {\n- Query q = createBooleanQuery(entry.getKey(), text, super.getDefaultOperator());\n+ Query q;\n+ MappedFieldType mpt = fieldToType.get(entry.getKey());\n+ if (mpt != null && mpt.isNumeric()) {\n+ // If the field is numeric, it needs to use a different query type instead of trying to analyze a 'string' as a 'long\n+ q = mpt.termQuery(text, null);\n+ } else {\n+ q = createBooleanQuery(entry.getKey(), text, super.getDefaultOperator());\n+ }\n+\n if (q != null) {\n bq.add(wrapWithBoost(q, entry.getValue()), BooleanClause.Occur.SHOULD);\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/SimpleQueryParser.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.index.query;\n \n import org.apache.lucene.analysis.Analyzer;\n+import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;\n import org.apache.lucene.search.BooleanQuery;\n import org.apache.lucene.search.Query;\n import org.elasticsearch.common.Strings;\n@@ -29,6 +30,7 @@\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.query.SimpleQueryParser.Settings;\n \n import java.io.IOException;\n@@ -269,19 +271,28 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n }\n \n // Use standard analyzer by default if none specified\n- Analyzer luceneAnalyzer;\n+ Analyzer defaultAnalyzer;\n if (analyzer == null) {\n- luceneAnalyzer = context.getMapperService().searchAnalyzer();\n+ defaultAnalyzer = context.getMapperService().searchAnalyzer();\n } else {\n- luceneAnalyzer = context.getAnalysisService().analyzer(analyzer);\n- if (luceneAnalyzer == null) {\n+ defaultAnalyzer = context.getAnalysisService().analyzer(analyzer);\n+ if (defaultAnalyzer == null) {\n throw new QueryShardException(context, \"[\" + SimpleQueryStringBuilder.NAME + \"] analyzer [\" + analyzer\n + \"] not found\");\n }\n+ }\n \n+ // Fetch each mapped type for the fields specified\n+ Map<String, MappedFieldType> fieldToType = new HashMap<>();\n+ MapperService ms = context.getMapperService();\n+ for (String fieldName : resolvedFieldsAndWeights.keySet()) {\n+ MappedFieldType mapping = ms.getFieldForName(fieldName);\n+ if (mapping != null) {\n+ fieldToType.put(fieldName, mapping);\n+ }\n }\n \n- SimpleQueryParser sqp = new SimpleQueryParser(luceneAnalyzer, resolvedFieldsAndWeights, flags, settings);\n+ SimpleQueryParser sqp = new SimpleQueryParser(defaultAnalyzer, resolvedFieldsAndWeights, fieldToType, flags, settings);\n sqp.setDefaultOperator(defaultOperator.toBooleanClauseOccur());\n \n Query query = sqp.parse(queryText);\n@@ -394,4 +405,3 @@ protected boolean doEquals(SimpleQueryStringBuilder other) {\n && Objects.equals(settings, other.settings) && (flags == other.flags);\n }\n }\n-", "filename": 
"core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringBuilder.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.query.BoolQueryBuilder;\n import org.elasticsearch.index.query.Operator;\n@@ -30,7 +31,9 @@\n import org.elasticsearch.test.ESIntegTestCase;\n \n import java.io.IOException;\n+import java.util.HashMap;\n import java.util.Locale;\n+import java.util.Map;\n import java.util.concurrent.ExecutionException;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n@@ -100,6 +103,22 @@ public void testSimpleQueryString() throws ExecutionException, InterruptedExcept\n assertSearchHits(searchResponse, \"5\", \"6\");\n }\n \n+ // See: https://github.com/elastic/elasticsearch/issues/16577\n+ public void testSimpleQueryStringUsesFieldAnalyzer() throws Exception {\n+ client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"foo\", 123, \"bar\", \"abc\").get();\n+ client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"foo\", 234, \"bar\", \"bcd\").get();\n+\n+ refresh();\n+\n+ Map<String, Float> fields = new HashMap<>();\n+ fields.put(\"foo\", 1.0f);\n+ fields.put(\"bar\", 1.0f);\n+ SearchResponse searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryStringQuery(\"123\").fields(fields)).get();\n+ assertHitCount(searchResponse, 1L);\n+ assertSearchHits(searchResponse, \"1\");\n+ }\n+\n public void testSimpleQueryStringMinimumShouldMatch() throws Exception {\n createIndex(\"test\");\n ensureGreen(\"test\");", "filename": "core/src/test/java/org/elasticsearch/search/query/SimpleQueryStringIT.java", "status": "modified" } ] }
{ "body": "It seems that the order of the keys in the search body matter when executing a function_score query. Consider these 2 cURL requests:\n\n```\n curl -d '{\n \"query\":{\n \"function_score\":{\n \"random_score\":{},\n \"query\":{\n \"bool\":{\n \"must\":{\"match\":{\"location\":\"London\"}}},\n \"filter\":{\"bool\":{\n \"must\":[\n {\"match\":{\"location\":\"London\"}},\n {\"term\":{\"state\":\"approved\"}},\n {\"range\":{\"num_reviews\":{\"gte\":3}}}\n ]\n }\n }\n }\n }\n },\n \"size\": 1\n }' \"http://localhost:9200/agent_profiles_development/_search?pretty\"\n\nIn this case, size is ignored and the default of 10 is used. If we move the `size` key as so:\n```\n\n```\ncurl -d '{\n \"size\": 1,\n \"query\":{\n \"function_score\":{\n \"random_score\":{},\n \"query\":{\n \"bool\":{\n \"must\":{\"match\":{\"location\":\"London\"}}},\n \"filter\":{\"bool\":{\n \"must\":[\n {\"match\":{\"location\":\"London\"}},\n {\"term\":{\"state\":\"approved\"}},\n {\"range\":{\"num_reviews\":{\"gte\":3}}}\n ]\n }\n }\n }\n }\n }\n }' \"http://localhost:9200/agent_profiles_development/_search?pretty\"\n```\n\nThen it works as expected.\n", "comments": [ { "body": "This works fine for me (with a simple match all query). Can you give a recreation (putting documents and running the two queries)? Do you see the same problem when using a simple query like `match_all`?\n", "created_at": "2016-02-10T17:45:43Z" }, { "body": "Just tested with match_all, works as expected. Also works when just doing the `bool` query, without the `function_score`. Seems to be the combination of the 2.\n\nI'll build out a test case and post it.\n", "created_at": "2016-02-10T17:58:40Z" }, { "body": "I was able to reproduce this. The query is malformed, the first must clause ends up closing the whole bool query (one too many curly brackets). Our parsers should throw an error though instead of going ahead. As a result, the second bool query replaces the previous query as part of the function score, and size gets ignored. I will work on making parsing stricter so that an error is thrown instead.\n", "created_at": "2016-02-11T14:04:34Z" }, { "body": "Ha, so I was doing something wrong in there :). \n\nThanks for checking that out!\n", "created_at": "2016-02-11T15:29:06Z" } ], "number": 16583, "title": "'size' parameter ignored when executing a function_score query" }
{ "body": "Function Score Query now checks the type of token that we are parsing, which makes parsing stricter and allows to throw useful errors in case the json is malformed. It also makes code more readable as in what gets parsed when.\n\nCloses #16583\n", "number": 16617, "review_comments": [ { "body": "@javanna would it make sense here and for all JSON parameters below to use a `ParseField` object instead? For example, here you could define a `QUERY_FIELD` of type `ParseField` and make the conditional here `parseContext.parseFieldMatcher.match(currentFieldName, QUERY_FIELD)`\n\nThis would also allow you to specify camel case variants like you are checking for some of the fields below.\n", "created_at": "2016-02-12T15:22:42Z" }, { "body": "yes it would but I didn't want to pollute this PR, I think the change is already hard enough to read :)\n", "created_at": "2016-02-12T15:27:45Z" }, { "body": "this could be `assertNull`\n", "created_at": "2016-02-12T15:30:26Z" } ], "title": "Function Score Query: make parsing stricter" }
{ "commits": [ { "message": "Function Score Query: make parsing stricter\n\nFunction Score Query now checks the type of token that we are parsing, which makes parsing stricter and allows to throw useful errors in case the json is malformed. It also makes code more readable as in what gets parsed when.\n\nCloses #16583" }, { "message": "[TEST] make SearchSourceBuilderTests pickier, check that the search source has been read completely\n\nSame check is already performed in AbstractQueryTestCase, makes sense to have it here too." }, { "message": "use assertNull rather than assertTrue(object == null)" }, { "message": "move function score to ParseField" } ], "files": [ { "diff": "@@ -197,22 +197,22 @@ public float maxBoost() {\n protected void doXContent(XContentBuilder builder, Params params) throws IOException {\n builder.startObject(NAME);\n if (query != null) {\n- builder.field(\"query\");\n+ builder.field(FunctionScoreQueryParser.QUERY_FIELD.getPreferredName());\n query.toXContent(builder, params);\n }\n- builder.startArray(\"functions\");\n+ builder.startArray(FunctionScoreQueryParser.FUNCTIONS_FIELD.getPreferredName());\n for (FilterFunctionBuilder filterFunctionBuilder : filterFunctionBuilders) {\n filterFunctionBuilder.toXContent(builder, params);\n }\n builder.endArray();\n \n- builder.field(\"score_mode\", scoreMode.name().toLowerCase(Locale.ROOT));\n+ builder.field(FunctionScoreQueryParser.SCORE_MODE_FIELD.getPreferredName(), scoreMode.name().toLowerCase(Locale.ROOT));\n if (boostMode != null) {\n- builder.field(\"boost_mode\", boostMode.name().toLowerCase(Locale.ROOT));\n+ builder.field(FunctionScoreQueryParser.BOOST_MODE_FIELD.getPreferredName(), boostMode.name().toLowerCase(Locale.ROOT));\n }\n- builder.field(\"max_boost\", maxBoost);\n+ builder.field(FunctionScoreQueryParser.MAX_BOOST_FIELD.getPreferredName(), maxBoost);\n if (minScore != null) {\n- builder.field(\"min_score\", minScore);\n+ builder.field(FunctionScoreQueryParser.MIN_SCORE_FIELD.getPreferredName(), minScore);\n }\n printBoostAndQueryName(builder);\n builder.endObject();\n@@ -358,7 +358,7 @@ public ScoreFunctionBuilder<?> getScoreFunction() {\n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n builder.startObject();\n- builder.field(\"filter\");\n+ builder.field(FunctionScoreQueryParser.FILTER_FIELD.getPreferredName());\n filter.toXContent(builder, params);\n scoreFunction.toXContent(builder, params);\n builder.endObject();", "filename": "core/src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryBuilder.java", "status": "modified" }, { "diff": "@@ -19,10 +19,6 @@\n \n package org.elasticsearch.index.query.functionscore;\n \n-import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.List;\n-\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.Strings;\n@@ -39,6 +35,10 @@\n import org.elasticsearch.index.query.QueryParser;\n import org.elasticsearch.index.query.functionscore.weight.WeightBuilder;\n \n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.List;\n+\n /**\n * Parser for function_score query\n */\n@@ -50,6 +50,13 @@ public class FunctionScoreQueryParser implements QueryParser<FunctionScoreQueryB\n static final String MISPLACED_FUNCTION_MESSAGE_PREFIX = \"you can either define [functions] array or a single function, not both. 
\";\n \n public static final ParseField WEIGHT_FIELD = new ParseField(\"weight\");\n+ public static final ParseField QUERY_FIELD = new ParseField(\"query\");\n+ public static final ParseField FILTER_FIELD = new ParseField(\"filter\");\n+ public static final ParseField FUNCTIONS_FIELD = new ParseField(\"functions\");\n+ public static final ParseField SCORE_MODE_FIELD = new ParseField(\"score_mode\");\n+ public static final ParseField BOOST_MODE_FIELD = new ParseField(\"boost_mode\");\n+ public static final ParseField MAX_BOOST_FIELD = new ParseField(\"max_boost\");\n+ public static final ParseField MIN_SCORE_FIELD = new ParseField(\"min_score\");\n \n private final ScoreFunctionParserMapper functionParserMapper;\n \n@@ -86,48 +93,69 @@ public FunctionScoreQueryBuilder fromXContent(QueryParseContext parseContext) th\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n- } else if (\"query\".equals(currentFieldName)) {\n- query = parseContext.parseInnerQueryBuilder();\n- } else if (\"score_mode\".equals(currentFieldName) || \"scoreMode\".equals(currentFieldName)) {\n- scoreMode = FiltersFunctionScoreQuery.ScoreMode.fromString(parser.text());\n- } else if (\"boost_mode\".equals(currentFieldName) || \"boostMode\".equals(currentFieldName)) {\n- combineFunction = CombineFunction.fromString(parser.text());\n- } else if (\"max_boost\".equals(currentFieldName) || \"maxBoost\".equals(currentFieldName)) {\n- maxBoost = parser.floatValue();\n- } else if (\"boost\".equals(currentFieldName)) {\n- boost = parser.floatValue();\n- } else if (\"_name\".equals(currentFieldName)) {\n- queryName = parser.text();\n- } else if (\"min_score\".equals(currentFieldName) || \"minScore\".equals(currentFieldName)) {\n- minScore = parser.floatValue();\n- } else if (\"functions\".equals(currentFieldName)) {\n- if (singleFunctionFound) {\n- String errorString = \"already found [\" + singleFunctionName + \"], now encountering [functions].\";\n- handleMisplacedFunctionsDeclaration(parser.getTokenLocation(), errorString);\n- }\n- functionArrayFound = true;\n- currentFieldName = parseFiltersAndFunctions(parseContext, parser, filterFunctionBuilders);\n- } else {\n- if (singleFunctionFound) {\n- throw new ParsingException(parser.getTokenLocation(), \"failed to parse [{}] query. already found function [{}], now encountering [{}]. use [functions] array if you want to define several functions.\", FunctionScoreQueryBuilder.NAME, singleFunctionName, currentFieldName);\n+ } else if (token == XContentParser.Token.START_OBJECT) {\n+ if (parseContext.parseFieldMatcher().match(currentFieldName, QUERY_FIELD)) {\n+ if (query != null) {\n+ throw new ParsingException(parser.getTokenLocation(), \"failed to parse [{}] query. [query] is already defined.\", FunctionScoreQueryBuilder.NAME);\n+ }\n+ query = parseContext.parseInnerQueryBuilder();\n+ } else {\n+ if (singleFunctionFound) {\n+ throw new ParsingException(parser.getTokenLocation(), \"failed to parse [{}] query. already found function [{}], now encountering [{}]. 
use [functions] array if you want to define several functions.\", FunctionScoreQueryBuilder.NAME, singleFunctionName, currentFieldName);\n+ }\n+ if (functionArrayFound) {\n+ String errorString = \"already found [functions] array, now encountering [\" + currentFieldName + \"].\";\n+ handleMisplacedFunctionsDeclaration(parser.getTokenLocation(), errorString);\n+ }\n+ singleFunctionFound = true;\n+ singleFunctionName = currentFieldName;\n+\n+ // we try to parse a score function. If there is no score function for the current field name,\n+ // functionParserMapper.get() may throw an Exception.\n+ ScoreFunctionBuilder<?> scoreFunction = functionParserMapper.get(parser.getTokenLocation(), currentFieldName).fromXContent(parseContext, parser);\n+ filterFunctionBuilders.add(new FunctionScoreQueryBuilder.FilterFunctionBuilder(scoreFunction));\n }\n- if (functionArrayFound) {\n- String errorString = \"already found [functions] array, now encountering [\" + currentFieldName + \"].\";\n- handleMisplacedFunctionsDeclaration(parser.getTokenLocation(), errorString);\n+ } else if (token == XContentParser.Token.START_ARRAY) {\n+ if (parseContext.parseFieldMatcher().match(currentFieldName, FUNCTIONS_FIELD)) {\n+ if (singleFunctionFound) {\n+ String errorString = \"already found [\" + singleFunctionName + \"], now encountering [functions].\";\n+ handleMisplacedFunctionsDeclaration(parser.getTokenLocation(), errorString);\n+ }\n+ functionArrayFound = true;\n+ currentFieldName = parseFiltersAndFunctions(parseContext, parser, filterFunctionBuilders);\n+ } else {\n+ throw new ParsingException(parser.getTokenLocation(), \"failed to parse [{}] query. array [{}] is not supported\", FunctionScoreQueryBuilder.NAME, currentFieldName);\n }\n- singleFunctionFound = true;\n- singleFunctionName = currentFieldName;\n \n- ScoreFunctionBuilder<?> scoreFunction;\n- if (parseContext.parseFieldMatcher().match(currentFieldName, WEIGHT_FIELD)) {\n- scoreFunction = new WeightBuilder().setWeight(parser.floatValue());\n+ } else if (token.isValue()) {\n+ if (parseContext.parseFieldMatcher().match(currentFieldName, SCORE_MODE_FIELD)) {\n+ scoreMode = FiltersFunctionScoreQuery.ScoreMode.fromString(parser.text());\n+ } else if (parseContext.parseFieldMatcher().match(currentFieldName, BOOST_MODE_FIELD)) {\n+ combineFunction = CombineFunction.fromString(parser.text());\n+ } else if (parseContext.parseFieldMatcher().match(currentFieldName, MAX_BOOST_FIELD)) {\n+ maxBoost = parser.floatValue();\n+ } else if (parseContext.parseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) {\n+ boost = parser.floatValue();\n+ } else if (parseContext.parseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) {\n+ queryName = parser.text();\n+ } else if (parseContext.parseFieldMatcher().match(currentFieldName, MIN_SCORE_FIELD)) {\n+ minScore = parser.floatValue();\n } else {\n- // we try to parse a score function. If there is no score\n- // function for the current field name,\n- // functionParserMapper.get() will throw an Exception.\n- scoreFunction = functionParserMapper.get(parser.getTokenLocation(), currentFieldName).fromXContent(parseContext, parser);\n+ if (singleFunctionFound) {\n+ throw new ParsingException(parser.getTokenLocation(), \"failed to parse [{}] query. already found function [{}], now encountering [{}]. 
use [functions] array if you want to define several functions.\", FunctionScoreQueryBuilder.NAME, singleFunctionName, currentFieldName);\n+ }\n+ if (functionArrayFound) {\n+ String errorString = \"already found [functions] array, now encountering [\" + currentFieldName + \"].\";\n+ handleMisplacedFunctionsDeclaration(parser.getTokenLocation(), errorString);\n+ }\n+ if (parseContext.parseFieldMatcher().match(currentFieldName, WEIGHT_FIELD)) {\n+ filterFunctionBuilders.add(new FunctionScoreQueryBuilder.FilterFunctionBuilder(new WeightBuilder().setWeight(parser.floatValue())));\n+ singleFunctionFound = true;\n+ singleFunctionName = currentFieldName;\n+ } else {\n+ throw new ParsingException(parser.getTokenLocation(), \"failed to parse [{}] query. field [{}] is not supported\", FunctionScoreQueryBuilder.NAME, currentFieldName);\n+ }\n }\n- filterFunctionBuilders.add(new FunctionScoreQueryBuilder.FilterFunctionBuilder(scoreFunction));\n }\n }\n \n@@ -167,21 +195,23 @@ private String parseFiltersAndFunctions(QueryParseContext parseContext, XContent\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n- } else if (parseContext.parseFieldMatcher().match(currentFieldName, WEIGHT_FIELD)) {\n- functionWeight = parser.floatValue();\n- } else {\n- if (\"filter\".equals(currentFieldName)) {\n+ } else if (token == XContentParser.Token.START_OBJECT) {\n+ if (parseContext.parseFieldMatcher().match(currentFieldName, FILTER_FIELD)) {\n filter = parseContext.parseInnerQueryBuilder();\n } else {\n if (scoreFunction != null) {\n throw new ParsingException(parser.getTokenLocation(), \"failed to parse function_score functions. already found [{}], now encountering [{}].\", scoreFunction.getName(), currentFieldName);\n }\n- // do not need to check null here,\n- // functionParserMapper throws exception if parser\n- // non-existent\n+ // do not need to check null here, functionParserMapper does it already\n ScoreFunctionParser functionParser = functionParserMapper.get(parser.getTokenLocation(), currentFieldName);\n scoreFunction = functionParser.fromXContent(parseContext, parser);\n }\n+ } else if (token.isValue()) {\n+ if (parseContext.parseFieldMatcher().match(currentFieldName, WEIGHT_FIELD)) {\n+ functionWeight = parser.floatValue();\n+ } else {\n+ throw new ParsingException(parser.getTokenLocation(), \"failed to parse [{}] query. 
field [{}] is not supported\", FunctionScoreQueryBuilder.NAME, currentFieldName);\n+ }\n }\n }\n if (functionWeight != null) {", "filename": "core/src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryParser.java", "status": "modified" }, { "diff": "@@ -501,7 +501,7 @@ private QueryBuilder<?> parseQuery(XContentParser parser, ParseFieldMatcher matc\n context.reset(parser);\n context.parseFieldMatcher(matcher);\n QueryBuilder<?> parseInnerQueryBuilder = context.parseInnerQueryBuilder();\n- assertTrue(parser.nextToken() == null);\n+ assertNull(parser.nextToken());\n return parseInnerQueryBuilder;\n }\n ", "filename": "core/src/test/java/org/elasticsearch/index/query/AbstractQueryTestCase.java", "status": "modified" }, { "diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.index.query.functionscore;\n \n import com.fasterxml.jackson.core.JsonParseException;\n-\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.MatchAllDocsQuery;\n import org.apache.lucene.search.Query;\n@@ -59,7 +58,6 @@\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.functionScoreQuery;\n import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n-import static org.elasticsearch.test.StreamsUtils.copyToStringFromClasspath;\n import static org.hamcrest.Matchers.closeTo;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.either;\n@@ -72,7 +70,7 @@ public class FunctionScoreQueryBuilderTests extends AbstractQueryTestCase<Functi\n @Override\n protected FunctionScoreQueryBuilder doCreateTestQueryBuilder() {\n FunctionScoreQueryBuilder functionScoreQueryBuilder;\n- switch(randomIntBetween(0, 3)) {\n+ switch (randomIntBetween(0, 3)) {\n case 0:\n int numFunctions = randomIntBetween(0, 3);\n FunctionScoreQueryBuilder.FilterFunctionBuilder[] filterFunctionBuilders = new FunctionScoreQueryBuilder.FilterFunctionBuilder[numFunctions];\n@@ -124,7 +122,7 @@ private static ScoreFunctionBuilder randomScoreFunction() {\n DecayFunctionBuilder decayFunctionBuilder;\n Float offset = randomBoolean() ? 
null : randomFloat();\n double decay = randomDouble();\n- switch(randomIntBetween(0, 2)) {\n+ switch (randomIntBetween(0, 2)) {\n case 0:\n decayFunctionBuilder = new GaussDecayFunctionBuilder(INT_FIELD_NAME, randomFloat(), randomFloat(), offset, decay);\n break;\n@@ -164,7 +162,7 @@ private static ScoreFunctionBuilder randomScoreFunction() {\n RandomScoreFunctionBuilder randomScoreFunctionBuilder = new RandomScoreFunctionBuilder();\n if (randomBoolean()) {\n randomScoreFunctionBuilder.seed(randomLong());\n- } else if(randomBoolean()) {\n+ } else if (randomBoolean()) {\n randomScoreFunctionBuilder.seed(randomInt());\n } else {\n randomScoreFunctionBuilder.seed(randomAsciiOfLengthBetween(1, 10));\n@@ -198,140 +196,140 @@ public void testToQuery() throws IOException {\n \n public void testIllegalArguments() {\n try {\n- new FunctionScoreQueryBuilder((QueryBuilder<?>)null);\n+ new FunctionScoreQueryBuilder((QueryBuilder<?>) null);\n fail(\"must not be null\");\n- } catch(IllegalArgumentException e) {\n+ } catch (IllegalArgumentException e) {\n //all good\n }\n \n try {\n- new FunctionScoreQueryBuilder((ScoreFunctionBuilder)null);\n+ new FunctionScoreQueryBuilder((ScoreFunctionBuilder) null);\n fail(\"must not be null\");\n- } catch(IllegalArgumentException e) {\n+ } catch (IllegalArgumentException e) {\n //all good\n }\n \n try {\n- new FunctionScoreQueryBuilder((FunctionScoreQueryBuilder.FilterFunctionBuilder[])null);\n+ new FunctionScoreQueryBuilder((FunctionScoreQueryBuilder.FilterFunctionBuilder[]) null);\n fail(\"must not be null\");\n- } catch(IllegalArgumentException e) {\n+ } catch (IllegalArgumentException e) {\n //all good\n }\n \n try {\n new FunctionScoreQueryBuilder(null, ScoreFunctionBuilders.randomFunction(123));\n fail(\"must not be null\");\n- } catch(IllegalArgumentException e) {\n+ } catch (IllegalArgumentException e) {\n //all good\n }\n \n try {\n- new FunctionScoreQueryBuilder(new MatchAllQueryBuilder(), (ScoreFunctionBuilder)null);\n+ new FunctionScoreQueryBuilder(new MatchAllQueryBuilder(), (ScoreFunctionBuilder) null);\n fail(\"must not be null\");\n- } catch(IllegalArgumentException e) {\n+ } catch (IllegalArgumentException e) {\n //all good\n }\n \n try {\n- new FunctionScoreQueryBuilder(new MatchAllQueryBuilder(), (FunctionScoreQueryBuilder.FilterFunctionBuilder[])null);\n+ new FunctionScoreQueryBuilder(new MatchAllQueryBuilder(), (FunctionScoreQueryBuilder.FilterFunctionBuilder[]) null);\n fail(\"must not be null\");\n- } catch(IllegalArgumentException e) {\n+ } catch (IllegalArgumentException e) {\n //all good\n }\n \n try {\n new FunctionScoreQueryBuilder(null, new FunctionScoreQueryBuilder.FilterFunctionBuilder[0]);\n fail(\"must not be null\");\n- } catch(IllegalArgumentException e) {\n+ } catch (IllegalArgumentException e) {\n //all good\n }\n \n try {\n new FunctionScoreQueryBuilder(QueryBuilders.matchAllQuery(), new FunctionScoreQueryBuilder.FilterFunctionBuilder[]{null});\n fail(\"content of array must not be null\");\n- } catch(IllegalArgumentException e) {\n+ } catch (IllegalArgumentException e) {\n //all good\n }\n \n try {\n new FunctionScoreQueryBuilder.FilterFunctionBuilder(null);\n fail(\"must not be null\");\n- } catch(IllegalArgumentException e) {\n+ } catch (IllegalArgumentException e) {\n //all good\n }\n \n try {\n new FunctionScoreQueryBuilder.FilterFunctionBuilder(null, ScoreFunctionBuilders.randomFunction(123));\n fail(\"must not be null\");\n- } catch(IllegalArgumentException e) {\n+ } catch (IllegalArgumentException e) {\n //all good\n }\n 
\n try {\n new FunctionScoreQueryBuilder.FilterFunctionBuilder(new MatchAllQueryBuilder(), null);\n fail(\"must not be null\");\n- } catch(IllegalArgumentException e) {\n+ } catch (IllegalArgumentException e) {\n //all good\n }\n \n try {\n new FunctionScoreQueryBuilder(new MatchAllQueryBuilder()).scoreMode(null);\n fail(\"must not be null\");\n- } catch(IllegalArgumentException e) {\n+ } catch (IllegalArgumentException e) {\n //all good\n }\n \n try {\n new FunctionScoreQueryBuilder(new MatchAllQueryBuilder()).boostMode(null);\n fail(\"must not be null\");\n- } catch(IllegalArgumentException e) {\n+ } catch (IllegalArgumentException e) {\n //all good\n }\n }\n \n public void testParseFunctionsArray() throws IOException {\n String functionScoreQuery = \"{\\n\" +\n- \" \\\"function_score\\\":{\\n\" +\n- \" \\\"query\\\":{\\n\" +\n- \" \\\"term\\\":{\\n\" +\n- \" \\\"field1\\\":\\\"value1\\\"\\n\" +\n- \" }\\n\" +\n- \" },\\n\" +\n- \" \\\"functions\\\": [\\n\" +\n- \" {\\n\" +\n- \" \\\"random_score\\\": {\\n\" +\n- \" \\\"seed\\\":123456\\n\" +\n- \" },\\n\" +\n- \" \\\"weight\\\": 3,\\n\" +\n- \" \\\"filter\\\": {\\n\" +\n- \" \\\"term\\\":{\\n\" +\n- \" \\\"field2\\\":\\\"value2\\\"\\n\" +\n- \" }\\n\" +\n- \" }\\n\" +\n- \" },\\n\" +\n- \" {\\n\" +\n- \" \\\"filter\\\": {\\n\" +\n- \" \\\"term\\\":{\\n\" +\n- \" \\\"field3\\\":\\\"value3\\\"\\n\" +\n- \" }\\n\" +\n- \" },\\n\" +\n- \" \\\"weight\\\": 9\\n\" +\n- \" },\\n\" +\n- \" {\\n\" +\n- \" \\\"gauss\\\": {\\n\" +\n- \" \\\"field_name\\\": {\\n\" +\n- \" \\\"origin\\\":0.5,\\n\" +\n- \" \\\"scale\\\":0.6\\n\" +\n- \" }\\n\" +\n- \" }\\n\" +\n- \" }\\n\" +\n- \" ],\\n\" +\n- \" \\\"boost\\\" : 3,\\n\" +\n- \" \\\"score_mode\\\" : \\\"avg\\\",\\n\" +\n- \" \\\"boost_mode\\\" : \\\"replace\\\",\\n\" +\n- \" \\\"max_boost\\\" : 10\\n\" +\n- \" }\\n\" +\n- \"}\";\n+ \" \\\"function_score\\\":{\\n\" +\n+ \" \\\"query\\\":{\\n\" +\n+ \" \\\"term\\\":{\\n\" +\n+ \" \\\"field1\\\":\\\"value1\\\"\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \" \\\"functions\\\": [\\n\" +\n+ \" {\\n\" +\n+ \" \\\"random_score\\\": {\\n\" +\n+ \" \\\"seed\\\":123456\\n\" +\n+ \" },\\n\" +\n+ \" \\\"weight\\\": 3,\\n\" +\n+ \" \\\"filter\\\": {\\n\" +\n+ \" \\\"term\\\":{\\n\" +\n+ \" \\\"field2\\\":\\\"value2\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \" {\\n\" +\n+ \" \\\"filter\\\": {\\n\" +\n+ \" \\\"term\\\":{\\n\" +\n+ \" \\\"field3\\\":\\\"value3\\\"\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \" \\\"weight\\\": 9\\n\" +\n+ \" },\\n\" +\n+ \" {\\n\" +\n+ \" \\\"gauss\\\": {\\n\" +\n+ \" \\\"field_name\\\": {\\n\" +\n+ \" \\\"origin\\\":0.5,\\n\" +\n+ \" \\\"scale\\\":0.6\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" ],\\n\" +\n+ \" \\\"boost\\\" : 3,\\n\" +\n+ \" \\\"score_mode\\\" : \\\"avg\\\",\\n\" +\n+ \" \\\"boost_mode\\\" : \\\"replace\\\",\\n\" +\n+ \" \\\"max_boost\\\" : 10\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n \n QueryBuilder<?> queryBuilder = parseQuery(functionScoreQuery);\n //given that we copy part of the decay functions as bytes, we test that fromXContent and toXContent both work no matter what the initial format was\n@@ -368,31 +366,31 @@ public void testParseFunctionsArray() throws IOException {\n assertThat(functionScoreQueryBuilder.maxBoost(), equalTo(10f));\n \n if (i < XContentType.values().length) {\n- queryBuilder = parseQuery(((AbstractQueryBuilder<?>)queryBuilder).buildAsBytes(XContentType.values()[i]));\n+ queryBuilder = parseQuery(((AbstractQueryBuilder<?>) 
queryBuilder).buildAsBytes(XContentType.values()[i]));\n }\n }\n }\n \n public void testParseSingleFunction() throws IOException {\n String functionScoreQuery = \"{\\n\" +\n- \" \\\"function_score\\\":{\\n\" +\n- \" \\\"query\\\":{\\n\" +\n- \" \\\"term\\\":{\\n\" +\n- \" \\\"field1\\\":\\\"value1\\\"\\n\" +\n- \" }\\n\" +\n- \" },\\n\" +\n- \" \\\"gauss\\\": {\\n\" +\n- \" \\\"field_name\\\": {\\n\" +\n- \" \\\"origin\\\":0.5,\\n\" +\n- \" \\\"scale\\\":0.6\\n\" +\n- \" }\\n\" +\n- \" },\\n\" +\n- \" \\\"boost\\\" : 3,\\n\" +\n- \" \\\"score_mode\\\" : \\\"avg\\\",\\n\" +\n- \" \\\"boost_mode\\\" : \\\"replace\\\",\\n\" +\n- \" \\\"max_boost\\\" : 10\\n\" +\n- \" }\\n\" +\n- \"}\";\n+ \" \\\"function_score\\\":{\\n\" +\n+ \" \\\"query\\\":{\\n\" +\n+ \" \\\"term\\\":{\\n\" +\n+ \" \\\"field1\\\":\\\"value1\\\"\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \" \\\"gauss\\\": {\\n\" +\n+ \" \\\"field_name\\\": {\\n\" +\n+ \" \\\"origin\\\":0.5,\\n\" +\n+ \" \\\"scale\\\":0.6\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \" \\\"boost\\\" : 3,\\n\" +\n+ \" \\\"score_mode\\\" : \\\"avg\\\",\\n\" +\n+ \" \\\"boost_mode\\\" : \\\"replace\\\",\\n\" +\n+ \" \\\"max_boost\\\" : 10\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n \n QueryBuilder<?> queryBuilder = parseQuery(functionScoreQuery);\n //given that we copy part of the decay functions as bytes, we test that fromXContent and toXContent both work no matter what the initial format was\n@@ -415,95 +413,95 @@ public void testParseSingleFunction() throws IOException {\n assertThat(functionScoreQueryBuilder.maxBoost(), equalTo(10f));\n \n if (i < XContentType.values().length) {\n- queryBuilder = parseQuery(((AbstractQueryBuilder<?>)queryBuilder).buildAsBytes(XContentType.values()[i]));\n+ queryBuilder = parseQuery(((AbstractQueryBuilder<?>) queryBuilder).buildAsBytes(XContentType.values()[i]));\n }\n }\n }\n \n public void testProperErrorMessageWhenTwoFunctionsDefinedInQueryBody() throws IOException {\n //without a functions array, we support only a single function, weight can't be associated with the function either.\n String functionScoreQuery = \"{\\n\" +\n- \" \\\"function_score\\\": {\\n\" +\n- \" \\\"script_score\\\": {\\n\" +\n- \" \\\"script\\\": \\\"5\\\"\\n\" +\n- \" },\\n\" +\n- \" \\\"weight\\\": 2\\n\" +\n- \" }\\n\" +\n- \"}\";\n+ \" \\\"function_score\\\": {\\n\" +\n+ \" \\\"script_score\\\": {\\n\" +\n+ \" \\\"script\\\": \\\"5\\\"\\n\" +\n+ \" },\\n\" +\n+ \" \\\"weight\\\": 2\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n try {\n parseQuery(functionScoreQuery);\n fail(\"parsing should have failed\");\n- } catch(ParsingException e) {\n+ } catch (ParsingException e) {\n assertThat(e.getMessage(), containsString(\"use [functions] array if you want to define several functions.\"));\n }\n }\n \n public void testProperErrorMessageWhenTwoFunctionsDefinedInFunctionsArray() throws IOException {\n String functionScoreQuery = \"{\\n\" +\n- \" \\\"function_score\\\":{\\n\" +\n- \" \\\"functions\\\": [\\n\" +\n- \" {\\n\" +\n- \" \\\"random_score\\\": {\\n\" +\n- \" \\\"seed\\\":123456\\n\" +\n- \" },\\n\" +\n- \" \\\"weight\\\": 3,\\n\" +\n- \" \\\"script_score\\\": {\\n\" +\n- \" \\\"script\\\": \\\"_index['text']['foo'].tf()\\\"\\n\" +\n- \" },\\n\" +\n- \" \\\"filter\\\": {\\n\" +\n- \" \\\"term\\\":{\\n\" +\n- \" \\\"field2\\\":\\\"value2\\\"\\n\" +\n- \" }\\n\" +\n- \" }\\n\" +\n- \" }\\n\" +\n- \" ]\\n\" +\n- \" }\\n\" +\n- \"}\";\n+ \" \\\"function_score\\\":{\\n\" +\n+ \" \\\"functions\\\": [\\n\" +\n+ \" {\\n\" +\n+ \" \\\"random_score\\\": {\\n\" +\n+ \" 
\\\"seed\\\":123456\\n\" +\n+ \" },\\n\" +\n+ \" \\\"weight\\\": 3,\\n\" +\n+ \" \\\"script_score\\\": {\\n\" +\n+ \" \\\"script\\\": \\\"_index['text']['foo'].tf()\\\"\\n\" +\n+ \" },\\n\" +\n+ \" \\\"filter\\\": {\\n\" +\n+ \" \\\"term\\\":{\\n\" +\n+ \" \\\"field2\\\":\\\"value2\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" ]\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n \n try {\n parseQuery(functionScoreQuery);\n fail(\"parsing should have failed\");\n- } catch(ParsingException e) {\n+ } catch (ParsingException e) {\n assertThat(e.getMessage(), containsString(\"failed to parse function_score functions. already found [random_score], now encountering [script_score].\"));\n }\n }\n \n public void testProperErrorMessageWhenMissingFunction() throws IOException {\n String functionScoreQuery = \"{\\n\" +\n- \" \\\"function_score\\\":{\\n\" +\n- \" \\\"functions\\\": [\\n\" +\n- \" {\\n\" +\n- \" \\\"filter\\\": {\\n\" +\n- \" \\\"term\\\":{\\n\" +\n- \" \\\"field2\\\":\\\"value2\\\"\\n\" +\n- \" }\\n\" +\n- \" }\\n\" +\n- \" }\\n\" +\n- \" ]\\n\" +\n- \" }\\n\" +\n- \"}\";\n+ \" \\\"function_score\\\":{\\n\" +\n+ \" \\\"functions\\\": [\\n\" +\n+ \" {\\n\" +\n+ \" \\\"filter\\\": {\\n\" +\n+ \" \\\"term\\\":{\\n\" +\n+ \" \\\"field2\\\":\\\"value2\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" ]\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n try {\n parseQuery(functionScoreQuery);\n fail(\"parsing should have failed\");\n- } catch(ParsingException e) {\n+ } catch (ParsingException e) {\n assertThat(e.getMessage(), containsString(\"an entry in functions list is missing a function.\"));\n }\n }\n \n public void testWeight1fStillProducesWeightFunction() throws IOException {\n assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n String queryString = jsonBuilder().startObject()\n- .startObject(\"function_score\")\n- .startArray(\"functions\")\n- .startObject()\n- .startObject(\"field_value_factor\")\n- .field(\"field\", INT_FIELD_NAME)\n- .endObject()\n- .field(\"weight\", 1.0)\n- .endObject()\n- .endArray()\n- .endObject()\n- .endObject().string();\n+ .startObject(\"function_score\")\n+ .startArray(\"functions\")\n+ .startObject()\n+ .startObject(\"field_value_factor\")\n+ .field(\"field\", INT_FIELD_NAME)\n+ .endObject()\n+ .field(\"weight\", 1.0)\n+ .endObject()\n+ .endArray()\n+ .endObject()\n+ .endObject().string();\n QueryBuilder<?> query = parseQuery(queryString);\n assertThat(query, instanceOf(FunctionScoreQueryBuilder.class));\n FunctionScoreQueryBuilder functionScoreQueryBuilder = (FunctionScoreQueryBuilder) query;\n@@ -526,23 +524,23 @@ public void testWeight1fStillProducesWeightFunction() throws IOException {\n \n public void testProperErrorMessagesForMisplacedWeightsAndFunctions() throws IOException {\n String query = jsonBuilder().startObject().startObject(\"function_score\")\n- .startArray(\"functions\")\n- .startObject().startObject(\"script_score\").field(\"script\", \"3\").endObject().endObject()\n- .endArray()\n- .field(\"weight\", 2)\n- .endObject().endObject().string();\n+ .startArray(\"functions\")\n+ .startObject().startObject(\"script_score\").field(\"script\", \"3\").endObject().endObject()\n+ .endArray()\n+ .field(\"weight\", 2)\n+ .endObject().endObject().string();\n try {\n parseQuery(query);\n fail(\"Expect exception here because array of functions and one weight in body is not allowed.\");\n } catch (ParsingException e) {\n assertThat(e.getMessage(), containsString(\"you can either define [functions] array or a 
single function, not both. already found [functions] array, now encountering [weight].\"));\n }\n query = jsonBuilder().startObject().startObject(\"function_score\")\n- .field(\"weight\", 2)\n- .startArray(\"functions\")\n- .startObject().endObject()\n- .endArray()\n- .endObject().endObject().string();\n+ .field(\"weight\", 2)\n+ .startArray(\"functions\")\n+ .startObject().endObject()\n+ .endArray()\n+ .endObject().endObject().string();\n try {\n parseQuery(query);\n fail(\"Expect exception here because array of functions and one weight in body is not allowed.\");\n@@ -552,8 +550,22 @@ public void testProperErrorMessagesForMisplacedWeightsAndFunctions() throws IOEx\n }\n \n public void testMalformedThrowsException() throws IOException {\n+ String json = \"{\\n\" +\n+ \" \\\"function_score\\\":{\\n\" +\n+ \" \\\"query\\\":{\\n\" +\n+ \" \\\"term\\\":{\\n\" +\n+ \" \\\"name.last\\\":\\\"banon\\\"\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \" \\\"functions\\\": [\\n\" +\n+ \" {\\n\" +\n+ \" {\\n\" +\n+ \" }\\n\" +\n+ \" ]\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n try {\n- parseQuery(copyToStringFromClasspath(\"/org/elasticsearch/index/query/faulty-function-score-query.json\"));\n+ parseQuery(json);\n fail(\"Expected JsonParseException\");\n } catch (JsonParseException e) {\n assertThat(e.getMessage(), containsString(\"Unexpected character ('{\"));\n@@ -579,31 +591,31 @@ public void testCustomWeightFactorQueryBuilderWithFunctionScoreWithoutQueryGiven\n public void testFieldValueFactorFactorArray() throws IOException {\n // don't permit an array of factors\n String querySource = \"{\" +\n- \" \\\"function_score\\\": {\" +\n- \" \\\"query\\\": {\" +\n- \" \\\"match\\\": {\\\"name\\\": \\\"foo\\\"}\" +\n- \" },\" +\n- \" \\\"functions\\\": [\" +\n- \" {\" +\n- \" \\\"field_value_factor\\\": {\" +\n- \" \\\"field\\\": \\\"test\\\",\" +\n- \" \\\"factor\\\": [1.2,2]\" +\n- \" }\" +\n- \" }\" +\n- \" ]\" +\n- \" }\" +\n- \"}\";\n+ \" \\\"function_score\\\": {\" +\n+ \" \\\"query\\\": {\" +\n+ \" \\\"match\\\": {\\\"name\\\": \\\"foo\\\"}\" +\n+ \" },\" +\n+ \" \\\"functions\\\": [\" +\n+ \" {\" +\n+ \" \\\"field_value_factor\\\": {\" +\n+ \" \\\"field\\\": \\\"test\\\",\" +\n+ \" \\\"factor\\\": [1.2,2]\" +\n+ \" }\" +\n+ \" }\" +\n+ \" ]\" +\n+ \" }\" +\n+ \"}\";\n try {\n parseQuery(querySource);\n fail(\"parsing should have failed\");\n- } catch(ParsingException e) {\n+ } catch (ParsingException e) {\n assertThat(e.getMessage(), containsString(\"[field_value_factor] field 'factor' does not support lists or objects\"));\n }\n }\n \n public void testFromJson() throws IOException {\n String json =\n- \"{\\n\" +\n+ \"{\\n\" +\n \" \\\"function_score\\\" : {\\n\" +\n \" \\\"query\\\" : { },\\n\" +\n \" \\\"functions\\\" : [ {\\n\" +\n@@ -630,4 +642,79 @@ public void testFromJson() throws IOException {\n assertEquals(json, 100, parsed.maxBoost(), 0.00001);\n assertEquals(json, 1, parsed.getMinScore(), 0.0001);\n }\n+\n+ public void testQueryMalformedArrayNotSupported() throws IOException {\n+ String json =\n+ \"{\\n\" +\n+ \" \\\"function_score\\\" : {\\n\" +\n+ \" \\\"not_supported\\\" : []\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+\n+ try {\n+ parseQuery(json);\n+ fail(\"parse should have failed\");\n+ } catch (ParsingException e) {\n+ assertThat(e.getMessage(), containsString(\"array [not_supported] is not supported\"));\n+ }\n+ }\n+\n+ public void testQueryMalformedFieldNotSupported() throws IOException {\n+ String json =\n+ \"{\\n\" +\n+ \" \\\"function_score\\\" : {\\n\" +\n+ \" \\\"not_supported\\\" : 
\\\"value\\\"\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+\n+ try {\n+ parseQuery(json);\n+ fail(\"parse should have failed\");\n+ } catch (ParsingException e) {\n+ assertThat(e.getMessage(), containsString(\"field [not_supported] is not supported\"));\n+ }\n+ }\n+\n+ public void testMalformedQueryFunctionFieldNotSupported() throws IOException {\n+ String json =\n+ \"{\\n\" +\n+ \" \\\"function_score\\\" : {\\n\" +\n+ \" \\\"functions\\\" : [ {\\n\" +\n+ \" \\\"not_supported\\\" : 23.0\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+\n+ try {\n+ parseQuery(json);\n+ fail(\"parse should have failed\");\n+ } catch (ParsingException e) {\n+ assertThat(e.getMessage(), containsString(\"field [not_supported] is not supported\"));\n+ }\n+ }\n+\n+ public void testMalformedQuery() throws IOException {\n+ //verify that an error is thrown rather than setting the query twice (https://github.com/elastic/elasticsearch/issues/16583)\n+ String json =\n+ \"{\\n\" +\n+ \" \\\"function_score\\\":{\\n\" +\n+ \" \\\"query\\\":{\\n\" +\n+ \" \\\"bool\\\":{\\n\" +\n+ \" \\\"must\\\":{\\\"match\\\":{\\\"field\\\":\\\"value\\\"}}\" +\n+ \" },\\n\" +\n+ \" \\\"ignored_field_name\\\": {\\n\" +\n+ \" {\\\"match\\\":{\\\"field\\\":\\\"value\\\"}}\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+\n+ try {\n+ parseQuery(json);\n+ fail(\"parse should have failed\");\n+ } catch(ParsingException e) {\n+ assertThat(e.getMessage(), containsString(\"[query] is already defined.\"));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -368,7 +368,7 @@ private void assertParseSearchSource(SearchSourceBuilder testBuilder, BytesRefer\n parser.nextToken(); // sometimes we move it on the START_OBJECT to test the embedded case\n }\n SearchSourceBuilder newBuilder = SearchSourceBuilder.parseSearchSource(parser, parseContext);\n- assertNotSame(testBuilder, newBuilder);\n+ assertNull(parser.nextToken());\n assertEquals(testBuilder, newBuilder);\n assertEquals(testBuilder.hashCode(), newBuilder.hashCode());\n }", "filename": "core/src/test/java/org/elasticsearch/search/builder/SearchSourceBuilderTests.java", "status": "modified" } ] }
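The SearchSourceBuilderTests change just above swaps an identity assertion for `assertNull(parser.nextToken())`, i.e. the test now also verifies that parsing consumed the entire source. Below is a minimal stand-alone sketch of that pattern, using Jackson's `JsonParser` directly rather than the XContent wrappers the real test goes through; the input string and class name are made up for illustration.

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;

// Sketch only: parse exactly one top-level object, then assert nothing is left unread.
public class ConsumedAllTokensSketch {
    public static void main(String[] args) throws Exception {
        String source = "{\"query\":{\"match_all\":{}},\"size\":10}";
        try (JsonParser parser = new JsonFactory().createParser(source)) {
            // Consume one complete object, the way a fromXContent-style method would.
            int depth = 0;
            do {
                JsonToken token = parser.nextToken();
                if (token == JsonToken.START_OBJECT || token == JsonToken.START_ARRAY) {
                    depth++;
                } else if (token == JsonToken.END_OBJECT || token == JsonToken.END_ARRAY) {
                    depth--;
                }
            } while (depth > 0);

            // Equivalent of assertNull(parser.nextToken()): no trailing content after the object.
            System.out.println("trailing token: " + parser.nextToken()); // prints: trailing token: null
        }
    }
}
```

If the builder had stopped early, the trailing `nextToken()` call would return the first unread token instead of null, which is what the tightened assertion is meant to catch.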
{ "body": "The issue is: https://github.com/ywelsch/elasticsearch/blob/bef0bedba9e4d3417205eee7a0601eaf9763a831/core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java#L289\n\nWhich should be `aliasAction.aliases == null || aliasAction.aliases.length == 0`. \n\nNow for the following request:\n\n```\nPOST /_aliases\n{\n \"actions\": [\n {\n \"add\": {\n \"index\": \"test_index\",\n \"alias\": null,\n \"fdasdilter\": {\n \"termssse\": {\n \"tagss\": \"customerconsent\"\n }\n }\n }\n }\n ]\n}\n```\n\nWe get the following errors:\n\n```\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"null_pointer_exception\",\n \"reason\": null\n }\n ],\n \"type\": \"null_pointer_exception\",\n \"reason\": null\n },\n \"status\": 500\n}\n```\n\n```\n[2016-02-09 15:56:02,492][INFO ][rest.suppressed ] /_aliases Params: {}\njava.lang.NullPointerException\n at org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.validate(IndicesAliasesRequest.java:289)\n at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:62)\n at org.elasticsearch.client.node.\n```\n", "comments": [ { "body": "This was introduced in #15305\n", "created_at": "2016-02-09T18:06:24Z" } ], "number": 16549, "title": "NullPointerException when ALIAS is missing" }
{ "body": "Fixes 2 issues with the REST `_aliases` endpoint:\n\n`POST /_aliases` ignores the `filter` when filtered aliases are created: Closes #16549\n`POST /_aliases` throws NullPointerException instead of validation error with proper error message when `alias` is specified as `null`: Closes #16547\n", "number": 16553, "review_comments": [], "title": "Fix _aliases filter and null parameters" }
{ "commits": [ { "message": "Fix filters and null parameters in _aliases command\n\nCloses #16549\nCloses #16547" } ], "files": [ { "diff": "@@ -286,24 +286,25 @@ public ActionRequestValidationException validate() {\n return addValidationError(\"Must specify at least one alias action\", validationException);\n }\n for (AliasActions aliasAction : allAliasActions) {\n- if (aliasAction.aliases.length == 0) {\n+ if (CollectionUtils.isEmpty(aliasAction.aliases)) {\n validationException = addValidationError(\"Alias action [\" + aliasAction.actionType().name().toLowerCase(Locale.ENGLISH)\n- + \"]: aliases may not be empty\", validationException);\n- }\n- for (String alias : aliasAction.aliases) {\n- if (!Strings.hasText(alias)) {\n- validationException = addValidationError(\"Alias action [\" + aliasAction.actionType().name().toLowerCase(Locale.ENGLISH)\n- + \"]: [alias] may not be empty string\", validationException);\n+ + \"]: Property [alias/aliases] is either missing or null\", validationException);\n+ } else {\n+ for (String alias : aliasAction.aliases) {\n+ if (!Strings.hasText(alias)) {\n+ validationException = addValidationError(\"Alias action [\" + aliasAction.actionType().name().toLowerCase(Locale.ENGLISH)\n+ + \"]: [alias/aliases] may not be empty string\", validationException);\n+ }\n }\n }\n if (CollectionUtils.isEmpty(aliasAction.indices)) {\n validationException = addValidationError(\"Alias action [\" + aliasAction.actionType().name().toLowerCase(Locale.ENGLISH)\n- + \"]: Property [index] was either missing or null\", validationException);\n+ + \"]: Property [index/indices] is either missing or null\", validationException);\n } else {\n for (String index : aliasAction.indices) {\n if (!Strings.hasText(index)) {\n validationException = addValidationError(\"Alias action [\" + aliasAction.actionType().name().toLowerCase(Locale.ENGLISH)\n- + \"]: [index] may not be empty string\", validationException);\n+ + \"]: [index/indices] may not be empty string\", validationException);\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java", "status": "modified" }, { "diff": "@@ -133,7 +133,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n }\n \n if (type == AliasAction.Type.ADD) {\n- AliasActions aliasActions = new AliasActions(type, indices, aliases);\n+ AliasActions aliasActions = new AliasActions(type, indices, aliases).filter(filter);\n if (routingSet) {\n aliasActions.routing(routing);\n }", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/indices/alias/RestIndicesAliasesAction.java", "status": "modified" }, { "diff": "@@ -21,6 +21,8 @@\n \n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.admin.indices.alias.Alias;\n+import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;\n+import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions;\n import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequestBuilder;\n import org.elasticsearch.action.admin.indices.alias.exists.AliasesExistResponse;\n import org.elasticsearch.action.admin.indices.alias.get.GetAliasesResponse;\n@@ -54,6 +56,8 @@\n \n import static org.elasticsearch.client.Requests.createIndexRequest;\n import static org.elasticsearch.client.Requests.indexRequest;\n+import static org.elasticsearch.cluster.metadata.AliasAction.Type.ADD;\n+import static org.elasticsearch.cluster.metadata.AliasAction.Type.REMOVE;\n 
import static org.elasticsearch.cluster.metadata.IndexMetaData.INDEX_METADATA_BLOCK;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.INDEX_READ_ONLY_BLOCK;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_BLOCKS_METADATA;\n@@ -588,7 +592,7 @@ public void testIndicesGetAliases() throws Exception {\n .addAlias(\"foobar\", \"foo\"));\n \n assertAcked(admin().indices().prepareAliases()\n- .addAliasAction(new AliasAction(AliasAction.Type.ADD, \"foobar\", \"bac\").routing(\"bla\")));\n+ .addAliasAction(new AliasAction(ADD, \"foobar\", \"bac\").routing(\"bla\")));\n \n logger.info(\"--> getting bar and baz for index bazbar\");\n getResponse = admin().indices().prepareGetAliases(\"bar\", \"bac\").addIndices(\"bazbar\").get();\n@@ -724,8 +728,8 @@ public void testAddAliasNullWithoutExistingIndices() {\n assertAcked(admin().indices().prepareAliases().addAliasAction(AliasAction.newAddAliasAction(null, \"alias1\")));\n fail(\"create alias should have failed due to null index\");\n } catch (IllegalArgumentException e) {\n- assertThat(\"Exception text does not contain \\\"Alias action [add]: [index] may not be empty string\\\"\",\n- e.getMessage(), containsString(\"Alias action [add]: [index] may not be empty string\"));\n+ assertThat(\"Exception text does not contain \\\"Alias action [add]: [index/indices] may not be empty string\\\"\",\n+ e.getMessage(), containsString(\"Alias action [add]: [index/indices] may not be empty string\"));\n }\n }\n \n@@ -740,8 +744,8 @@ public void testAddAliasNullWithExistingIndices() throws Exception {\n assertAcked(admin().indices().prepareAliases().addAlias((String) null, \"empty-alias\"));\n fail(\"create alias should have failed due to null index\");\n } catch (IllegalArgumentException e) {\n- assertThat(\"Exception text does not contain \\\"Alias action [add]: [index] may not be empty string\\\"\",\n- e.getMessage(), containsString(\"Alias action [add]: [index] may not be empty string\"));\n+ assertThat(\"Exception text does not contain \\\"Alias action [add]: [index/indices] may not be empty string\\\"\",\n+ e.getMessage(), containsString(\"Alias action [add]: [index/indices] may not be empty string\"));\n }\n }\n \n@@ -750,7 +754,13 @@ public void testAddAliasEmptyIndex() {\n admin().indices().prepareAliases().addAliasAction(AliasAction.newAddAliasAction(\"\", \"alias1\")).get();\n fail(\"Expected ActionRequestValidationException\");\n } catch (ActionRequestValidationException e) {\n- assertThat(e.getMessage(), containsString(\"[index] may not be empty string\"));\n+ assertThat(e.getMessage(), containsString(\"[index/indices] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(ADD, \"\", \"alias1\")).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[index/indices] may not be empty string\"));\n }\n }\n \n@@ -759,7 +769,19 @@ public void testAddAliasNullAlias() {\n admin().indices().prepareAliases().addAliasAction(AliasAction.newAddAliasAction(\"index1\", null)).get();\n fail(\"Expected ActionRequestValidationException\");\n } catch (ActionRequestValidationException e) {\n- assertThat(e.getMessage(), containsString(\"[alias] may not be empty string\"));\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(ADD, \"index1\", 
(String)null)).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(ADD, \"index1\", (String[])null)).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] is either missing or null\"));\n }\n }\n \n@@ -768,7 +790,13 @@ public void testAddAliasEmptyAlias() {\n admin().indices().prepareAliases().addAliasAction(AliasAction.newAddAliasAction(\"index1\", \"\")).get();\n fail(\"Expected ActionRequestValidationException\");\n } catch (ActionRequestValidationException e) {\n- assertThat(e.getMessage(), containsString(\"[alias] may not be empty string\"));\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(ADD, \"index1\", \"\")).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n }\n }\n \n@@ -780,6 +808,13 @@ public void testAddAliasNullAliasNullIndex() {\n assertThat(e.validationErrors(), notNullValue());\n assertThat(e.validationErrors().size(), equalTo(2));\n }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(ADD, null, (String)null)).get();\n+ fail(\"Should throw \" + ActionRequestValidationException.class.getSimpleName());\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.validationErrors(), notNullValue());\n+ assertThat(e.validationErrors().size(), equalTo(2));\n+ }\n }\n \n public void testAddAliasEmptyAliasEmptyIndex() {\n@@ -790,14 +825,27 @@ public void testAddAliasEmptyAliasEmptyIndex() {\n assertThat(e.validationErrors(), notNullValue());\n assertThat(e.validationErrors().size(), equalTo(2));\n }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(ADD, \"\", \"\")).get();\n+ fail(\"Should throw \" + ActionRequestValidationException.class.getSimpleName());\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.validationErrors(), notNullValue());\n+ assertThat(e.validationErrors().size(), equalTo(2));\n+ }\n }\n \n public void testRemoveAliasNullIndex() {\n try {\n admin().indices().prepareAliases().addAliasAction(AliasAction.newRemoveAliasAction(null, \"alias1\")).get();\n fail(\"Expected ActionRequestValidationException\");\n } catch (ActionRequestValidationException e) {\n- assertThat(e.getMessage(), containsString(\"[index] may not be empty string\"));\n+ assertThat(e.getMessage(), containsString(\"[index/indices] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, null, \"alias1\")).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[index/indices] may not be empty string\"));\n }\n }\n \n@@ -806,7 +854,13 @@ public void testRemoveAliasEmptyIndex() {\n admin().indices().prepareAliases().addAliasAction(AliasAction.newRemoveAliasAction(\"\", \"alias1\")).get();\n fail(\"Expected ActionRequestValidationException\");\n } catch (ActionRequestValidationException e) {\n- assertThat(e.getMessage(), 
containsString(\"[index] may not be empty string\"));\n+ assertThat(e.getMessage(), containsString(\"[index/indices] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, \"\", \"alias1\")).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[index/indices] may not be empty string\"));\n }\n }\n \n@@ -815,7 +869,19 @@ public void testRemoveAliasNullAlias() {\n admin().indices().prepareAliases().addAliasAction(AliasAction.newRemoveAliasAction(\"index1\", null)).get();\n fail(\"Expected ActionRequestValidationException\");\n } catch (ActionRequestValidationException e) {\n- assertThat(e.getMessage(), containsString(\"[alias] may not be empty string\"));\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, \"index1\", (String)null)).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, \"index1\", (String[])null)).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] is either missing or null\"));\n }\n }\n \n@@ -824,7 +890,13 @@ public void testRemoveAliasEmptyAlias() {\n admin().indices().prepareAliases().addAliasAction(AliasAction.newRemoveAliasAction(\"index1\", \"\")).get();\n fail(\"Expected ActionRequestValidationException\");\n } catch (ActionRequestValidationException e) {\n- assertThat(e.getMessage(), containsString(\"[alias] may not be empty string\"));\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, \"index1\", \"\")).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n }\n }\n \n@@ -836,6 +908,20 @@ public void testRemoveAliasNullAliasNullIndex() {\n assertThat(e.validationErrors(), notNullValue());\n assertThat(e.validationErrors().size(), equalTo(2));\n }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, null, (String)null)).get();\n+ fail(\"Should throw \" + ActionRequestValidationException.class.getSimpleName());\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.validationErrors(), notNullValue());\n+ assertThat(e.validationErrors().size(), equalTo(2));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, (String[])null, (String[])null)).get();\n+ fail(\"Should throw \" + ActionRequestValidationException.class.getSimpleName());\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.validationErrors(), notNullValue());\n+ assertThat(e.validationErrors().size(), equalTo(2));\n+ }\n }\n \n public void testRemoveAliasEmptyAliasEmptyIndex() {\n@@ -846,6 +932,13 @@ public void testRemoveAliasEmptyAliasEmptyIndex() {\n assertThat(e.validationErrors(), notNullValue());\n assertThat(e.validationErrors().size(), equalTo(2));\n }\n+ try {\n+ 
admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, \"\", \"\")).get();\n+ fail(\"Should throw \" + ActionRequestValidationException.class.getSimpleName());\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.validationErrors(), notNullValue());\n+ assertThat(e.validationErrors().size(), equalTo(2));\n+ }\n }\n \n public void testGetAllAliasesWorks() {", "filename": "core/src/test/java/org/elasticsearch/aliases/IndexAliasesIT.java", "status": "modified" }, { "diff": "@@ -19,6 +19,9 @@\n index: test_index\n alias: test_alias\n routing: routing_value\n+ filter:\n+ ids:\n+ values: [\"1\", \"2\", \"3\"]\n \n - do:\n indices.exists_alias:\n@@ -31,7 +34,7 @@\n index: test_index\n name: test_alias\n \n- - match: {test_index.aliases.test_alias: {'index_routing': 'routing_value', 'search_routing': 'routing_value'}}\n+ - match: {test_index.aliases.test_alias: {filter: { ids : { values: [\"1\", \"2\", \"3\"]}}, 'index_routing': 'routing_value', 'search_routing': 'routing_value'}}\n \n ---\n \"Basic test for multiple aliases\":", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.update_aliases/10_basic.yaml", "status": "modified" } ] }
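The validation change in the diff above boils down to replacing a direct `aliasAction.aliases.length == 0` check, which dereferences a possibly-null array, with `CollectionUtils.isEmpty`. The following is a minimal sketch of that null-safe guard; it assumes only the null-or-empty semantics the fix relies on and is not the actual Elasticsearch `CollectionUtils` source.

```java
// Sketch of the null-safe emptiness check; assumes isEmpty(array) treats a null reference
// and a zero-length array the same way.
public final class IsEmptySketch {

    static boolean isEmpty(String[] values) {
        return values == null || values.length == 0;
    }

    public static void main(String[] args) {
        String[] nullAliases = null;          // "alias": null in the request body
        String[] noAliases = new String[0];   // "aliases": [] in the request body
        String[] oneAlias = {"test_alias"};

        // The old code called nullAliases.length directly and hit the NullPointerException
        // from the issue; the guarded check turns it into a validation error instead.
        System.out.println(isEmpty(nullAliases)); // true
        System.out.println(isEmpty(noAliases));   // true
        System.out.println(isEmpty(oneAlias));    // false
    }
}
```

With the guard in place, the request from the issue gets the "Property [alias/aliases] is either missing or null" validation message rather than the 500 response shown above.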
{ "body": "This started to occur in ES v2.2.0. Easy steps to repro:\n1. Crete and index\n2. Put an alias\n3. Display the alias\n4. Check that the filter is not presented\n\n```\n\nPUT test_index\n{\n \"mappings\": {\n \"document_type\": {\n \"properties\": {\n \"tags\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\",\n \"fields\": {\n \"analyzed\": {\n \"type\": \"string\"\n }\n },\n \"ignore_above\": 256\n }\n }\n }\n }\n}\n\nPOST /_aliases\n{\n \"actions\": [\n {\n \"add\": {\n \"index\": \"test_in*\", <----- with wildcards\n \"alias\": \"test_index_filter\",\n \"filter\": {\n \"term\": {\n \"tags\": \"customerconsent\"\n }\n }\n }\n }\n ]\n}\n\nOR\n\nPOST /_aliases\n{\n \"actions\": [\n {\n \"add\": {\n \"index\": \"test_index\", <------ without wildcards \n \"alias\": \"test_index_filter\",\n \"filter\": {\n \"term\": {\n \"tags\": \"customerconsent\"\n }\n }\n }\n }\n ]\n}\n\nGET _aliases\n```\n\nResult\n\n```\n{\n \"test_index\": {\n \"aliases\": {\n \"test_index_filter\": {}\n }\n }\n}\n```\n", "comments": [ { "body": "Creating an index and an alias at the same time work though:\n\n```\nPUT test_index\n{\n \"mappings\": {\n \"document_type\": {\n \"properties\": {\n \"tags\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\",\n \"fields\": {\n \"analyzed\": {\n \"type\": \"string\"\n }\n },\n \"ignore_above\": 256\n }\n }\n }\n },\n \"aliases\": {\n \"test_index_filter\": {\n \"filter\": {\n \"term\": {\n \"tags\": \"customerconsent\"\n }\n }\n }\n }\n}\n```\n\n```\n \"test_index\": {\n \"aliases\": {\n \"test_index_filter\": {\n \"filter\": {\n \"term\": {\n \"tags\": \"customerconsent\"\n }\n }\n }\n }\n }\n```\n", "created_at": "2016-02-09T17:48:30Z" }, { "body": "The bug is here https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/rest/action/admin/indices/alias/RestIndicesAliasesAction.java#L136\n\nNow the code looks like:\n\n```\nAliasActions aliasActions = new AliasActions(type, indices, aliases);\n```\n\nPreviously it was:\n\n```\nAliasAction aliasAction = newAddAliasAction(index, alias).filter(filter);\n```\n\nThe `.filter(filter)` is missing!\n", "created_at": "2016-02-09T18:15:30Z" } ], "number": 16547, "title": "Filtered Aliases don't work" }
{ "body": "Fixes 2 issues with the REST `_aliases` endpoint:\n\n`POST /_aliases` ignores the `filter` when filtered aliases are created: Closes #16549\n`POST /_aliases` throws NullPointerException instead of validation error with proper error message when `alias` is specified as `null`: Closes #16547\n", "number": 16553, "review_comments": [], "title": "Fix _aliases filter and null parameters" }
{ "commits": [ { "message": "Fix filters and null parameters in _aliases command\n\nCloses #16549\nCloses #16547" } ], "files": [ { "diff": "@@ -286,24 +286,25 @@ public ActionRequestValidationException validate() {\n return addValidationError(\"Must specify at least one alias action\", validationException);\n }\n for (AliasActions aliasAction : allAliasActions) {\n- if (aliasAction.aliases.length == 0) {\n+ if (CollectionUtils.isEmpty(aliasAction.aliases)) {\n validationException = addValidationError(\"Alias action [\" + aliasAction.actionType().name().toLowerCase(Locale.ENGLISH)\n- + \"]: aliases may not be empty\", validationException);\n- }\n- for (String alias : aliasAction.aliases) {\n- if (!Strings.hasText(alias)) {\n- validationException = addValidationError(\"Alias action [\" + aliasAction.actionType().name().toLowerCase(Locale.ENGLISH)\n- + \"]: [alias] may not be empty string\", validationException);\n+ + \"]: Property [alias/aliases] is either missing or null\", validationException);\n+ } else {\n+ for (String alias : aliasAction.aliases) {\n+ if (!Strings.hasText(alias)) {\n+ validationException = addValidationError(\"Alias action [\" + aliasAction.actionType().name().toLowerCase(Locale.ENGLISH)\n+ + \"]: [alias/aliases] may not be empty string\", validationException);\n+ }\n }\n }\n if (CollectionUtils.isEmpty(aliasAction.indices)) {\n validationException = addValidationError(\"Alias action [\" + aliasAction.actionType().name().toLowerCase(Locale.ENGLISH)\n- + \"]: Property [index] was either missing or null\", validationException);\n+ + \"]: Property [index/indices] is either missing or null\", validationException);\n } else {\n for (String index : aliasAction.indices) {\n if (!Strings.hasText(index)) {\n validationException = addValidationError(\"Alias action [\" + aliasAction.actionType().name().toLowerCase(Locale.ENGLISH)\n- + \"]: [index] may not be empty string\", validationException);\n+ + \"]: [index/indices] may not be empty string\", validationException);\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java", "status": "modified" }, { "diff": "@@ -133,7 +133,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n }\n \n if (type == AliasAction.Type.ADD) {\n- AliasActions aliasActions = new AliasActions(type, indices, aliases);\n+ AliasActions aliasActions = new AliasActions(type, indices, aliases).filter(filter);\n if (routingSet) {\n aliasActions.routing(routing);\n }", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/indices/alias/RestIndicesAliasesAction.java", "status": "modified" }, { "diff": "@@ -21,6 +21,8 @@\n \n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.admin.indices.alias.Alias;\n+import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;\n+import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions;\n import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequestBuilder;\n import org.elasticsearch.action.admin.indices.alias.exists.AliasesExistResponse;\n import org.elasticsearch.action.admin.indices.alias.get.GetAliasesResponse;\n@@ -54,6 +56,8 @@\n \n import static org.elasticsearch.client.Requests.createIndexRequest;\n import static org.elasticsearch.client.Requests.indexRequest;\n+import static org.elasticsearch.cluster.metadata.AliasAction.Type.ADD;\n+import static org.elasticsearch.cluster.metadata.AliasAction.Type.REMOVE;\n 
import static org.elasticsearch.cluster.metadata.IndexMetaData.INDEX_METADATA_BLOCK;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.INDEX_READ_ONLY_BLOCK;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_BLOCKS_METADATA;\n@@ -588,7 +592,7 @@ public void testIndicesGetAliases() throws Exception {\n .addAlias(\"foobar\", \"foo\"));\n \n assertAcked(admin().indices().prepareAliases()\n- .addAliasAction(new AliasAction(AliasAction.Type.ADD, \"foobar\", \"bac\").routing(\"bla\")));\n+ .addAliasAction(new AliasAction(ADD, \"foobar\", \"bac\").routing(\"bla\")));\n \n logger.info(\"--> getting bar and baz for index bazbar\");\n getResponse = admin().indices().prepareGetAliases(\"bar\", \"bac\").addIndices(\"bazbar\").get();\n@@ -724,8 +728,8 @@ public void testAddAliasNullWithoutExistingIndices() {\n assertAcked(admin().indices().prepareAliases().addAliasAction(AliasAction.newAddAliasAction(null, \"alias1\")));\n fail(\"create alias should have failed due to null index\");\n } catch (IllegalArgumentException e) {\n- assertThat(\"Exception text does not contain \\\"Alias action [add]: [index] may not be empty string\\\"\",\n- e.getMessage(), containsString(\"Alias action [add]: [index] may not be empty string\"));\n+ assertThat(\"Exception text does not contain \\\"Alias action [add]: [index/indices] may not be empty string\\\"\",\n+ e.getMessage(), containsString(\"Alias action [add]: [index/indices] may not be empty string\"));\n }\n }\n \n@@ -740,8 +744,8 @@ public void testAddAliasNullWithExistingIndices() throws Exception {\n assertAcked(admin().indices().prepareAliases().addAlias((String) null, \"empty-alias\"));\n fail(\"create alias should have failed due to null index\");\n } catch (IllegalArgumentException e) {\n- assertThat(\"Exception text does not contain \\\"Alias action [add]: [index] may not be empty string\\\"\",\n- e.getMessage(), containsString(\"Alias action [add]: [index] may not be empty string\"));\n+ assertThat(\"Exception text does not contain \\\"Alias action [add]: [index/indices] may not be empty string\\\"\",\n+ e.getMessage(), containsString(\"Alias action [add]: [index/indices] may not be empty string\"));\n }\n }\n \n@@ -750,7 +754,13 @@ public void testAddAliasEmptyIndex() {\n admin().indices().prepareAliases().addAliasAction(AliasAction.newAddAliasAction(\"\", \"alias1\")).get();\n fail(\"Expected ActionRequestValidationException\");\n } catch (ActionRequestValidationException e) {\n- assertThat(e.getMessage(), containsString(\"[index] may not be empty string\"));\n+ assertThat(e.getMessage(), containsString(\"[index/indices] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(ADD, \"\", \"alias1\")).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[index/indices] may not be empty string\"));\n }\n }\n \n@@ -759,7 +769,19 @@ public void testAddAliasNullAlias() {\n admin().indices().prepareAliases().addAliasAction(AliasAction.newAddAliasAction(\"index1\", null)).get();\n fail(\"Expected ActionRequestValidationException\");\n } catch (ActionRequestValidationException e) {\n- assertThat(e.getMessage(), containsString(\"[alias] may not be empty string\"));\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(ADD, \"index1\", 
(String)null)).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(ADD, \"index1\", (String[])null)).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] is either missing or null\"));\n }\n }\n \n@@ -768,7 +790,13 @@ public void testAddAliasEmptyAlias() {\n admin().indices().prepareAliases().addAliasAction(AliasAction.newAddAliasAction(\"index1\", \"\")).get();\n fail(\"Expected ActionRequestValidationException\");\n } catch (ActionRequestValidationException e) {\n- assertThat(e.getMessage(), containsString(\"[alias] may not be empty string\"));\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(ADD, \"index1\", \"\")).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n }\n }\n \n@@ -780,6 +808,13 @@ public void testAddAliasNullAliasNullIndex() {\n assertThat(e.validationErrors(), notNullValue());\n assertThat(e.validationErrors().size(), equalTo(2));\n }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(ADD, null, (String)null)).get();\n+ fail(\"Should throw \" + ActionRequestValidationException.class.getSimpleName());\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.validationErrors(), notNullValue());\n+ assertThat(e.validationErrors().size(), equalTo(2));\n+ }\n }\n \n public void testAddAliasEmptyAliasEmptyIndex() {\n@@ -790,14 +825,27 @@ public void testAddAliasEmptyAliasEmptyIndex() {\n assertThat(e.validationErrors(), notNullValue());\n assertThat(e.validationErrors().size(), equalTo(2));\n }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(ADD, \"\", \"\")).get();\n+ fail(\"Should throw \" + ActionRequestValidationException.class.getSimpleName());\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.validationErrors(), notNullValue());\n+ assertThat(e.validationErrors().size(), equalTo(2));\n+ }\n }\n \n public void testRemoveAliasNullIndex() {\n try {\n admin().indices().prepareAliases().addAliasAction(AliasAction.newRemoveAliasAction(null, \"alias1\")).get();\n fail(\"Expected ActionRequestValidationException\");\n } catch (ActionRequestValidationException e) {\n- assertThat(e.getMessage(), containsString(\"[index] may not be empty string\"));\n+ assertThat(e.getMessage(), containsString(\"[index/indices] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, null, \"alias1\")).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[index/indices] may not be empty string\"));\n }\n }\n \n@@ -806,7 +854,13 @@ public void testRemoveAliasEmptyIndex() {\n admin().indices().prepareAliases().addAliasAction(AliasAction.newRemoveAliasAction(\"\", \"alias1\")).get();\n fail(\"Expected ActionRequestValidationException\");\n } catch (ActionRequestValidationException e) {\n- assertThat(e.getMessage(), 
containsString(\"[index] may not be empty string\"));\n+ assertThat(e.getMessage(), containsString(\"[index/indices] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, \"\", \"alias1\")).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[index/indices] may not be empty string\"));\n }\n }\n \n@@ -815,7 +869,19 @@ public void testRemoveAliasNullAlias() {\n admin().indices().prepareAliases().addAliasAction(AliasAction.newRemoveAliasAction(\"index1\", null)).get();\n fail(\"Expected ActionRequestValidationException\");\n } catch (ActionRequestValidationException e) {\n- assertThat(e.getMessage(), containsString(\"[alias] may not be empty string\"));\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, \"index1\", (String)null)).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, \"index1\", (String[])null)).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] is either missing or null\"));\n }\n }\n \n@@ -824,7 +890,13 @@ public void testRemoveAliasEmptyAlias() {\n admin().indices().prepareAliases().addAliasAction(AliasAction.newRemoveAliasAction(\"index1\", \"\")).get();\n fail(\"Expected ActionRequestValidationException\");\n } catch (ActionRequestValidationException e) {\n- assertThat(e.getMessage(), containsString(\"[alias] may not be empty string\"));\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, \"index1\", \"\")).get();\n+ fail(\"Expected ActionRequestValidationException\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.getMessage(), containsString(\"[alias/aliases] may not be empty string\"));\n }\n }\n \n@@ -836,6 +908,20 @@ public void testRemoveAliasNullAliasNullIndex() {\n assertThat(e.validationErrors(), notNullValue());\n assertThat(e.validationErrors().size(), equalTo(2));\n }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, null, (String)null)).get();\n+ fail(\"Should throw \" + ActionRequestValidationException.class.getSimpleName());\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.validationErrors(), notNullValue());\n+ assertThat(e.validationErrors().size(), equalTo(2));\n+ }\n+ try {\n+ admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, (String[])null, (String[])null)).get();\n+ fail(\"Should throw \" + ActionRequestValidationException.class.getSimpleName());\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.validationErrors(), notNullValue());\n+ assertThat(e.validationErrors().size(), equalTo(2));\n+ }\n }\n \n public void testRemoveAliasEmptyAliasEmptyIndex() {\n@@ -846,6 +932,13 @@ public void testRemoveAliasEmptyAliasEmptyIndex() {\n assertThat(e.validationErrors(), notNullValue());\n assertThat(e.validationErrors().size(), equalTo(2));\n }\n+ try {\n+ 
admin().indices().prepareAliases().addAliasAction(new AliasActions(REMOVE, \"\", \"\")).get();\n+ fail(\"Should throw \" + ActionRequestValidationException.class.getSimpleName());\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(e.validationErrors(), notNullValue());\n+ assertThat(e.validationErrors().size(), equalTo(2));\n+ }\n }\n \n public void testGetAllAliasesWorks() {", "filename": "core/src/test/java/org/elasticsearch/aliases/IndexAliasesIT.java", "status": "modified" }, { "diff": "@@ -19,6 +19,9 @@\n index: test_index\n alias: test_alias\n routing: routing_value\n+ filter:\n+ ids:\n+ values: [\"1\", \"2\", \"3\"]\n \n - do:\n indices.exists_alias:\n@@ -31,7 +34,7 @@\n index: test_index\n name: test_alias\n \n- - match: {test_index.aliases.test_alias: {'index_routing': 'routing_value', 'search_routing': 'routing_value'}}\n+ - match: {test_index.aliases.test_alias: {filter: { ids : { values: [\"1\", \"2\", \"3\"]}}, 'index_routing': 'routing_value', 'search_routing': 'routing_value'}}\n \n ---\n \"Basic test for multiple aliases\":", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.update_aliases/10_basic.yaml", "status": "modified" } ] }
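The other half of the fix is the one-line change in `RestIndicesAliasesAction`: the parsed filter was never chained onto the `AliasActions` object, so it was silently dropped. The stripped-down, hypothetical builder below shows that failure mode; the class shape and field types are illustrative only, not the real Elasticsearch types.

```java
// Hypothetical miniature of the builder involved: an optional fluent setter that is never
// called leaves the field null, and the alias ends up unfiltered.
public class AliasActionSketch {
    private final String index;
    private final String alias;
    private String filter; // stays null unless filter(...) is chained

    AliasActionSketch(String index, String alias) {
        this.index = index;
        this.alias = alias;
    }

    AliasActionSketch filter(String filter) {
        this.filter = filter;
        return this;
    }

    @Override
    public String toString() {
        return index + " -> " + alias + " (filter=" + filter + ")";
    }

    public static void main(String[] args) {
        String parsedFilter = "{\"term\":{\"tags\":\"customerconsent\"}}";

        // Pre-fix shape: the filter parsed from the request body is never attached.
        System.out.println(new AliasActionSketch("test_index", "test_index_filter"));

        // Post-fix shape: chaining .filter(...) keeps it, matching the YAML test added above.
        System.out.println(new AliasActionSketch("test_index", "test_index_filter").filter(parsedFilter));
    }
}
```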
{ "body": "A user in the [watcher forum](https://discuss.elastic.co/t/failed-to-execute-watch-transform-noclassdeffounderror/41257/5) hit a `ClassNotFoundError`, after a watch ran successful several times.\n\nTurns out this is a groovy issue and can be reproduced using this test:\n\n``` java\npackage org.elasticsearch.script.groovy;\n\nimport org.elasticsearch.common.settings.Settings;\nimport org.elasticsearch.script.CompiledScript;\nimport org.elasticsearch.script.ExecutableScript;\nimport org.elasticsearch.script.ScriptService;\nimport org.elasticsearch.test.ESTestCase;\n\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\npublic class GroovyCompilationTests extends ESTestCase {\n\n private GroovyScriptEngineService se;\n\n @Override\n public void setUp() throws Exception {\n super.setUp();\n se = new GroovyScriptEngineService(Settings.EMPTY);\n // otherwise will exit your VM and other bad stuff\n assumeTrue(\"test requires security manager to be enabled\", System.getSecurityManager() != null);\n }\n\n @Override\n public void tearDown() throws Exception {\n se.close();\n super.tearDown();\n }\n\n public void testThatPotentialBugWithSeveralExecutions() throws Exception {\n String script = \"return hits.collect({\\\"${it.message}\\\"})\";\n\n Map<String, Object> data = new HashMap<>();\n data.put(\"message\", \"The big lebowski\");\n\n List<Object> hitsArray = new ArrayList<>();\n hitsArray.add(data);\n\n Map<String, Object> hits = new HashMap<>();\n hits.put(\"hits\", hitsArray);\n\n Object compiledObject = se.compile(script, Collections.emptyMap());\n CompiledScript compiledScript = new CompiledScript(ScriptService.ScriptType.INLINE, \"test\", \"groovy\", compiledObject);\n ExecutableScript executable = se.executable(compiledScript, hits);\n\n // 100 runs will fail, 10 runs work..\n for (int i = 0; i < 100; i++) {\n executable.run();\n }\n }\n\n}\n```\n\n@ywelsch already found out, that one can trigger this immediately by setting the `-Dsun.reflect.noInflation=true` property.\n\nFor more info about that, check out https://blogs.oracle.com/buck/entry/inflation_system_properties\n", "comments": [ { "body": "The reason it fails is that after 15 iterations inflation kicks in. An explanation of inflation is given here:\nhttps://blogs.oracle.com/buck/entry/inflation_system_properties\nThe following stackoverflow post also has some information:\nhttp://stackoverflow.com/questions/20527776/why-does-my-custom-securitymanager-cause-exceptions-the-16th-time-i-create-an-ob\n\nAdding the following permission to plugin-security.policy solves it:\n\n```\npermission org.elasticsearch.script.ClassPermission \"sun.reflect.MethodAccessorImpl\";\n```\n", "created_at": "2016-02-09T14:22:00Z" }, { "body": "@rmuir would it make sense to add `sun.reflect.ConstructorAccessorImpl` and `sun.reflect.MethodAccessorImpl` to `STANDARD_CLASSES` in `ClassPermission`?\nI'm not sure whether other script plugins might as well be affected by this issue.\n", "created_at": "2016-02-09T14:26:59Z" }, { "body": "The issue may be groovy specific. 
Groovy is the only one that tries to access sun.reflect directly, hence the only one with this:\n\n```\n permission java.lang.RuntimePermission \"accessClassInPackage.sun.reflect\";\n```\n", "created_at": "2016-02-09T14:41:34Z" }, { "body": "Also note that groovy runtime already has ConstructorAccessor, because only a constructor was reflected in a loop:\n\n```\n permission org.elasticsearch.script.ClassPermission \"sun.reflect.ConstructorAccessorImpl\";\n```\n\nSo i would add the other inflation-related class right there beside it, just for groovy. We should add a loop test for the case. If we can add the same loop test to Javascript/Python to show it works fine without this, that would be nice.\n", "created_at": "2016-02-09T14:46:20Z" } ], "number": 16536, "title": "Groovy: Reflection causes exceptions after several runs" }
{ "body": "Groovy uses reflection to invoke closures. These reflective calls are optimized by the JVM after `sun.reflect.inflationThreshold` number of invocations.\nAfter inflation, access to `sun.reflect.MethodAccessorImpl` is required from the security manager.\n\nCloses #16536\n", "number": 16540, "review_comments": [], "title": "Add permission to access sun.reflect.MethodAccessorImpl from Groovy scripts" }
{ "commits": [ { "message": "Add permission to access sun.reflect.MethodAccessorImpl from Groovy scripts\n\nGroovy uses reflection to invoke closures. These reflective calls are optimized by the JVM after \"sun.reflect.inflationThreshold\" number of invocations.\nAfter inflation, access to sun.reflect.MethodAccessorImpl is required from the security manager.\n\nCloses #16536" } ], "files": [ { "diff": "@@ -49,6 +49,7 @@ grant {\n permission org.elasticsearch.script.ClassPermission \"org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation\";\n permission org.elasticsearch.script.ClassPermission \"org.codehaus.groovy.vmplugin.v7.IndyInterface\";\n permission org.elasticsearch.script.ClassPermission \"sun.reflect.ConstructorAccessorImpl\";\n+ permission org.elasticsearch.script.ClassPermission \"sun.reflect.MethodAccessorImpl\";\n \n permission org.elasticsearch.script.ClassPermission \"groovy.lang.Closure\";\n permission org.elasticsearch.script.ClassPermission \"org.codehaus.groovy.runtime.GeneratedClosure\";", "filename": "modules/lang-groovy/src/main/plugin-metadata/plugin-security.policy", "status": "modified" }, { "diff": "@@ -90,6 +90,10 @@ public void testEvilGroovyScripts() throws Exception {\n // Groovy closures\n assertSuccess(\"[1, 2, 3, 4].findAll { it % 2 == 0 }\");\n assertSuccess(\"def buckets=[ [2, 4, 6, 8], [10, 12, 16, 14], [18, 22, 20, 24] ]; buckets[-3..-1].every { it.every { i -> i % 2 == 0 } }\");\n+ // Groovy uses reflection to invoke closures. These reflective calls are optimized by the JVM after \"sun.reflect.inflationThreshold\"\n+ // invocations. After the inflation step, access to sun.reflect.MethodAccessorImpl is required from the security manager. This test,\n+ // assuming a inflation threshold below 100 (15 is current value on Oracle JVMs), checks that the relevant permission is available.\n+ assertSuccess(\"(1..100).collect{ it + 1 }\");\n \n // Fail cases:\n assertFailure(\"pr = Runtime.getRuntime().exec(\\\"touch /tmp/gotcha\\\"); pr.waitFor()\", MissingPropertyException.class);", "filename": "modules/lang-groovy/src/test/java/org/elasticsearch/script/groovy/GroovySecurityTests.java", "status": "modified" }, { "diff": "@@ -84,6 +84,7 @@ private void assertFailure(String script, Class<? extends Throwable> exceptionCl\n public void testOK() {\n assertSuccess(\"1 + 2\");\n assertSuccess(\"Math.cos(Math.PI)\");\n+ assertSuccess(\"Array.apply(null, Array(100)).map(function (_, i) {return i;}).map(function (i) {return i+1;})\");\n }\n \n /** Test some javascripts that should hit security exception */", "filename": "plugins/lang-javascript/src/test/java/org/elasticsearch/script/javascript/JavaScriptSecurityTests.java", "status": "modified" }, { "diff": "@@ -82,6 +82,7 @@ private void assertFailure(String script) {\n public void testOK() {\n assertSuccess(\"1 + 2\");\n assertSuccess(\"from java.lang import Math\\nMath.cos(0)\");\n+ assertSuccess(\"map(lambda x: x + 1, range(100))\");\n }\n \n /** Test some py scripts that should hit security exception */", "filename": "plugins/lang-python/src/test/java/org/elasticsearch/script/python/PythonSecurityTests.java", "status": "modified" } ] }
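For context on why the Groovy failure only shows up after several successful runs: the JVM initially dispatches reflective calls through a native accessor and, after `sun.reflect.inflationThreshold` invocations of the same method (15 on the Oracle JVMs discussed above), "inflates" it into a generated bytecode accessor extending `sun.reflect.MethodAccessorImpl`; that generated class is what the security manager then sees on the stack, hence the extra `ClassPermission`. Below is a stand-alone sketch of plain JDK reflection crossing that threshold; it is not Elasticsearch or Groovy code, just the underlying JVM behaviour being described.

```java
import java.lang.reflect.Method;

// Plain reflection repeated past the inflation threshold. On a Java 8 Oracle/OpenJDK VM the
// later iterations go through a generated sun.reflect.MethodAccessorImpl subclass rather than
// the native accessor; running with -Dsun.reflect.noInflation=true (as mentioned in the issue)
// makes that happen on the very first call.
public class InflationSketch {

    public static String greet(String name) {
        return "hello " + name;
    }

    public static void main(String[] args) throws Exception {
        Method greet = InflationSketch.class.getMethod("greet", String.class);
        for (int i = 0; i < 100; i++) {
            greet.invoke(null, "iteration " + i);
        }
        System.out.println("100 reflective calls completed");
    }
}
```

The switch to the generated accessor is the point where a previously green script can start failing under the sandbox, which is why the regression test added above loops well past the threshold (100 closure invocations).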
{ "body": "In a mixed cluster with a newer master node and a mixture of older and newer data nodes, shards from a snapshot compatible with newer data nodes might be randomly assigned to older data nodes where they cannot be restored.\n\nThe problem is reproducible using SnapshotBackwardsCompatibilityIT.testBasicWorkflow with the following command line in 2.x branch\n\n```\nmvn verify -Pdev -Dskip.unit.tests -pl org.elasticsearch.qa.backwards:2.1 -Dtests.seed=A8C99BB8980117DF -Dtests.class=org.elasticsearch.bwcompat.SnapshotBackwardsCompatibilityIT -Dtests.method=\"testBasicWorkflow\" -Des.logger.level=DEBUG -Dtests.assertion.disabled=false -Dtests.security.manager=true -Dtests.nightly=false -Dtests.heap.size=512m -Des.node.mode=local -Dtests.locale=fr-BE -Dtests.timezone=Asia/Thimphu\n```\n\nThis test fails because it tries to allocated shards from the snapshot created on 2.x to nodes that are running 2.1.\n", "comments": [ { "body": "Might a better solution be to disallow restores in a mixed cluster of master/data nodes?\n", "created_at": "2016-02-13T20:21:38Z" }, { "body": "@clintongormley I don't think we can guarantee that an old nodes will not join the cluster later on - after we performed an initial check and allowed the restore to start. So, checking the state at the beginning of the restore will not be a complete solution.\n", "created_at": "2016-02-16T17:25:54Z" } ], "number": 16519, "title": "Newer snapshot shards may be allocated on older nodes" }
{ "body": "Verifies that the version of a node is compatible with the version of a shard that's being restored on this node.\n\nFixes #16519\n", "number": 16520, "review_comments": [], "title": "Add node version check to shard allocation during restore" }
{ "commits": [ { "message": "Add node version check to shard allocation during restore\n\nVerifies that the version of a node is compatible with the version of a shard that's being restored on this node.\n\nFixes #16519" } ], "files": [ { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.cluster.routing.allocation.decider;\n \n+import org.elasticsearch.cluster.routing.RestoreSource;\n import org.elasticsearch.cluster.routing.RoutingNode;\n import org.elasticsearch.cluster.routing.RoutingNodes;\n import org.elasticsearch.cluster.routing.ShardRouting;\n@@ -46,8 +47,13 @@ public NodeVersionAllocationDecider(Settings settings) {\n public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) {\n if (shardRouting.primary()) {\n if (shardRouting.currentNodeId() == null) {\n- // fresh primary, we can allocate wherever\n- return allocation.decision(Decision.YES, NAME, \"primary shard can be allocated anywhere\");\n+ if (shardRouting.restoreSource() != null) {\n+ // restoring from a snapshot - check that the node can handle the version\n+ return isVersionCompatible(shardRouting.restoreSource(), node, allocation);\n+ } else {\n+ // fresh primary, we can allocate wherever\n+ return allocation.decision(Decision.YES, NAME, \"primary shard can be allocated anywhere\");\n+ }\n } else {\n // relocating primary, only migrate to newer host\n return isVersionCompatible(allocation.routingNodes(), shardRouting.currentNodeId(), node, allocation);\n@@ -77,4 +83,15 @@ private Decision isVersionCompatible(final RoutingNodes routingNodes, final Stri\n target.node().version(), source.node().version());\n }\n }\n+\n+ private Decision isVersionCompatible(RestoreSource restoreSource, final RoutingNode target, RoutingAllocation allocation) {\n+ if (target.node().version().onOrAfter(restoreSource.version())) {\n+ /* we can allocate if we can restore from a snapshot that is older or on the same version */\n+ return allocation.decision(Decision.YES, NAME, \"target node version [%s] is same or newer than snapshot version [%s]\",\n+ target.node().version(), restoreSource.version());\n+ } else {\n+ return allocation.decision(Decision.NO, NAME, \"target node version [%s] is older than snapshot version [%s]\",\n+ target.node().version(), restoreSource.version());\n+ }\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/NodeVersionAllocationDecider.java", "status": "modified" }, { "diff": "@@ -20,14 +20,17 @@\n package org.elasticsearch.cluster.routing.allocation;\n \n import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.EmptyClusterInfoService;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.metadata.SnapshotId;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.IndexRoutingTable;\n import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n+import org.elasticsearch.cluster.routing.RestoreSource;\n import org.elasticsearch.cluster.routing.RoutingNodes;\n import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.ShardRouting;\n@@ -39,6 +42,7 @@\n import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders;\n import 
org.elasticsearch.cluster.routing.allocation.decider.ClusterRebalanceAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.NodeVersionAllocationDecider;\n+import org.elasticsearch.cluster.routing.allocation.decider.ReplicaAfterPrimaryActiveAllocationDecider;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n@@ -337,6 +341,37 @@ public void testRebalanceDoesNotAllocatePrimaryAndReplicasOnDifferentVersionNode\n assertThat(result.routingTable().index(shard1.getIndex()).shardsWithState(ShardRoutingState.RELOCATING).size(), equalTo(0));\n }\n \n+\n+ public void testRestoreDoesNotAllocateSnapshotOnOlderNodes() {\n+ final DiscoveryNode newNode = new DiscoveryNode(\"newNode\", DummyTransportAddress.INSTANCE, Version.CURRENT);\n+ final DiscoveryNode oldNode1 = new DiscoveryNode(\"oldNode1\", DummyTransportAddress.INSTANCE, VersionUtils.getPreviousVersion());\n+ final DiscoveryNode oldNode2 = new DiscoveryNode(\"oldNode2\", DummyTransportAddress.INSTANCE, VersionUtils.getPreviousVersion());\n+\n+ int numberOfShards = randomIntBetween(1, 3);\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").settings(settings(Version.CURRENT)).numberOfShards(numberOfShards).numberOfReplicas\n+ (randomIntBetween(0, 3)))\n+ .build();\n+\n+ ClusterState state = ClusterState.builder(ClusterName.DEFAULT)\n+ .metaData(metaData)\n+ .routingTable(RoutingTable.builder().addAsRestore(metaData.index(\"test\"), new RestoreSource(new SnapshotId(\"rep1\", \"snp1\"),\n+ Version.CURRENT, \"test\")).build())\n+ .nodes(DiscoveryNodes.builder().put(newNode).put(oldNode1).put(oldNode2)).build();\n+ AllocationDeciders allocationDeciders = new AllocationDeciders(Settings.EMPTY, new AllocationDecider[]{\n+ new ReplicaAfterPrimaryActiveAllocationDecider(Settings.EMPTY),\n+ new NodeVersionAllocationDecider(Settings.EMPTY)});\n+ AllocationService strategy = new MockAllocationService(Settings.EMPTY,\n+ allocationDeciders,\n+ new ShardsAllocators(Settings.EMPTY, NoopGatewayAllocator.INSTANCE), EmptyClusterInfoService.INSTANCE);\n+ RoutingAllocation.Result result = strategy.reroute(state, new AllocationCommands(), true);\n+\n+ // Make sure that primary shards are only allocated on the new node\n+ for (int i = 0; i < numberOfShards; i++) {\n+ assertEquals(\"newNode\", result.routingTable().index(\"test\").getShards().get(i).primaryShard().currentNodeId());\n+ }\n+ }\n+\n private ClusterState stabilize(ClusterState clusterState, AllocationService service) {\n logger.trace(\"RoutingNodes: {}\", clusterState.getRoutingNodes().prettyPrint());\n ", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/NodeVersionAllocationDeciderTests.java", "status": "modified" } ] }
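The behavioural core of the decider change above is a single comparison: a node may be assigned a shard being restored only if the node's version is on or after the version recorded in the snapshot's `RestoreSource`. The tiny illustration below models that rule with plain integer version tuples instead of the real `org.elasticsearch.Version` class.

```java
// Minimal model of the "on or after" rule the new decider branch enforces; version handling
// here is deliberately simplistic (major.minor.patch triples compared lexicographically).
public class RestoreVersionRuleSketch {

    static boolean canRestore(int[] nodeVersion, int[] snapshotVersion) {
        for (int i = 0; i < 3; i++) {
            if (nodeVersion[i] != snapshotVersion[i]) {
                return nodeVersion[i] > snapshotVersion[i];
            }
        }
        return true; // identical versions are allowed ("on or after")
    }

    public static void main(String[] args) {
        int[] oldDataNode = {2, 1, 0};
        int[] newDataNode = {2, 2, 0};
        int[] snapshotVersion = {2, 2, 0};

        System.out.println(canRestore(newDataNode, snapshotVersion)); // true  -> YES decision
        System.out.println(canRestore(oldDataNode, snapshotVersion)); // false -> NO decision, fixing the mixed-cluster case
    }
}
```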
{ "body": "https://www.elastic.co/guide/en/elasticsearch/reference/2.2/agg-metadata.html\n\nThis works fine for bucket and metrics aggregations, but for pipeline aggregations the supplied metadata isn't returned with the result.\n\nThis is a blocker for the .NET client as we're trying to leverage aggregation metadata for deserialization.\n", "comments": [ { "body": "@gmarz which pipeline aggregation are you seeing this with? (or is it all of them?) Do you have a recreation script I can run to show the bug?\n", "created_at": "2016-02-09T17:17:57Z" }, { "body": "Actually never mind, didn't see you already have a fix\n", "created_at": "2016-02-09T17:18:30Z" }, { "body": "Yea, sorry I wasn't clear enough originally, it's with all pipeline aggregations.\n", "created_at": "2016-02-09T17:30:06Z" } ], "number": 16484, "title": "Pipeline aggregations do not return metadata" }
{ "body": "Closes #16484\n", "number": 16516, "review_comments": [], "title": "Set meta data for pipeline aggregations" }
{ "commits": [ { "message": "Set meta data for pipeline aggregations\n\nCloses #16484" }, { "message": "Fix MetricsAggregationBuilder missing the ability to set meta data" } ], "files": [ { "diff": "@@ -232,6 +232,9 @@ private AggregatorFactories parseAggregators(XContentParser parser, SearchContex\n pipelineAggregatorFactory\n .validate(null, factories.getAggregatorFactories(), factories.getPipelineAggregatorFactories());\n }\n+ if (metaData != null) {\n+ pipelineAggregatorFactory.setMetaData(metaData);\n+ }\n factories.addPipelineAggregator(pipelineAggregatorFactory);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/AggregatorParsers.java", "status": "modified" }, { "diff": "@@ -23,19 +23,34 @@\n import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;\n \n import java.io.IOException;\n+import java.util.Map;\n \n /**\n * Base builder for metrics aggregations.\n */\n public abstract class MetricsAggregationBuilder<B extends MetricsAggregationBuilder<B>> extends AbstractAggregationBuilder {\n \n+ private Map<String, Object> metaData;\n+\n public MetricsAggregationBuilder(String name, String type) {\n super(name, type);\n }\n \n+ /**\n+ * Sets the meta data to be included in the metric aggregator's response\n+ */\n+ public B setMetaData(Map<String, Object> metaData) {\n+ this.metaData = metaData;\n+ return (B) this;\n+ }\n+\n @Override\n public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(getName()).startObject(type);\n+ builder.startObject(getName());\n+ if (this.metaData != null) {\n+ builder.field(\"meta\", this.metaData);\n+ }\n+ builder.startObject(type);\n internalXContent(builder, params);\n return builder.endObject().endObject();\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/MetricsAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -19,56 +19,35 @@\n \n package org.elasticsearch.search.aggregations;\n \n-import com.carrotsearch.hppc.IntIntHashMap;\n-import com.carrotsearch.hppc.IntIntMap;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n-import org.elasticsearch.search.aggregations.bucket.missing.Missing;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.metrics.sum.Sum;\n+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.InternalBucketMetricValue;\n import org.elasticsearch.test.ESIntegTestCase;\n \n import java.util.HashMap;\n+import java.util.List;\n import java.util.Map;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n-import static org.elasticsearch.search.aggregations.AggregationBuilders.missing;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.*;\n+import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.maxBucket;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n-import static org.hamcrest.CoreMatchers.equalTo;\n \n-/**\n- *\n- */\n+\n public class MetaDataIT extends ESIntegTestCase {\n \n- /**\n- * Making sure that if there are multiple aggregations, working on the same field, yet require different\n- * value source type, they can all still work. It used to fail as we used to cache the ValueSource by the\n- * field name. 
If the cached value source was of type \"bytes\" and another aggregation on the field required to see\n- * it as \"numeric\", it didn't work. Now we cache the Value Sources by a custom key (field name + ValueSource type)\n- * so there's no conflict there.\n- */\n public void testMetaDataSetOnAggregationResult() throws Exception {\n-\n createIndex(\"idx\");\n IndexRequestBuilder[] builders = new IndexRequestBuilder[randomInt(30)];\n- IntIntMap values = new IntIntHashMap();\n- long missingValues = 0;\n for (int i = 0; i < builders.length; i++) {\n String name = \"name_\" + randomIntBetween(1, 10);\n- if (rarely()) {\n- missingValues++;\n- builders[i] = client().prepareIndex(\"idx\", \"type\").setSource(jsonBuilder()\n- .startObject()\n- .field(\"name\", name)\n- .endObject());\n- } else {\n- int value = randomIntBetween(1, 10);\n- values.put(value, values.getOrDefault(value, 0) + 1);\n- builders[i] = client().prepareIndex(\"idx\", \"type\").setSource(jsonBuilder()\n- .startObject()\n- .field(\"name\", name)\n- .field(\"value\", value)\n- .endObject());\n- }\n+ builders[i] = client().prepareIndex(\"idx\", \"type\").setSource(jsonBuilder()\n+ .startObject()\n+ .field(\"name\", name)\n+ .field(\"value\", randomInt())\n+ .endObject());\n }\n indexRandom(true, builders);\n ensureSearchable();\n@@ -77,27 +56,56 @@ public void testMetaDataSetOnAggregationResult() throws Exception {\n put(\"nested\", \"value\");\n }};\n \n- Map<String, Object> missingValueMetaData = new HashMap<String, Object>() {{\n+ Map<String, Object> metaData = new HashMap<String, Object>() {{\n put(\"key\", \"value\");\n put(\"numeric\", 1.2);\n put(\"bool\", true);\n put(\"complex\", nestedMetaData);\n }};\n \n SearchResponse response = client().prepareSearch(\"idx\")\n- .addAggregation(missing(\"missing_values\").field(\"value\").setMetaData(missingValueMetaData))\n+ .addAggregation(\n+ terms(\"the_terms\")\n+ .setMetaData(metaData)\n+ .field(\"name\")\n+ .subAggregation(\n+ sum(\"the_sum\")\n+ .setMetaData(metaData)\n+ .field(\"value\")\n+ )\n+ )\n+ .addAggregation(\n+ maxBucket(\"the_max_bucket\")\n+ .setMetaData(metaData)\n+ .setBucketsPaths(\"the_terms>the_sum\")\n+ )\n .execute().actionGet();\n \n assertSearchResponse(response);\n \n Aggregations aggs = response.getAggregations();\n assertNotNull(aggs);\n \n- Missing missing = aggs.get(\"missing_values\");\n- assertNotNull(missing);\n- assertThat(missing.getDocCount(), equalTo(missingValues));\n+ Terms terms = aggs.get(\"the_terms\");\n+ assertNotNull(terms);\n+ assertMetaData(terms.getMetaData());\n+\n+ List<? extends Terms.Bucket> buckets = terms.getBuckets();\n+ for (Terms.Bucket bucket : buckets) {\n+ Aggregations subAggs = bucket.getAggregations();\n+ assertNotNull(subAggs);\n+\n+ Sum sum = subAggs.get(\"the_sum\");\n+ assertNotNull(sum);\n+ assertMetaData(sum.getMetaData());\n+ }\n+\n+ InternalBucketMetricValue maxBucket = aggs.get(\"the_max_bucket\");\n+ assertNotNull(maxBucket);\n+ assertMetaData(maxBucket.getMetaData());\n+ }\n \n- Map<String, Object> returnedMetaData = missing.getMetaData();\n+ private void assertMetaData(Map<String, Object> returnedMetaData) {\n assertNotNull(returnedMetaData);\n assertEquals(4, returnedMetaData.size());\n assertEquals(\"value\", returnedMetaData.get(\"key\"));", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/MetaDataIT.java", "status": "modified" } ] }
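A short sketch of how the fixed behavior is exercised from the Java API, mirroring the integration test in the diff above. It assumes the ESIntegTestCase context from that test (`client()`, a populated `metaData` map, and the static imports for `terms`, `sum` and `maxBucket`); it is not additional production code.

```java
// After this change, the "meta" map set on a pipeline aggregation builder is returned on its
// result, just as it already was for bucket and metrics aggregations.
SearchResponse response = client().prepareSearch("idx")
        .addAggregation(terms("the_terms").setMetaData(metaData).field("name")
                .subAggregation(sum("the_sum").setMetaData(metaData).field("value")))
        .addAggregation(maxBucket("the_max_bucket").setMetaData(metaData).setBucketsPaths("the_terms>the_sum"))
        .get();
InternalBucketMetricValue maxBucket = response.getAggregations().get("the_max_bucket");
assertEquals("value", maxBucket.getMetaData().get("key")); // the supplied metadata round-trips
```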
{ "body": "Tested on 2.1.1:\n\n```\n \"translog\" : {\n \"recovered\" : 13,\n \"total\" : -1,\n \"percent\" : \"-1.0%\",\n \"total_on_start\" : -1,\n \"total_time_in_millis\" : 340\n },\n```\n\nI have 13 entries in the translog for a shard, once it is done replaying on startup, if I hit the _recovery api and look at the translog section for the shard, it shows 13 recovered. But the percent and total metrics are negative which is misleading.\n", "comments": [ { "body": "I also have recently seen this, on 2.1.0, on three shards stuck in `TRANSLOG` stage after an unfortunate index `_open`/`_close` procedure went bad.\n\nUnfortunately, I did not fully understand what happened, but seeing those negative metrics in the `translog` section lead me to think the the index must have been corrupted.\n", "created_at": "2016-01-14T12:58:37Z" }, { "body": "I've been seeing similar since we moved to 2.1 along with length recovery times when we hit translog stage\n\n```\ntranslog: {\n recovered: 16207,\n total: -1,\n percent: \"-1.0%\",\n total_on_start: -1,\n total_time_in_millis: 1511096\n}\n```\n", "created_at": "2016-01-17T17:34:29Z" }, { "body": "Same. Been seeing exceptionally long recovery times for just a handful of not incredibly large shards.\n\n```\n \"translog\": {\n \"recovered\": 2413666,\n \"total\": -1,\n \"percent\": \"-1.0%\",\n \"total_on_start\": -1,\n \"total_time\": \"4.5h\",\n \"total_time_in_millis\": 16535797\n },\n```\n", "created_at": "2016-02-06T04:54:53Z" } ], "number": 15974, "title": "_recovery api shows negative metrics after translog replay" }
{ "body": "Recovery from store fails to correctly set the translog recovery stats. This fixes it and tightens up the logic bringing it all to IndexShard (previously it was set by the recovery logic).\n\nCloses #15974\n", "number": 16493, "review_comments": [ { "body": "any chance we can assign the `recoveryState().getTranslog()` somewhere and don't chain the calls all the time?\n", "created_at": "2016-02-08T08:42:31Z" }, { "body": "I looked at it and tried but we only do it in 2 places and make the translog recovery state stats a thing will introduce unneeded bloot imh. I simplified things a (tiny) bit.\n", "created_at": "2016-02-08T10:33:34Z" } ], "title": "Fix recovery translog stats totals when recovering from store" }
{ "commits": [ { "message": "Fix recovery translog stats totals when recovering from store\n\nRecovery from store fails to correctly set the translog recovery stats. This fixes it and tightens up the logic bringing it all to IndexShard (previously it was set by the recovery logic).\n\nCloses #15974" }, { "message": "slightly less chaining" } ], "files": [ { "diff": "@@ -451,7 +451,7 @@ void cancelRelocation() {\n }\n \n /**\n- * Moves the shard from started to initializing and bumps the version\n+ * Moves the shard from started to initializing\n */\n void reinitializeShard() {\n ensureNotFrozen();", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/ShardRouting.java", "status": "modified" }, { "diff": "@@ -43,7 +43,6 @@\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.InfoStream;\n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.cluster.routing.Murmur3HashFunction;\n import org.elasticsearch.common.Nullable;\n@@ -68,7 +67,6 @@\n import org.elasticsearch.index.translog.Translog;\n import org.elasticsearch.index.translog.TranslogConfig;\n import org.elasticsearch.index.translog.TranslogCorruptedException;\n-import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.threadpool.ThreadPool;\n \n import java.io.IOException;\n@@ -233,20 +231,7 @@ protected void recoverFromTranslog(EngineConfig engineConfig, Translog.TranslogG\n final TranslogRecoveryPerformer handler = engineConfig.getTranslogRecoveryPerformer();\n try {\n Translog.Snapshot snapshot = translog.newSnapshot();\n- Translog.Operation operation;\n- while ((operation = snapshot.next()) != null) {\n- try {\n- handler.performRecoveryOperation(this, operation, true);\n- opsRecovered++;\n- } catch (ElasticsearchException e) {\n- if (e.status() == RestStatus.BAD_REQUEST) {\n- // mainly for MapperParsingException and Failure to detect xcontent\n- logger.info(\"ignoring recovery of a corrupt translog entry\", e);\n- } else {\n- throw e;\n- }\n- }\n- }\n+ opsRecovered = handler.recoveryFromSnapshot(this, snapshot);\n } catch (Throwable e) {\n throw new EngineException(shardId, \"failed to recover from translog\", e);\n }", "filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java", "status": "modified" }, { "diff": "@@ -55,6 +55,7 @@\n import org.elasticsearch.index.IndexModule;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.NodeServicesProvider;\n+import org.elasticsearch.index.SearchSlowLog;\n import org.elasticsearch.index.VersionType;\n import org.elasticsearch.index.cache.IndexCache;\n import org.elasticsearch.index.cache.bitset.ShardBitsetFilterCache;\n@@ -89,13 +90,12 @@\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.recovery.RecoveryStats;\n import org.elasticsearch.index.refresh.RefreshStats;\n-import org.elasticsearch.index.SearchSlowLog;\n import org.elasticsearch.index.search.stats.SearchStats;\n import org.elasticsearch.index.search.stats.ShardSearchStats;\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.index.snapshots.IndexShardRepository;\n-import org.elasticsearch.index.store.Store.MetadataSnapshot;\n import org.elasticsearch.index.store.Store;\n+import org.elasticsearch.index.store.Store.MetadataSnapshot;\n import org.elasticsearch.index.store.StoreFileMetaData;\n import org.elasticsearch.index.store.StoreStats;\n import 
org.elasticsearch.index.suggest.stats.ShardSuggestMetric;\n@@ -105,8 +105,8 @@\n import org.elasticsearch.index.translog.TranslogStats;\n import org.elasticsearch.index.warmer.ShardIndexWarmerService;\n import org.elasticsearch.index.warmer.WarmerStats;\n-import org.elasticsearch.indices.cache.query.IndicesQueryCache;\n import org.elasticsearch.indices.IndexingMemoryController;\n+import org.elasticsearch.indices.cache.query.IndicesQueryCache;\n import org.elasticsearch.indices.recovery.RecoveryFailedException;\n import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.percolator.PercolatorService;\n@@ -874,6 +874,12 @@ public int performBatchRecovery(Iterable<Translog.Operation> operations) {\n * After the store has been recovered, we need to start the engine in order to apply operations\n */\n public void performTranslogRecovery(boolean indexExists) {\n+ if (indexExists == false) {\n+ // note: these are set when recovering from the translog\n+ final RecoveryState.Translog translogStats = recoveryState().getTranslog();\n+ translogStats.totalOperations(0);\n+ translogStats.totalOperationsOnStart(0);\n+ }\n internalPerformTranslogRecovery(false, indexExists);\n assert recoveryState.getStage() == RecoveryState.Stage.TRANSLOG : \"TRANSLOG stage expected but was: \" + recoveryState.getStage();\n }\n@@ -1387,6 +1393,15 @@ protected void operationProcessed() {\n assert recoveryState != null;\n recoveryState.getTranslog().incrementRecoveredOperations();\n }\n+\n+ @Override\n+ public int recoveryFromSnapshot(Engine engine, Translog.Snapshot snapshot) throws IOException {\n+ assert recoveryState != null;\n+ RecoveryState.Translog translogStats = recoveryState.getTranslog();\n+ translogStats.totalOperations(snapshot.totalOperations());\n+ translogStats.totalOperationsOnStart(snapshot.totalOperations());\n+ return super.recoveryFromSnapshot(engine, snapshot);\n+ }\n };\n return new EngineConfig(shardId,\n threadPool, indexSettings, warmer, store, deletionPolicy, indexSettings.getMergePolicy(),", "filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java", "status": "modified" }, { "diff": "@@ -203,7 +203,6 @@ private void internalRecoverFromStore(IndexShard indexShard, boolean indexShould\n logger.trace(\"cleaning existing shard, shouldn't exists\");\n IndexWriter writer = new IndexWriter(store.directory(), new IndexWriterConfig(Lucene.STANDARD_ANALYZER).setOpenMode(IndexWriterConfig.OpenMode.CREATE));\n writer.close();\n- recoveryState.getTranslog().totalOperations(0);\n }\n }\n } catch (Throwable e) {\n@@ -224,10 +223,6 @@ private void internalRecoverFromStore(IndexShard indexShard, boolean indexShould\n } catch (IOException e) {\n logger.debug(\"failed to list file details\", e);\n }\n- if (indexShouldExists == false) {\n- recoveryState.getTranslog().totalOperations(0);\n- recoveryState.getTranslog().totalOperationsOnStart(0);\n- }\n indexShard.performTranslogRecovery(indexShouldExists);\n indexShard.finalizeRecovery();\n indexShard.postRecovery(\"post recovery from shard_store\");", "filename": "core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java", "status": "modified" }, { "diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.index.mapper.Mapping;\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.index.translog.Translog;\n+import org.elasticsearch.rest.RestStatus;\n \n import java.io.IOException;\n import java.util.HashMap;\n@@ -77,6 +78,25 @@ int performBatchRecovery(Engine engine, Iterable<Translog.Operation> 
operations)\n return numOps;\n }\n \n+ public int recoveryFromSnapshot(Engine engine, Translog.Snapshot snapshot) throws IOException {\n+ Translog.Operation operation;\n+ int opsRecovered = 0;\n+ while ((operation = snapshot.next()) != null) {\n+ try {\n+ performRecoveryOperation(engine, operation, true);\n+ opsRecovered++;\n+ } catch (ElasticsearchException e) {\n+ if (e.status() == RestStatus.BAD_REQUEST) {\n+ // mainly for MapperParsingException and Failure to detect xcontent\n+ logger.info(\"ignoring recovery of a corrupt translog entry\", e);\n+ } else {\n+ throw e;\n+ }\n+ }\n+ }\n+ return opsRecovered;\n+ }\n+\n public static class BatchOperationException extends ElasticsearchException {\n \n private final int completedOperations;\n@@ -182,6 +202,7 @@ protected void operationProcessed() {\n // noop\n }\n \n+\n /**\n * Returns the recovered types modifying the mapping during the recovery\n */", "filename": "core/src/main/java/org/elasticsearch/index/shard/TranslogRecoveryPerformer.java", "status": "modified" }, { "diff": "@@ -48,6 +48,11 @@ public static void reinit(ShardRouting routing) {\n routing.reinitializeShard();\n }\n \n+ public static void reinit(ShardRouting routing, UnassignedInfo.Reason reason) {\n+ routing.reinitializeShard();\n+ routing.updateUnassignedInfo(new UnassignedInfo(reason, \"test_reinit\"));\n+ }\n+\n public static void moveToUnassigned(ShardRouting routing, UnassignedInfo info) {\n routing.moveToUnassigned(info);\n }", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/ShardRoutingHelper.java", "status": "modified" }, { "diff": "@@ -70,7 +70,6 @@\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.env.ShardLock;\n-import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.NodeServicesProvider;\n@@ -865,10 +864,11 @@ public void testRecoverFromStore() throws IOException {\n IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n IndexService test = indicesService.indexService(\"test\");\n final IndexShard shard = test.getShardOrNull(0);\n-\n+ int translogOps = 1;\n client().prepareIndex(\"test\", \"test\", \"0\").setSource(\"{}\").setRefresh(randomBoolean()).get();\n if (randomBoolean()) {\n client().admin().indices().prepareFlush().get();\n+ translogOps = 0;\n }\n ShardRouting routing = new ShardRouting(shard.routingEntry());\n test.removeShard(0, \"b/c simon says so\");\n@@ -878,13 +878,47 @@ public void testRecoverFromStore() throws IOException {\n DiscoveryNode localNode = new DiscoveryNode(\"foo\", DummyTransportAddress.INSTANCE, Version.CURRENT);\n newShard.markAsRecovering(\"store\", new RecoveryState(newShard.shardId(), routing.primary(), RecoveryState.Type.STORE, localNode, localNode));\n assertTrue(newShard.recoverFromStore(localNode));\n+ assertEquals(translogOps, newShard.recoveryState().getTranslog().recoveredOperations());\n+ assertEquals(translogOps, newShard.recoveryState().getTranslog().totalOperations());\n+ assertEquals(translogOps, newShard.recoveryState().getTranslog().totalOperationsOnStart());\n+ assertEquals(100.0f, newShard.recoveryState().getTranslog().recoveredPercent(), 0.01f);\n routing = new ShardRouting(routing);\n ShardRoutingHelper.moveToStarted(routing);\n newShard.updateRoutingEntry(routing, true);\n SearchResponse response = client().prepareSearch().get();\n assertHitCount(response, 1);\n }\n \n+ public void 
testRecoverFromCleanStore() throws IOException {\n+ createIndex(\"test\");\n+ ensureGreen();\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexService test = indicesService.indexService(\"test\");\n+ final IndexShard shard = test.getShardOrNull(0);\n+ client().prepareIndex(\"test\", \"test\", \"0\").setSource(\"{}\").setRefresh(randomBoolean()).get();\n+ if (randomBoolean()) {\n+ client().admin().indices().prepareFlush().get();\n+ }\n+ ShardRouting routing = new ShardRouting(shard.routingEntry());\n+ test.removeShard(0, \"b/c simon says so\");\n+ ShardRoutingHelper.reinit(routing, UnassignedInfo.Reason.INDEX_CREATED);\n+ IndexShard newShard = test.createShard(routing);\n+ newShard.updateRoutingEntry(routing, false);\n+ DiscoveryNode localNode = new DiscoveryNode(\"foo\", DummyTransportAddress.INSTANCE, Version.CURRENT);\n+ newShard.markAsRecovering(\"store\", new RecoveryState(newShard.shardId(), routing.primary(), RecoveryState.Type.STORE, localNode,\n+ localNode));\n+ assertTrue(newShard.recoverFromStore(localNode));\n+ assertEquals(0, newShard.recoveryState().getTranslog().recoveredOperations());\n+ assertEquals(0, newShard.recoveryState().getTranslog().totalOperations());\n+ assertEquals(0, newShard.recoveryState().getTranslog().totalOperationsOnStart());\n+ assertEquals(100.0f, newShard.recoveryState().getTranslog().recoveredPercent(), 0.01f);\n+ routing = new ShardRouting(routing);\n+ ShardRoutingHelper.moveToStarted(routing);\n+ newShard.updateRoutingEntry(routing, true);\n+ SearchResponse response = client().prepareSearch().get();\n+ assertHitCount(response, 0);\n+ }\n+\n public void testFailIfIndexNotPresentInRecoverFromStore() throws IOException {\n createIndex(\"test\");\n ensureGreen();\n@@ -1187,7 +1221,8 @@ public void testTranslogRecoverySyncsTranslog() throws IOException {\n List<Translog.Operation> operations = new ArrayList<>();\n operations.add(new Translog.Index(\"testtype\", \"1\", jsonBuilder().startObject().field(\"foo\", \"bar\").endObject().bytes().toBytes()));\n newShard.prepareForIndexRecovery();\n- newShard.performTranslogRecovery(true);\n+ newShard.recoveryState().getTranslog().totalOperations(operations.size());\n+ newShard.skipTranslogRecovery();\n newShard.performBatchRecovery(operations);\n assertFalse(newShard.getTranslog().syncNeeded());\n }", "filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java", "status": "modified" } ] }
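To make the before/after concrete, here is the expectation the new store-recovery test encodes, written out with the intermediate variable the reviewer asked for. It assumes the `newShard`, `localNode` and `translogOps` setup from `testRecoverFromStore` above; it is only a sketch of that test, not additional code.

```java
// After recovering from the store, the translog totals must reflect what was actually replayed
// instead of staying at their -1 defaults, which is what produced the "-1.0%" in the issue.
assertTrue(newShard.recoverFromStore(localNode));
RecoveryState.Translog translogStats = newShard.recoveryState().getTranslog();
assertEquals(translogOps, translogStats.recoveredOperations());
assertEquals(translogOps, translogStats.totalOperations());        // was -1 before the fix
assertEquals(translogOps, translogStats.totalOperationsOnStart()); // was -1 before the fix
assertEquals(100.0f, translogStats.recoveredPercent(), 0.01f);     // was reported as -1.0% before
```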
{ "body": "```\n# With a one-word query and minimum_should_match=-50%, adding extra non-matching fields should not matter.\n# Tested on v1.7.2.\n\n# delete and re-create the index\ncurl -XDELETE localhost:9200/test\ncurl -XPUT localhost:9200/test\n\necho \n\n # insert a document\ncurl -XPUT 'http://localhost:9200/test/test/1' -d '\n { \"title\": \"test document\"}\n '\ncurl -XPOST 'http://localhost:9200/test/_refresh'\n\necho \n\n # this correctly finds the document (f1 is a non-existent field)\n curl -XGET 'http://localhost:9200/test/test/_search' -d '{\n \"query\" : {\n \"simple_query_string\" : {\n \"fields\" : [ \"title\", \"f1\" ],\n \"query\" : \"test\",\n \"minimum_should_match\" : \"-50%\"\n }\n }\n }\n'\n\necho \n\n# this incorrectly does not find the document (f1 and f2 are non-existent fields)\ncurl -XGET 'http://localhost:9200/test/test/_search' -d '{\n \"query\" : {\n \"simple_query_string\" : {\n \"fields\" : [ \"title\", \"f1\", \"f2\" ],\n \"query\" : \"test\",\n \"minimum_should_match\" : \"-50%\"\n }\n }\n }\n'\n\necho\n```\n", "comments": [ { "body": "This does look like a bug. The min_should_match is being applied at the wrong level:\n\n```\nGET /test/test/_validate/query?explain\n{\n \"query\" : {\n \"simple_query_string\" : {\n \"fields\" : [ \"title\", \"f1\", \"f2\" ],\n \"query\" : \"test\",\n \"minimum_should_match\" : \"-50%\"\n }\n }\n}\n```\n\nReturns an explanation of:\n\n```\n+((f1:test title:test f2:test)~2)\n```\n\nWhile the `query_string` and `multi_match` equivalents return:\n\n```\n+(title:test | f1:test | f2:test)\n```\n", "created_at": "2015-10-02T15:53:05Z" }, { "body": "I had a look and saw that `simple_query_string` iterates over all fields for each token in the query string and combines them all in boolean query `should` clauses, and we apply the `minimum_should_match` on the whole result. \n\n@clintongormley As far as I understand you, we should parse the query string for each field separately, apply the `minimum_should_match` there and then combine the result in an overall Boolean query. This however raised another question for me. Suppose we have two terms like `\"query\" : \"test document\"` instead, then currently we we get:\n\n```\n((f1:test title:test f2:test) (f1:document title:document f2:document))~1\n```\n\nIf we would instead create the query per field individually we would get something like\n\n```\n((title:test title:document)~1 (f1:test f1:document)~1 (f2:test f2:document)~1)\n```\n\nWhile treating the query string for each field individually looks like the right behaviour in this case, I wonder if this will break other cases. wdyt?\n", "created_at": "2015-10-14T10:09:12Z" }, { "body": "@cbuescher the `query_string` query takes the same approach as your first output, ie:\n\n```\n((f1:test title:test f2:test) (f1:document title:document f2:document))~1\n```\n\nI think the bug is maybe a bit more subtle. A query across 3 fields for two terms with min should match 80% results in:\n\n```\nbool:\n min_should_match: 80% (==1)\n should:\n bool:\n should: [ f1:term1, f2:term1, f3:term1]\n bool:\n should: [ f1:term2, f2:term2, f3:term2]\n```\n\nhowever with only one term it is producing:\n\n```\nbool:\n min_should_match: 80% (==2) \n should: [ f1:term1, f2:term1, f3:term1]\n```\n\nIn other words, min should match is being applied to the wrong `bool` query. 
Instead, even the one term case should be wrapped in another `bool` query, and the min should match should be applied at that level.\n", "created_at": "2015-10-14T11:31:18Z" }, { "body": "@clintongormley Yes, I think thats what I meant. I'm working on a PR that applies the `minimum_should_match` to sub-queries that only target one field. That way your examples above would change to something like\n\n```\nbool: \n should:\n bool:\n min_should_match: 80% (==1)\n should: [ f1:term1, f1:term2]\n bool:\n min_should_match: 80% (==1)\n should: [ f2:term1, f2:term2]\n bool:\n min_should_match: 80% (==1)\n should: [ f3:term1, f3:term2]\n```\n\nand for one term\n\n```\nbool: \n should:\n bool:\n min_should_match: 80% (==0)\n should: [ f1:term1]\n bool:\n min_should_match: 80% (==0)\n should: [ f2:term1]\n bool:\n min_should_match: 80% (==0)\n should: [ f3:term1]\n```\n\nIn the later case we already additionally simplify one-term bool queries to TermQueries.\n", "created_at": "2015-10-14T12:03:12Z" }, { "body": "@cbuescher I think that is incorrect. The simple query string query (like the query string query) is term-centric rather than field-centric. In other words, min should match should be applied to the number of terms (regardless of which field the term is in).\n\nI'm guessing that there is an \"optimization\" for the one term case where the field-level bool clause is not wrapped in an outer bool clause. Then the min should match is applied at the field level instead of at the term level, resulting in the wrong calculation.\n", "created_at": "2015-10-15T11:22:45Z" }, { "body": "That guess seems right, there is an optimization in lucenes\nSimpleQueryParser for boolean queries with 0 or 1 clauses that seems to be\nthe problem. I think we can overwrite that.\n\nOn Thu, Oct 15, 2015 at 1:23 PM, Clinton Gormley notifications@github.com\nwrote:\n\n> @cbuescher https://github.com/cbuescher I think that is incorrect. The\n> simple query string query (like the query string query) is term-centric\n> rather than field-centric. In other words, min should match should be\n> applied to the number of terms (regardless of which field the term is in).\n> \n> I'm guessing that there is an \"optimization\" for the one term case where\n> the field-level bool clause is not wrapped in an outer bool clause. Then\n> the min should match is applied at the field level instead of at the term\n> level, resulting in the wrong calculation.\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elastic/elasticsearch/issues/13884#issuecomment-148358711\n> .\n\n## \n\nChristoph Büscher\n", "created_at": "2015-10-15T12:49:18Z" }, { "body": "If this is in Lucene, perhaps it should be fixed there?\n\n@jdconrad what do you think?\n", "created_at": "2015-10-15T15:04:19Z" }, { "body": "@clintongormley I don't think SimpleQueryParser#simplify() is at the root of this anymore. The problem seems to be that SimpleQueryParser parses term by term-centric, but only starts wrapping the resulting queries when combining more than two of them. For one search term and two fields I get a Boolean query with two TermQuery clauses (without enclosing Boolean query), for two terms and one field I get an enclosing Boolean query with two Boolean query subclauses. 
I'm not sure yet how this can be distinguished from outside of the Lucene parser without inspecting the query, or whether a solution like that holds for more complicated cases.\n", "created_at": "2015-10-15T15:15:15Z" }, { "body": "Although it would be nice if Lucene's SimpleQueryParser would output a BooleanQuery with one should-clause and three nested BooleanQueries for the 1-term/multi-field case, I think we can detect this case and do the wrapping in the additional BooleanQuery in the SimpleQueryStringBuilder. I just opened a PR.\n", "created_at": "2015-10-19T08:58:58Z" }, { "body": "The SQP wasn't really designed around multi-field terms, but multi-field support had to be added afterwards for use as a default field, which is why the min-should-match never gets applied down at that level. I don't know if the correct behavior is to make it work on multi-fields. I'll have to give that some thought, given that it really is, as @cbuescher described, term-centric, and it is sort of supposed to be disguised from the user. One thing that will make this easier to fix, though, I believe is #4707, since it will flatten the parse tree a bit.\n", "created_at": "2015-10-19T17:15:49Z" }, { "body": "@jdconrad thanks for explaining, in the meantime I opened #14186 which basically tries to distinguish the one vs. multi-field cases and tries to wrap the resulting query one more time to get a correct min-should-match. Please leave a comment there if my current approach will collide with the plans regarding #4707.\n", "created_at": "2015-10-20T09:10:12Z" }, { "body": "Reopening this issue since the fix proposed in #14186 was too fragile. Discussed with @javanna and @jpountz; at this point we think the options are either fixing this in Lucene's SimpleQueryParser so that we can apply minimum_should_match correctly on the ES side, or removing this option from `simple_query_string` entirely because it cannot be properly supported.\n", "created_at": "2015-11-03T11:01:39Z" }, { "body": "Trying to sum up this issue so far: \n- the number of should-clauses returned by `SimpleQueryParser` is not 1 for one search term and multiple fields, so we cannot apply `minimum_should_match` correctly in `SimpleQueryStringBuilder`, e.g. for `\"query\" : \"term1\", \"fields\" : [ \"f1\", \"f2\", \"f3\" ]` SimpleQueryParser returns a BooleanQuery with three should-clauses. As soon as we add more search terms, the number of should-clauses is the same as the number of search terms, e.g. `\"query\" : \"term1 term2\", \"fields\" : [ \"f1\", \"f2\", \"f3\" ]` returns a BooleanQuery with two subclauses, one per term.\n- it is difficult to determine the number of terms from the query string upfront, because the tokenization depends on the analyzer used, so we really need `SimpleQueryParser#parse()` for this.\n- it is hard to determine the correct number of terms from the returned Lucene query without making assumptions about the inner structure of the query (which is subject to change, the reason for #14186 being reverted), e.g. currently `\"query\" : \"term1\", \"fields\" : [ \"f1\", \"f2\", \"f3\" ]` and `\"query\" : \"term1 term2 term3\", \"fields\" : [ \"f1\" ]` will return a BooleanQuery with the same structure (three should-clauses, each containing a TermQuery). \n", "created_at": "2015-11-03T11:57:40Z" }, { "body": "@cbuescher this issue is fixed by https://github.com/elastic/elasticsearch/pull/16155. \n@rmuir has pointed out a nice way to distinguish between a single-word query with multiple fields and a multi-word query with a single field: we just have to check whether coords are disabled on the top-level BooleanQuery; the simple query parser disables coords when the boolean query for multiple fields is built.\nThough I tested the single word with multiple fields case only; if you think of other issues, please reopen this ticket or open a new one ;).\n", "created_at": "2016-02-04T18:11:51Z" }, { "body": "@jimferenczi that's great, I just checked this with a test which is close to the problem description here. I'm not sure if this adds anything to your tests, but just in case I opened #16465 which adds this as an integration test for SimpleQueryStringQuery. Maybe you can take a look and tell me if it makes sense to add those as well.\n", "created_at": "2016-02-04T21:08:13Z" }, { "body": "@cbuescher thanks, a unit test in SimpleQueryStringBuilderTest could be useful as well. The integ test does not check the minimum should match that is applied (or not) to the boolean query. \n", "created_at": "2016-02-05T08:55:03Z" } ], "number": 13884, "title": "Bug with simple_query_string, minimum_should_match, and multiple fields." }
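The arithmetic behind the explain output quoted above is easy to reproduce. The following is plain Java, not Elasticsearch code; it only illustrates how a negative `minimum_should_match` such as "-50%" translates into a required clause count, and why applying it to the three per-field clauses instead of the single term makes the document unmatchable.

```java
// "-N%" means: all optional clauses minus N percent of them (rounded down).
public class MinShouldMatchExample {
    static int required(int optionalClauses, int negativePercent) {
        return optionalClauses - (optionalClauses * negativePercent / 100);
    }

    public static void main(String[] args) {
        // Applied to the 3 per-field clauses of the one-term query: 2 fields must match,
        // which a document containing only "title" can never satisfy (the "~2" above).
        System.out.println(required(3, 50)); // 2
        // Applied to the single term, as intended: 1 clause must match and the doc is found.
        System.out.println(required(1, 50)); // 1
    }
}
```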
{ "body": "This adds a test case similar to the issue in #13884 which was fixed in #16155.\n", "number": 16465, "review_comments": [], "title": "Add test for minimum_should_match, one term and multiple fields" }
{ "commits": [ { "message": "Add test for minimum_should_match, one term and multiple fields\n\nThis adds a test case similar to the issue in #13884 which was\nfixed in #16155." } ], "files": [ { "diff": "@@ -1273,7 +1273,6 @@\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]RandomQueryBuilder.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]RangeQueryBuilderTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]ScoreModeTests.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]SimpleQueryStringBuilderTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]SpanMultiTermQueryBuilderTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]SpanNotQueryBuilderTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]index[/\\\\]query[/\\\\]functionscore[/\\\\]FunctionScoreEquivalenceTests.java\" checks=\"LineLength\" />\n@@ -1447,7 +1446,6 @@\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]search[/\\\\]query[/\\\\]ExistsIT.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]search[/\\\\]query[/\\\\]MultiMatchQueryIT.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]search[/\\\\]query[/\\\\]SearchQueryIT.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]search[/\\\\]query[/\\\\]SimpleQueryStringIT.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]search[/\\\\]rescore[/\\\\]QueryRescoreBuilderTests.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]search[/\\\\]scroll[/\\\\]DuelScrollIT.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]search[/\\\\]scroll[/\\\\]SearchScrollIT.java\" checks=\"LineLength\" />", "filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import com.carrotsearch.randomizedtesting.generators.CodepointSetGenerator;\n import com.fasterxml.jackson.core.JsonParseException;\n import com.fasterxml.jackson.core.io.JsonStringEncoder;\n+\n import org.apache.lucene.search.BoostQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.TermQuery;\n@@ -56,7 +57,6 @@\n import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.settings.SettingsFilter;\n import org.elasticsearch.common.settings.SettingsModule;\n import org.elasticsearch.common.unit.Fuzziness;\n import org.elasticsearch.common.xcontent.ToXContent;\n@@ -89,9 +89,9 @@\n 
import org.elasticsearch.script.ScriptContextRegistry;\n import org.elasticsearch.script.ScriptEngineRegistry;\n import org.elasticsearch.script.ScriptEngineService;\n-import org.elasticsearch.script.ScriptSettings;\n import org.elasticsearch.script.ScriptModule;\n import org.elasticsearch.script.ScriptService;\n+import org.elasticsearch.script.ScriptSettings;\n import org.elasticsearch.search.SearchModule;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.test.ESTestCase;", "filename": "core/src/test/java/org/elasticsearch/index/query/AbstractQueryTestCase.java", "status": "modified" }, { "diff": "@@ -297,7 +297,8 @@ protected void doAssertLuceneQuery(SimpleQueryStringBuilder queryBuilder, Query\n } else if (queryBuilder.fields().size() == 0) {\n assertTermQuery(query, MetaData.ALL, queryBuilder.value());\n } else {\n- fail(\"Encountered lucene query type we do not have a validation implementation for in our \" + SimpleQueryStringBuilderTests.class.getSimpleName());\n+ fail(\"Encountered lucene query type we do not have a validation implementation for in our \"\n+ + SimpleQueryStringBuilderTests.class.getSimpleName());\n }\n }\n \n@@ -368,4 +369,37 @@ public void testFromJson() throws IOException {\n assertEquals(json, 2, parsed.fields().size());\n assertEquals(json, \"snowball\", parsed.analyzer());\n }\n+\n+ public void testMinimumShouldMatch() throws IOException {\n+ QueryShardContext shardContext = createShardContext();\n+ int numberOfTerms = randomIntBetween(1, 4);\n+ StringBuilder queryString = new StringBuilder();\n+ for (int i = 0; i < numberOfTerms; i++) {\n+ queryString.append(\"t\" + i + \" \");\n+ }\n+ SimpleQueryStringBuilder simpleQueryStringBuilder = new SimpleQueryStringBuilder(queryString.toString().trim());\n+ if (randomBoolean()) {\n+ simpleQueryStringBuilder.defaultOperator(Operator.AND);\n+ }\n+ int numberOfFields = randomIntBetween(1, 4);\n+ for (int i = 0; i < numberOfFields; i++) {\n+ simpleQueryStringBuilder.field(\"f\" + i);\n+ }\n+ int percent = randomIntBetween(1, 100);\n+ simpleQueryStringBuilder.minimumShouldMatch(percent + \"%\");\n+ Query query = simpleQueryStringBuilder.toQuery(shardContext);\n+\n+ // check special case: one term & one field should get simplified to a TermQuery\n+ if (numberOfFields * numberOfTerms == 1) {\n+ assertThat(query, instanceOf(TermQuery.class));\n+ } else {\n+ assertThat(query, instanceOf(BooleanQuery.class));\n+ BooleanQuery boolQuery = (BooleanQuery) query;\n+ int expectedMinimumShouldMatch = numberOfTerms * percent / 100;\n+ if (simpleQueryStringBuilder.defaultOperator().equals(Operator.AND) && numberOfTerms > 1) {\n+ expectedMinimumShouldMatch = 0;\n+ }\n+ assertEquals(expectedMinimumShouldMatch, boolQuery.getMinimumNumberShouldMatch());\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/query/SimpleQueryStringBuilderTests.java", "status": "modified" }, { "diff": "@@ -116,12 +116,21 @@ public void testSimpleQueryStringMinimumShouldMatch() throws Exception {\n assertSearchHits(searchResponse, \"3\", \"4\");\n \n logger.info(\"--> query 2\");\n- searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo bar\").field(\"body\").field(\"body2\").minimumShouldMatch(\"2\")).get();\n+ searchResponse = client().prepareSearch()\n+ .setQuery(simpleQueryStringQuery(\"foo bar\").field(\"body\").field(\"body2\").minimumShouldMatch(\"2\")).get();\n assertHitCount(searchResponse, 2L);\n assertSearchHits(searchResponse, \"3\", \"4\");\n \n+ // test case from 
#13884\n logger.info(\"--> query 3\");\n- searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo bar baz\").field(\"body\").field(\"body2\").minimumShouldMatch(\"70%\")).get();\n+ searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo\")\n+ .field(\"body\").field(\"body2\").field(\"body3\").minimumShouldMatch(\"-50%\")).get();\n+ assertHitCount(searchResponse, 3L);\n+ assertSearchHits(searchResponse, \"1\", \"3\", \"4\");\n+\n+ logger.info(\"--> query 4\");\n+ searchResponse = client().prepareSearch()\n+ .setQuery(simpleQueryStringQuery(\"foo bar baz\").field(\"body\").field(\"body2\").minimumShouldMatch(\"70%\")).get();\n assertHitCount(searchResponse, 2L);\n assertSearchHits(searchResponse, \"3\", \"4\");\n \n@@ -131,18 +140,20 @@ public void testSimpleQueryStringMinimumShouldMatch() throws Exception {\n client().prepareIndex(\"test\", \"type1\", \"7\").setSource(\"body2\", \"foo bar\", \"other\", \"foo\"),\n client().prepareIndex(\"test\", \"type1\", \"8\").setSource(\"body2\", \"foo baz bar\", \"other\", \"foo\"));\n \n- logger.info(\"--> query 4\");\n- searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo bar\").field(\"body\").field(\"body2\").minimumShouldMatch(\"2\")).get();\n+ logger.info(\"--> query 5\");\n+ searchResponse = client().prepareSearch()\n+ .setQuery(simpleQueryStringQuery(\"foo bar\").field(\"body\").field(\"body2\").minimumShouldMatch(\"2\")).get();\n assertHitCount(searchResponse, 4L);\n assertSearchHits(searchResponse, \"3\", \"4\", \"7\", \"8\");\n \n- logger.info(\"--> query 5\");\n+ logger.info(\"--> query 6\");\n searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo bar\").minimumShouldMatch(\"2\")).get();\n assertHitCount(searchResponse, 5L);\n assertSearchHits(searchResponse, \"3\", \"4\", \"6\", \"7\", \"8\");\n \n- logger.info(\"--> query 6\");\n- searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo bar baz\").field(\"body2\").field(\"other\").minimumShouldMatch(\"70%\")).get();\n+ logger.info(\"--> query 7\");\n+ searchResponse = client().prepareSearch()\n+ .setQuery(simpleQueryStringQuery(\"foo bar baz\").field(\"body2\").field(\"other\").minimumShouldMatch(\"70%\")).get();\n assertHitCount(searchResponse, 3L);\n assertSearchHits(searchResponse, \"6\", \"7\", \"8\");\n }\n@@ -330,7 +341,8 @@ public void testSimpleQueryStringAnalyzeWildcard() throws ExecutionException, In\n indexRandom(true, client().prepareIndex(\"test1\", \"type1\", \"1\").setSource(\"location\", \"Köln\"));\n refresh();\n \n- SearchResponse searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"Köln*\").analyzeWildcard(true).field(\"location\")).get();\n+ SearchResponse searchResponse = client().prepareSearch()\n+ .setQuery(simpleQueryStringQuery(\"Köln*\").analyzeWildcard(true).field(\"location\")).get();\n assertNoFailures(searchResponse);\n assertHitCount(searchResponse, 1L);\n assertSearchHits(searchResponse, \"1\");", "filename": "core/src/test/java/org/elasticsearch/search/query/SimpleQueryStringIT.java", "status": "modified" } ] }
{ "body": "Test failure: http://build-us-00.elastic.co/job/es_core_master_window-2008/2553/testReport/junit/org.elasticsearch.indices.state/RareClusterStateIT/testDeleteCreateInOneBulk/\n\nThe test fails due to a race in acquiring `ShardLock` locks. \n\nWhen an index is deleted, an asynchronous process is started to process pending deletes on shards of that index. This process first acquires all `ShardLock` locks for the given index in numeric shard order. Meanwhile, the new index can already have been created, and some shard locks can already be held due to shard creation in `IndicesClusterStateService.applyInitializingShard`. For example, shard 0 is locked by `processPendingDeletes` but shard 1 is locked by `applyInitializingShard`. In that case, `processPendingDeletes` cannot lock shard 1 and blocks (and will hold lock on shard 0 for 30 minutes). This means that shard 0 cannot be initialised for 30 minutes.\n\nInteresting bits of stack trace:\n\n```\n\"elasticsearch[node_t1][generic][T#2]\" ID=602 TIMED_WAITING on java.util.concurrent.Semaphore$NonfairSync@2fc45c3b\n at sun.misc.Unsafe.park(Native Method)\n - timed waiting on java.util.concurrent.Semaphore$NonfairSync@2fc45c3b\n at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)\n at java.util.concurrent.Semaphore.tryAcquire(Semaphore.java:409)\n at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:555)\n at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:485)\n at org.elasticsearch.env.NodeEnvironment.lockAllForIndex(NodeEnvironment.java:429)\n at org.elasticsearch.indices.IndicesService.processPendingDeletes(IndicesService.java:649)\n at org.elasticsearch.cluster.action.index.NodeIndexDeletedAction.lockIndexAndAck(NodeIndexDeletedAction.java:101)\n at org.elasticsearch.cluster.action.index.NodeIndexDeletedAction.access$300(NodeIndexDeletedAction.java:46)\n at org.elasticsearch.cluster.action.index.NodeIndexDeletedAction$1.doRun(NodeIndexDeletedAction.java:90)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n Locked synchronizers:\n - java.util.concurrent.ThreadPoolExecutor$Worker@b17810e\n\n\n\"elasticsearch[node_t1][clusterService#updateTask][T#1]\" ID=591 TIMED_WAITING on java.util.concurrent.Semaphore$NonfairSync@7fdcd730\n at sun.misc.Unsafe.park(Native Method)\n - timed waiting on java.util.concurrent.Semaphore$NonfairSync@7fdcd730\n at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)\n at java.util.concurrent.Semaphore.tryAcquire(Semaphore.java:409)\n at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:555)\n at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:485)\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:234)\n - locked 
org.elasticsearch.index.IndexService@707e1798\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:628)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:528)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:185)\n - locked java.lang.Object@773b911a\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:517)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n Locked synchronizers:\n - java.util.concurrent.ThreadPoolExecutor$Worker@26f887da\n```\n", "comments": [ { "body": "really this is a bug in how elasticsearch works altogether. all these locks and all the algs we have work on the index name rather than on it's uuid which is a huge problem. That's the place to fix this rather than changing the way we process pending deletes.\n", "created_at": "2015-11-23T11:03:38Z" }, { "body": "we already have some issues related to this: https://github.com/elastic/elasticsearch/issues/13264 and https://github.com/elastic/elasticsearch/issues/13265 (this one is rather unrelated but goes into the same direction of safety)\n", "created_at": "2015-11-23T13:28:35Z" }, { "body": "I disabled the test on branches master, 2.x, 2.1 and 2.0.\n", "created_at": "2015-12-01T10:50:42Z" } ], "number": 14932, "title": "Processing pending deletes can block shard initialisation for 30 minutes" }
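The blocking scenario described above is easy to reproduce in miniature. The snippet below is a self-contained simplification, not Elasticsearch code: it only shows how a `tryAcquire` with a long timeout on one shard lock, taken while another shard lock is already held, stalls everything behind the lock that is already taken (in the real report the timeout was 30 minutes; here it is shortened to 2 seconds).

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class ShardLockOrderingExample {
    public static void main(String[] args) throws Exception {
        Semaphore shard0 = new Semaphore(1);
        Semaphore shard1 = new Semaphore(1);

        shard1.acquire(); // shard creation (applyInitializingShard) already holds shard 1

        Thread pendingDeletes = new Thread(() -> {
            try {
                shard0.acquire();                                     // lock shard 0 first
                boolean got = shard1.tryAcquire(2, TimeUnit.SECONDS); // blocks for the full timeout
                System.out.println("acquired shard 1: " + got);       // false
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                shard0.release(); // only now can anything waiting on shard 0 proceed
            }
        });
        pendingDeletes.start();
        pendingDeletes.join();
    }
}
```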
{ "body": "Following up https://github.com/elastic/elasticsearch/pull/16217 , This PR uses `${data.paths}/nodes/{node.id}/indices/{index.uuid}` \ninstead of `${data.paths}/nodes/{node.id}/indices/{index.name}` pattern to store index \nfolder on disk.\nThis way we avoid collision between indices that are named the same (deleted and recreated).\n\nCloses #13265\nCloses #13264\nCloses #14932\nCloses #15853\nCloses #14512\n", "number": 16442, "review_comments": [ { "body": "can we always use the Index class to refer to an index? we should get in this habit so we always have both name and uuid available.\n", "created_at": "2016-02-04T07:43:22Z" }, { "body": "this should really be `hasIndex(Index index)` no?\n", "created_at": "2016-02-04T16:37:31Z" }, { "body": "I think we should add a `Index#getPathIdentifier()` to have a single place to calculate it?\n", "created_at": "2016-02-04T16:38:21Z" }, { "body": "++\n", "created_at": "2016-02-04T16:38:29Z" }, { "body": "I think we need to fix this too that we don't read the index name from the directory name. We have to read some descriptor which I think is available everywhere now if not it's not an index. Then when we have that we can just use Index.java as a key\n", "created_at": "2016-02-04T16:40:16Z" }, { "body": "fixed\n", "created_at": "2016-02-05T03:23:35Z" }, { "body": "this has been removed\n", "created_at": "2016-02-05T03:23:39Z" }, { "body": "great idea, thanks for the suggestion!\n", "created_at": "2016-02-05T03:23:42Z" }, { "body": "Now we load all the indices state files upfront and then add them to dangling indices if they are not present in the cluster state or are not identified as dangling already. Is this what you were getting at by \"read some descriptor\"? This behaviour is different from before where we only used to load the state of indices that were relevant. We also don't try to rename the index according to the folder name, like before.\n", "created_at": "2016-02-05T03:32:13Z" }, { "body": "looking at the MetaDataStateFormat code, we should be able to read the format from the file - something stinks here :) . Also (different change) I think we should remove that setting and always write smile (and read what ever we find).\n", "created_at": "2016-03-08T12:52:09Z" }, { "body": "why do we need all these file copies? can't we just do a top level folder rename? consolidating to one folder is a 2.0 feature?\n", "created_at": "2016-03-08T13:04:44Z" }, { "body": "can we check the uuid and make sure it's the same and if not log a warning?\n", "created_at": "2016-03-08T13:08:25Z" }, { "body": "this feels too lenient to me. At the very least we should add a parameter indicating whether it's OK to be lenient (dangling indices detection ) and be strict (throw an exception on node start up). \n", "created_at": "2016-03-08T13:20:03Z" }, { "body": "I don't think this is possible any more? we added a filter on the path names?\n", "created_at": "2016-03-08T13:24:03Z" }, { "body": "> looking at the MetaDataStateFormat code, we should be able to read the format from the file\n\nDo you mean that MetaDataStateFormat#loadLatestState reads the format from the file? I hardcoded the format to be `SMILE`\n\n> Also (different change) I think we should remove that setting and always write smile (and read what ever we find).\n\nI will open an issue for this.\n", "created_at": "2016-03-09T14:30:59Z" }, { "body": "Simplified the upgrade by renaming the top level folder. 
Thanks for the suggestion.\n", "created_at": "2016-03-09T14:31:43Z" }, { "body": "added\n", "created_at": "2016-03-09T14:31:54Z" }, { "body": "Now we throw an ISE on startup when an invalid index folder name is found\n", "created_at": "2016-03-09T14:33:48Z" }, { "body": "I think we can make this trace?\n", "created_at": "2016-03-09T16:06:27Z" }, { "body": "can we add something that will explain why we're doing this of the unordained user? something ala upgrading indexing folder to new naming conventions\n", "created_at": "2016-03-09T16:08:06Z" }, { "body": "can we add a comment into why need this check? (we already renamed it before)\n", "created_at": "2016-03-09T16:11:33Z" }, { "body": "call it upgradeIndicesIfNeeded?\n", "created_at": "2016-03-09T16:11:37Z" }, { "body": "this might also be a `FileNotFoundException`? ie `} catch (NoSuchFileException| | FileNotFoundException ignored) {`\n", "created_at": "2016-03-13T13:25:33Z" }, { "body": "wow this is scary as shit I guess this means we are restarting multiple nodes at the same time in a full cluster restart. I think we can't do this neither support it on a shared FS. If we run into this situation we should fail and not swallow exceptions IMO\n", "created_at": "2016-03-13T13:28:39Z" }, { "body": "lets document that in the migration guides\n", "created_at": "2016-03-13T13:29:06Z" }, { "body": "@mikemccand this is unused and get removed in 4d38856f7017275e326df52a44b90662b2f3da6a - was this a mistake or did you just not remove this method?\n", "created_at": "2016-03-13T13:36:38Z" }, { "body": "this should never happen right? we just got it from a dir list?\n", "created_at": "2016-03-14T10:08:12Z" }, { "body": "can we log this in debug? we're going to log it on every node start.. \n", "created_at": "2016-03-14T10:08:50Z" }, { "body": "wondering if this should be a warn... it means we have an unknow folder in our universe? \n", "created_at": "2016-03-14T10:09:29Z" }, { "body": "can we name this readOnlyMetaDataMetaDataStateFormat\n", "created_at": "2016-03-14T10:11:00Z" } ], "title": "Rename index folder to index_uuid" }
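To visualize the layout change described in the PR body, here is a small standalone sketch. The data path, node id and UUID below are invented for illustration and are not read from a real node.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class IndexPathExample {
    public static void main(String[] args) {
        Path dataPath = Paths.get("/var/data/elasticsearch");
        String nodeId = "0";
        String indexName = "logs";                   // reusable after delete/recreate -> can collide
        String indexUuid = "e4Vf9fFLRkeXH59oL0d12w"; // unique per index instance -> no collision

        // before: ${data.paths}/nodes/{node.id}/indices/{index.name}
        Path before = dataPath.resolve("nodes").resolve(nodeId).resolve("indices").resolve(indexName);
        // after:  ${data.paths}/nodes/{node.id}/indices/{index.uuid}
        Path after = dataPath.resolve("nodes").resolve(nodeId).resolve("indices").resolve(indexUuid);

        System.out.println("before: " + before);
        System.out.println("after:  " + after);
    }
}
```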
{ "commits": [ { "message": "Add upgrader to upgrade old indices to new naming convention" }, { "message": "remove redundant getters in MetaData" }, { "message": "use index uuid as folder name to decouple index folder name from index name" }, { "message": "adapt tests to use index uuid as folder name" } ], "files": [ { "diff": "@@ -79,7 +79,7 @@ public ClusterStateHealth(MetaData clusterMetaData, RoutingTable routingTables)\n * @param clusterState The current cluster state. Must not be null.\n */\n public ClusterStateHealth(ClusterState clusterState) {\n- this(clusterState, clusterState.metaData().concreteAllIndices());\n+ this(clusterState, clusterState.metaData().getConcreteAllIndices());\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/cluster/health/ClusterStateHealth.java", "status": "modified" }, { "diff": "@@ -432,7 +432,7 @@ private Map<String, Set<String>> resolveSearchRoutingAllIndices(MetaData metaDat\n if (routing != null) {\n Set<String> r = Strings.splitStringByCommaToSet(routing);\n Map<String, Set<String>> routings = new HashMap<>();\n- String[] concreteIndices = metaData.concreteAllIndices();\n+ String[] concreteIndices = metaData.getConcreteAllIndices();\n for (String index : concreteIndices) {\n routings.put(index, r);\n }\n@@ -472,7 +472,7 @@ static boolean isExplicitAllPattern(List<String> aliasesOrIndices) {\n */\n boolean isPatternMatchingAllIndices(MetaData metaData, String[] indicesOrAliases, String[] concreteIndices) {\n // if we end up matching on all indices, check, if its a wildcard parameter, or a \"-something\" structure\n- if (concreteIndices.length == metaData.concreteAllIndices().length && indicesOrAliases.length > 0) {\n+ if (concreteIndices.length == metaData.getConcreteAllIndices().length && indicesOrAliases.length > 0) {\n \n //we might have something like /-test1,+test1 that would identify all indices\n //or something like /-test1 with test1 index missing and IndicesOptions.lenient()\n@@ -728,11 +728,11 @@ private boolean isEmptyOrTrivialWildcard(List<String> expressions) {\n \n private List<String> resolveEmptyOrTrivialWildcard(IndicesOptions options, MetaData metaData, boolean assertEmpty) {\n if (options.expandWildcardsOpen() && options.expandWildcardsClosed()) {\n- return Arrays.asList(metaData.concreteAllIndices());\n+ return Arrays.asList(metaData.getConcreteAllIndices());\n } else if (options.expandWildcardsOpen()) {\n- return Arrays.asList(metaData.concreteAllOpenIndices());\n+ return Arrays.asList(metaData.getConcreteAllOpenIndices());\n } else if (options.expandWildcardsClosed()) {\n- return Arrays.asList(metaData.concreteAllClosedIndices());\n+ return Arrays.asList(metaData.getConcreteAllClosedIndices());\n } else {\n assert assertEmpty : \"Shouldn't end up here\";\n return Collections.emptyList();", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java", "status": "modified" }, { "diff": "@@ -370,26 +370,14 @@ public ImmutableOpenMap<String, ImmutableOpenMap<String, MappingMetaData>> findM\n /**\n * Returns all the concrete indices.\n */\n- public String[] concreteAllIndices() {\n- return allIndices;\n- }\n-\n public String[] getConcreteAllIndices() {\n- return concreteAllIndices();\n- }\n-\n- public String[] concreteAllOpenIndices() {\n- return allOpenIndices;\n+ return allIndices;\n }\n \n public String[] getConcreteAllOpenIndices() {\n return allOpenIndices;\n }\n \n- public String[] concreteAllClosedIndices() {\n- return allClosedIndices;\n- }\n-\n public String[] 
getConcreteAllClosedIndices() {\n return allClosedIndices;\n }\n@@ -795,9 +783,9 @@ public static MetaData addDefaultUnitsIfNeeded(ESLogger logger, MetaData metaDat\n metaData.getIndices(),\n metaData.getTemplates(),\n metaData.getCustoms(),\n- metaData.concreteAllIndices(),\n- metaData.concreteAllOpenIndices(),\n- metaData.concreteAllClosedIndices(),\n+ metaData.getConcreteAllIndices(),\n+ metaData.getConcreteAllOpenIndices(),\n+ metaData.getConcreteAllClosedIndices(),\n metaData.getAliasAndIndexLookup());\n } else {\n // No changes:", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java", "status": "modified" }, { "diff": "@@ -0,0 +1,154 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.util;\n+\n+import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.gateway.MetaDataStateFormat;\n+import org.elasticsearch.gateway.MetaStateService;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.IndexSettings;\n+\n+import java.io.FileNotFoundException;\n+import java.io.IOException;\n+import java.nio.file.Files;\n+import java.nio.file.NoSuchFileException;\n+import java.nio.file.Path;\n+import java.nio.file.StandardCopyOption;\n+\n+/**\n+ * Renames index folders from {index.name} to {index.uuid}\n+ */\n+public class IndexFolderUpgrader {\n+ private final NodeEnvironment nodeEnv;\n+ private final Settings settings;\n+ private final ESLogger logger = Loggers.getLogger(IndexFolderUpgrader.class);\n+ private final MetaDataStateFormat<IndexMetaData> indexStateFormat = readOnlyIndexMetaDataStateFormat();\n+\n+ /**\n+ * Creates a new upgrader instance\n+ * @param settings node settings\n+ * @param nodeEnv the node env to operate on\n+ */\n+ IndexFolderUpgrader(Settings settings, NodeEnvironment nodeEnv) {\n+ this.settings = settings;\n+ this.nodeEnv = nodeEnv;\n+ }\n+\n+ /**\n+ * Moves the index folder found in <code>source</code> to <code>target</code>\n+ */\n+ void upgrade(final Index index, final Path source, final Path target) throws IOException {\n+ boolean success = false;\n+ try {\n+ Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);\n+ success = true;\n+ } catch (NoSuchFileException | FileNotFoundException exception) {\n+ // thrown when the source is non-existent because the folder was renamed\n+ // by 
another node (shared FS) after we checked if the target exists\n+ logger.error(\"multiple nodes trying to upgrade [{}] in parallel, retry upgrading with single node\",\n+ exception, target);\n+ throw exception;\n+ } finally {\n+ if (success) {\n+ logger.info(\"{} moved from [{}] to [{}]\", index, source, target);\n+ logger.trace(\"{} syncing directory [{}]\", index, target);\n+ IOUtils.fsync(target, true);\n+ }\n+ }\n+ }\n+\n+ /**\n+ * Renames <code>indexFolderName</code> index folders found in node paths and custom path\n+ * iff {@link #needsUpgrade(Index, String)} is true.\n+ * Index folder in custom paths are renamed first followed by index folders in each node path.\n+ */\n+ void upgrade(final String indexFolderName) throws IOException {\n+ for (NodeEnvironment.NodePath nodePath : nodeEnv.nodePaths()) {\n+ final Path indexFolderPath = nodePath.indicesPath.resolve(indexFolderName);\n+ final IndexMetaData indexMetaData = indexStateFormat.loadLatestState(logger, indexFolderPath);\n+ if (indexMetaData != null) {\n+ final Index index = indexMetaData.getIndex();\n+ if (needsUpgrade(index, indexFolderName)) {\n+ logger.info(\"{} upgrading [{}] to new naming convention\", index, indexFolderPath);\n+ final IndexSettings indexSettings = new IndexSettings(indexMetaData, settings);\n+ if (indexSettings.hasCustomDataPath()) {\n+ // we rename index folder in custom path before renaming them in any node path\n+ // to have the index state under a not-yet-upgraded index folder, which we use to\n+ // continue renaming after a incomplete upgrade.\n+ final Path customLocationSource = nodeEnv.resolveBaseCustomLocation(indexSettings)\n+ .resolve(indexFolderName);\n+ final Path customLocationTarget = customLocationSource.resolveSibling(index.getUUID());\n+ // we rename the folder in custom path only the first time we encounter a state\n+ // in a node path, which needs upgrading, it is a no-op for subsequent node paths\n+ if (Files.exists(customLocationSource) // might not exist if no data was written for this index\n+ && Files.exists(customLocationTarget) == false) {\n+ upgrade(index, customLocationSource, customLocationTarget);\n+ } else {\n+ logger.info(\"[{}] no upgrade needed - already upgraded\", customLocationTarget);\n+ }\n+ }\n+ upgrade(index, indexFolderPath, indexFolderPath.resolveSibling(index.getUUID()));\n+ } else {\n+ logger.debug(\"[{}] no upgrade needed - already upgraded\", indexFolderPath);\n+ }\n+ } else {\n+ logger.warn(\"[{}] no index state found - ignoring\", indexFolderPath);\n+ }\n+ }\n+ }\n+\n+ /**\n+ * Upgrades all indices found under <code>nodeEnv</code>. 
Already upgraded indices are ignored.\n+ */\n+ public static void upgradeIndicesIfNeeded(final Settings settings, final NodeEnvironment nodeEnv) throws IOException {\n+ final IndexFolderUpgrader upgrader = new IndexFolderUpgrader(settings, nodeEnv);\n+ for (String indexFolderName : nodeEnv.availableIndexFolders()) {\n+ upgrader.upgrade(indexFolderName);\n+ }\n+ }\n+\n+ static boolean needsUpgrade(Index index, String indexFolderName) {\n+ return indexFolderName.equals(index.getUUID()) == false;\n+ }\n+\n+ static MetaDataStateFormat<IndexMetaData> readOnlyIndexMetaDataStateFormat() {\n+ // NOTE: XContentType param is not used as we use the format read from the serialized index state\n+ return new MetaDataStateFormat<IndexMetaData>(XContentType.SMILE, MetaStateService.INDEX_STATE_FILE_PREFIX) {\n+\n+ @Override\n+ public void toXContent(XContentBuilder builder, IndexMetaData state) throws IOException {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public IndexMetaData fromXContent(XContentParser parser) throws IOException {\n+ return IndexMetaData.Builder.fromXContent(parser);\n+ }\n+ };\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/common/util/IndexFolderUpgrader.java", "status": "added" }, { "diff": "@@ -70,7 +70,6 @@\n import java.util.concurrent.Semaphore;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n-import java.util.stream.Collectors;\n \n import static java.util.Collections.unmodifiableSet;\n \n@@ -89,7 +88,7 @@ public static class NodePath {\n * not running on Linux, or we hit an exception trying), True means the device possibly spins and False means it does not. */\n public final Boolean spins;\n \n- public NodePath(Path path, Environment environment) throws IOException {\n+ public NodePath(Path path) throws IOException {\n this.path = path;\n this.indicesPath = path.resolve(INDICES_FOLDER);\n this.fileStore = Environment.getFileStore(path);\n@@ -102,16 +101,18 @@ public NodePath(Path path, Environment environment) throws IOException {\n \n /**\n * Resolves the given shards directory against this NodePath\n+ * ${data.paths}/nodes/{node.id}/indices/{index.uuid}/{shard.id}\n */\n public Path resolve(ShardId shardId) {\n return resolve(shardId.getIndex()).resolve(Integer.toString(shardId.id()));\n }\n \n /**\n- * Resolves the given indexes directory against this NodePath\n+ * Resolves index directory against this NodePath\n+ * ${data.paths}/nodes/{node.id}/indices/{index.uuid}\n */\n public Path resolve(Index index) {\n- return indicesPath.resolve(index.getName());\n+ return indicesPath.resolve(index.getUUID());\n }\n \n @Override\n@@ -131,7 +132,7 @@ public String toString() {\n \n private final int localNodeId;\n private final AtomicBoolean closed = new AtomicBoolean(false);\n- private final Map<ShardLockKey, InternalShardLock> shardLocks = new HashMap<>();\n+ private final Map<ShardId, InternalShardLock> shardLocks = new HashMap<>();\n \n /**\n * Maximum number of data nodes that should run in an environment.\n@@ -186,7 +187,7 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce\n logger.trace(\"obtaining node lock on {} ...\", dir.toAbsolutePath());\n try {\n locks[dirIndex] = luceneDir.obtainLock(NODE_LOCK_FILENAME);\n- nodePaths[dirIndex] = new NodePath(dir, environment);\n+ nodePaths[dirIndex] = new NodePath(dir);\n localNodeId = possibleLockId;\n } catch (LockObtainFailedException ex) {\n logger.trace(\"failed to obtain node lock on {}\", 
dir.toAbsolutePath());\n@@ -445,11 +446,11 @@ public void deleteIndexDirectorySafe(Index index, long lockTimeoutMS, IndexSetti\n * @param indexSettings settings for the index being deleted\n */\n public void deleteIndexDirectoryUnderLock(Index index, IndexSettings indexSettings) throws IOException {\n- final Path[] indexPaths = indexPaths(index.getName());\n+ final Path[] indexPaths = indexPaths(index);\n logger.trace(\"deleting index {} directory, paths({}): [{}]\", index, indexPaths.length, indexPaths);\n IOUtils.rm(indexPaths);\n if (indexSettings.hasCustomDataPath()) {\n- Path customLocation = resolveCustomLocation(indexSettings, index.getName());\n+ Path customLocation = resolveIndexCustomLocation(indexSettings);\n logger.trace(\"deleting custom index {} directory [{}]\", index, customLocation);\n IOUtils.rm(customLocation);\n }\n@@ -517,17 +518,16 @@ public ShardLock shardLock(ShardId id) throws IOException {\n */\n public ShardLock shardLock(final ShardId shardId, long lockTimeoutMS) throws IOException {\n logger.trace(\"acquiring node shardlock on [{}], timeout [{}]\", shardId, lockTimeoutMS);\n- final ShardLockKey shardLockKey = new ShardLockKey(shardId);\n final InternalShardLock shardLock;\n final boolean acquired;\n synchronized (shardLocks) {\n- if (shardLocks.containsKey(shardLockKey)) {\n- shardLock = shardLocks.get(shardLockKey);\n+ if (shardLocks.containsKey(shardId)) {\n+ shardLock = shardLocks.get(shardId);\n shardLock.incWaitCount();\n acquired = false;\n } else {\n- shardLock = new InternalShardLock(shardLockKey);\n- shardLocks.put(shardLockKey, shardLock);\n+ shardLock = new InternalShardLock(shardId);\n+ shardLocks.put(shardId, shardLock);\n acquired = true;\n }\n }\n@@ -547,7 +547,7 @@ public ShardLock shardLock(final ShardId shardId, long lockTimeoutMS) throws IOE\n @Override\n protected void closeInternal() {\n shardLock.release();\n- logger.trace(\"released shard lock for [{}]\", shardLockKey);\n+ logger.trace(\"released shard lock for [{}]\", shardId);\n }\n };\n }\n@@ -559,51 +559,7 @@ protected void closeInternal() {\n */\n public Set<ShardId> lockedShards() {\n synchronized (shardLocks) {\n- Set<ShardId> lockedShards = shardLocks.keySet().stream()\n- .map(shardLockKey -> new ShardId(new Index(shardLockKey.indexName, \"_na_\"), shardLockKey.shardId)).collect(Collectors.toSet());\n- return unmodifiableSet(lockedShards);\n- }\n- }\n-\n- // a key for the shard lock. 
we can't use shardIds, because the contain\n- // the index uuid, but we want the lock semantics to the same as we map indices to disk folders, i.e., without the uuid (for now).\n- private final class ShardLockKey {\n- final String indexName;\n- final int shardId;\n-\n- public ShardLockKey(final ShardId shardId) {\n- this.indexName = shardId.getIndexName();\n- this.shardId = shardId.id();\n- }\n-\n- @Override\n- public String toString() {\n- return \"[\" + indexName + \"][\" + shardId + \"]\";\n- }\n-\n- @Override\n- public boolean equals(Object o) {\n- if (this == o) {\n- return true;\n- }\n- if (o == null || getClass() != o.getClass()) {\n- return false;\n- }\n-\n- ShardLockKey that = (ShardLockKey) o;\n-\n- if (shardId != that.shardId) {\n- return false;\n- }\n- return indexName.equals(that.indexName);\n-\n- }\n-\n- @Override\n- public int hashCode() {\n- int result = indexName.hashCode();\n- result = 31 * result + shardId;\n- return result;\n+ return unmodifiableSet(new HashSet<>(shardLocks.keySet()));\n }\n }\n \n@@ -616,10 +572,10 @@ private final class InternalShardLock {\n */\n private final Semaphore mutex = new Semaphore(1);\n private int waitCount = 1; // guarded by shardLocks\n- private final ShardLockKey lockKey;\n+ private final ShardId shardId;\n \n- InternalShardLock(ShardLockKey id) {\n- lockKey = id;\n+ InternalShardLock(ShardId shardId) {\n+ this.shardId = shardId;\n mutex.acquireUninterruptibly();\n }\n \n@@ -639,10 +595,10 @@ private void decWaitCount() {\n synchronized (shardLocks) {\n assert waitCount > 0 : \"waitCount is \" + waitCount + \" but should be > 0\";\n --waitCount;\n- logger.trace(\"shard lock wait count for [{}] is now [{}]\", lockKey, waitCount);\n+ logger.trace(\"shard lock wait count for {} is now [{}]\", shardId, waitCount);\n if (waitCount == 0) {\n- logger.trace(\"last shard lock wait decremented, removing lock for [{}]\", lockKey);\n- InternalShardLock remove = shardLocks.remove(lockKey);\n+ logger.trace(\"last shard lock wait decremented, removing lock for {}\", shardId);\n+ InternalShardLock remove = shardLocks.remove(shardId);\n assert remove != null : \"Removed lock was null\";\n }\n }\n@@ -651,11 +607,11 @@ private void decWaitCount() {\n void acquire(long timeoutInMillis) throws LockObtainFailedException{\n try {\n if (mutex.tryAcquire(timeoutInMillis, TimeUnit.MILLISECONDS) == false) {\n- throw new LockObtainFailedException(\"Can't lock shard \" + lockKey + \", timed out after \" + timeoutInMillis + \"ms\");\n+ throw new LockObtainFailedException(\"Can't lock shard \" + shardId + \", timed out after \" + timeoutInMillis + \"ms\");\n }\n } catch (InterruptedException e) {\n Thread.currentThread().interrupt();\n- throw new LockObtainFailedException(\"Can't lock shard \" + lockKey + \", interrupted\", e);\n+ throw new LockObtainFailedException(\"Can't lock shard \" + shardId + \", interrupted\", e);\n }\n }\n }\n@@ -698,11 +654,11 @@ public NodePath[] nodePaths() {\n /**\n * Returns all index paths.\n */\n- public Path[] indexPaths(String indexName) {\n+ public Path[] indexPaths(Index index) {\n assert assertEnvIsLocked();\n Path[] indexPaths = new Path[nodePaths.length];\n for (int i = 0; i < nodePaths.length; i++) {\n- indexPaths[i] = nodePaths[i].indicesPath.resolve(indexName);\n+ indexPaths[i] = nodePaths[i].resolve(index);\n }\n return indexPaths;\n }\n@@ -725,25 +681,47 @@ public Path[] availableShardPaths(ShardId shardId) {\n return shardLocations;\n }\n \n- public Set<String> findAllIndices() throws IOException {\n+ /**\n+ * Returns all 
folder names in ${data.paths}/nodes/{node.id}/indices folder\n+ */\n+ public Set<String> availableIndexFolders() throws IOException {\n if (nodePaths == null || locks == null) {\n throw new IllegalStateException(\"node is not configured to store local location\");\n }\n assert assertEnvIsLocked();\n- Set<String> indices = new HashSet<>();\n+ Set<String> indexFolders = new HashSet<>();\n for (NodePath nodePath : nodePaths) {\n Path indicesLocation = nodePath.indicesPath;\n if (Files.isDirectory(indicesLocation)) {\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(indicesLocation)) {\n for (Path index : stream) {\n if (Files.isDirectory(index)) {\n- indices.add(index.getFileName().toString());\n+ indexFolders.add(index.getFileName().toString());\n }\n }\n }\n }\n }\n- return indices;\n+ return indexFolders;\n+\n+ }\n+\n+ /**\n+ * Resolves all existing paths to <code>indexFolderName</code> in ${data.paths}/nodes/{node.id}/indices\n+ */\n+ public Path[] resolveIndexFolder(String indexFolderName) throws IOException {\n+ if (nodePaths == null || locks == null) {\n+ throw new IllegalStateException(\"node is not configured to store local location\");\n+ }\n+ assert assertEnvIsLocked();\n+ List<Path> paths = new ArrayList<>(nodePaths.length);\n+ for (NodePath nodePath : nodePaths) {\n+ Path indexFolder = nodePath.indicesPath.resolve(indexFolderName);\n+ if (Files.exists(indexFolder)) {\n+ paths.add(indexFolder);\n+ }\n+ }\n+ return paths.toArray(new Path[paths.size()]);\n }\n \n /**\n@@ -761,13 +739,13 @@ public Set<ShardId> findAllShardIds(final Index index) throws IOException {\n }\n assert assertEnvIsLocked();\n final Set<ShardId> shardIds = new HashSet<>();\n- String indexName = index.getName();\n+ final String indexUniquePathId = index.getUUID();\n for (final NodePath nodePath : nodePaths) {\n Path location = nodePath.indicesPath;\n if (Files.isDirectory(location)) {\n try (DirectoryStream<Path> indexStream = Files.newDirectoryStream(location)) {\n for (Path indexPath : indexStream) {\n- if (indexName.equals(indexPath.getFileName().toString())) {\n+ if (indexUniquePathId.equals(indexPath.getFileName().toString())) {\n shardIds.addAll(findAllShardsForIndex(indexPath, index));\n }\n }\n@@ -778,7 +756,7 @@ public Set<ShardId> findAllShardIds(final Index index) throws IOException {\n }\n \n private static Set<ShardId> findAllShardsForIndex(Path indexPath, Index index) throws IOException {\n- assert indexPath.getFileName().toString().equals(index.getName());\n+ assert indexPath.getFileName().toString().equals(index.getUUID());\n Set<ShardId> shardIds = new HashSet<>();\n if (Files.isDirectory(indexPath)) {\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(indexPath)) {\n@@ -861,7 +839,7 @@ Settings getSettings() { // for testing\n *\n * @param indexSettings settings for the index\n */\n- private Path resolveCustomLocation(IndexSettings indexSettings) {\n+ public Path resolveBaseCustomLocation(IndexSettings indexSettings) {\n String customDataDir = indexSettings.customDataPath();\n if (customDataDir != null) {\n // This assert is because this should be caught by MetaDataCreateIndexService\n@@ -882,10 +860,9 @@ private Path resolveCustomLocation(IndexSettings indexSettings) {\n * the root path for the index.\n *\n * @param indexSettings settings for the index\n- * @param indexName index to resolve the path for\n */\n- private Path resolveCustomLocation(IndexSettings indexSettings, final String indexName) {\n- return resolveCustomLocation(indexSettings).resolve(indexName);\n+ 
private Path resolveIndexCustomLocation(IndexSettings indexSettings) {\n+ return resolveBaseCustomLocation(indexSettings).resolve(indexSettings.getUUID());\n }\n \n /**\n@@ -897,7 +874,7 @@ private Path resolveCustomLocation(IndexSettings indexSettings, final String ind\n * @param shardId shard to resolve the path to\n */\n public Path resolveCustomLocation(IndexSettings indexSettings, final ShardId shardId) {\n- return resolveCustomLocation(indexSettings, shardId.getIndexName()).resolve(Integer.toString(shardId.id()));\n+ return resolveIndexCustomLocation(indexSettings).resolve(Integer.toString(shardId.id()));\n }\n \n /**\n@@ -921,22 +898,24 @@ private void assertCanWrite() throws IOException {\n for (Path path : nodeDataPaths()) { // check node-paths are writable\n tryWriteTempFile(path);\n }\n- for (String index : this.findAllIndices()) {\n- for (Path path : this.indexPaths(index)) { // check index paths are writable\n- Path statePath = path.resolve(MetaDataStateFormat.STATE_DIR_NAME);\n- tryWriteTempFile(statePath);\n- tryWriteTempFile(path);\n- }\n- for (ShardId shardID : this.findAllShardIds(new Index(index, IndexMetaData.INDEX_UUID_NA_VALUE))) {\n- Path[] paths = this.availableShardPaths(shardID);\n- for (Path path : paths) { // check shard paths are writable\n- Path indexDir = path.resolve(ShardPath.INDEX_FOLDER_NAME);\n- Path statePath = path.resolve(MetaDataStateFormat.STATE_DIR_NAME);\n- Path translogDir = path.resolve(ShardPath.TRANSLOG_FOLDER_NAME);\n- tryWriteTempFile(indexDir);\n- tryWriteTempFile(translogDir);\n- tryWriteTempFile(statePath);\n- tryWriteTempFile(path);\n+ for (String indexFolderName : this.availableIndexFolders()) {\n+ for (Path indexPath : this.resolveIndexFolder(indexFolderName)) { // check index paths are writable\n+ Path indexStatePath = indexPath.resolve(MetaDataStateFormat.STATE_DIR_NAME);\n+ tryWriteTempFile(indexStatePath);\n+ tryWriteTempFile(indexPath);\n+ try (DirectoryStream<Path> stream = Files.newDirectoryStream(indexPath)) {\n+ for (Path shardPath : stream) {\n+ String fileName = shardPath.getFileName().toString();\n+ if (Files.isDirectory(shardPath) && fileName.chars().allMatch(Character::isDigit)) {\n+ Path indexDir = shardPath.resolve(ShardPath.INDEX_FOLDER_NAME);\n+ Path statePath = shardPath.resolve(MetaDataStateFormat.STATE_DIR_NAME);\n+ Path translogDir = shardPath.resolve(ShardPath.TRANSLOG_FOLDER_NAME);\n+ tryWriteTempFile(indexDir);\n+ tryWriteTempFile(translogDir);\n+ tryWriteTempFile(statePath);\n+ tryWriteTempFile(shardPath);\n+ }\n+ }\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/env/NodeEnvironment.java", "status": "modified" }, { "diff": "@@ -19,19 +19,25 @@\n \n package org.elasticsearch.gateway;\n \n+import com.carrotsearch.hppc.cursors.ObjectCursor;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.Index;\n \n+import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Collections;\n import java.util.HashMap;\n+import java.util.HashSet;\n+import java.util.List;\n import java.util.Map;\n import java.util.Set;\n+import java.util.stream.Collectors;\n \n import static java.util.Collections.emptyMap;\n import static 
java.util.Collections.unmodifiableMap;\n@@ -47,7 +53,7 @@ public class DanglingIndicesState extends AbstractComponent {\n private final MetaStateService metaStateService;\n private final LocalAllocateDangledIndices allocateDangledIndices;\n \n- private final Map<String, IndexMetaData> danglingIndices = ConcurrentCollections.newConcurrentMap();\n+ private final Map<Index, IndexMetaData> danglingIndices = ConcurrentCollections.newConcurrentMap();\n \n @Inject\n public DanglingIndicesState(Settings settings, NodeEnvironment nodeEnv, MetaStateService metaStateService,\n@@ -74,7 +80,7 @@ public void processDanglingIndices(MetaData metaData) {\n /**\n * The current set of dangling indices.\n */\n- Map<String, IndexMetaData> getDanglingIndices() {\n+ Map<Index, IndexMetaData> getDanglingIndices() {\n // This might be a good use case for CopyOnWriteHashMap\n return unmodifiableMap(new HashMap<>(danglingIndices));\n }\n@@ -83,10 +89,16 @@ Map<String, IndexMetaData> getDanglingIndices() {\n * Cleans dangling indices if they are already allocated on the provided meta data.\n */\n void cleanupAllocatedDangledIndices(MetaData metaData) {\n- for (String danglingIndex : danglingIndices.keySet()) {\n- if (metaData.hasIndex(danglingIndex)) {\n- logger.debug(\"[{}] no longer dangling (created), removing from dangling list\", danglingIndex);\n- danglingIndices.remove(danglingIndex);\n+ for (Index index : danglingIndices.keySet()) {\n+ final IndexMetaData indexMetaData = metaData.index(index);\n+ if (indexMetaData != null && indexMetaData.getIndex().getName().equals(index.getName())) {\n+ if (indexMetaData.getIndex().getUUID().equals(index.getUUID()) == false) {\n+ logger.warn(\"[{}] can not be imported as a dangling index, as there is already another index \" +\n+ \"with the same name but a different uuid. 
local index will be ignored (but not deleted)\", index);\n+ } else {\n+ logger.debug(\"[{}] no longer dangling (created), removing from dangling list\", index);\n+ }\n+ danglingIndices.remove(index);\n }\n }\n }\n@@ -104,36 +116,30 @@ void findNewAndAddDanglingIndices(MetaData metaData) {\n * that have state on disk, but are not part of the provided meta data, or not detected\n * as dangled already.\n */\n- Map<String, IndexMetaData> findNewDanglingIndices(MetaData metaData) {\n- final Set<String> indices;\n- try {\n- indices = nodeEnv.findAllIndices();\n- } catch (Throwable e) {\n- logger.warn(\"failed to list dangling indices\", e);\n- return emptyMap();\n+ Map<Index, IndexMetaData> findNewDanglingIndices(MetaData metaData) {\n+ final Set<String> excludeIndexPathIds = new HashSet<>(metaData.indices().size() + danglingIndices.size());\n+ for (ObjectCursor<IndexMetaData> cursor : metaData.indices().values()) {\n+ excludeIndexPathIds.add(cursor.value.getIndex().getUUID());\n }\n-\n- Map<String, IndexMetaData> newIndices = new HashMap<>();\n- for (String indexName : indices) {\n- if (metaData.hasIndex(indexName) == false && danglingIndices.containsKey(indexName) == false) {\n- try {\n- IndexMetaData indexMetaData = metaStateService.loadIndexState(indexName);\n- if (indexMetaData != null) {\n- logger.info(\"[{}] dangling index, exists on local file system, but not in cluster metadata, auto import to cluster state\", indexName);\n- if (!indexMetaData.getIndex().getName().equals(indexName)) {\n- logger.info(\"dangled index directory name is [{}], state name is [{}], renaming to directory name\", indexName, indexMetaData.getIndex());\n- indexMetaData = IndexMetaData.builder(indexMetaData).index(indexName).build();\n- }\n- newIndices.put(indexName, indexMetaData);\n- } else {\n- logger.debug(\"[{}] dangling index directory detected, but no state found\", indexName);\n- }\n- } catch (Throwable t) {\n- logger.warn(\"[{}] failed to load index state for detected dangled index\", t, indexName);\n+ excludeIndexPathIds.addAll(danglingIndices.keySet().stream().map(Index::getUUID).collect(Collectors.toList()));\n+ try {\n+ final List<IndexMetaData> indexMetaDataList = metaStateService.loadIndicesStates(excludeIndexPathIds::contains);\n+ Map<Index, IndexMetaData> newIndices = new HashMap<>(indexMetaDataList.size());\n+ for (IndexMetaData indexMetaData : indexMetaDataList) {\n+ if (metaData.hasIndex(indexMetaData.getIndex().getName())) {\n+ logger.warn(\"[{}] can not be imported as a dangling index, as index with same name already exists in cluster metadata\",\n+ indexMetaData.getIndex());\n+ } else {\n+ logger.info(\"[{}] dangling index, exists on local file system, but not in cluster metadata, auto import to cluster state\",\n+ indexMetaData.getIndex());\n+ newIndices.put(indexMetaData.getIndex(), indexMetaData);\n }\n }\n+ return newIndices;\n+ } catch (IOException e) {\n+ logger.warn(\"failed to list dangling indices\", e);\n+ return emptyMap();\n }\n- return newIndices;\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/gateway/DanglingIndicesState.java", "status": "modified" }, { "diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.util.IndexFolderUpgrader;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.Index;\n \n@@ -86,6 +87,7 @@ public GatewayMetaState(Settings settings, NodeEnvironment 
nodeEnv, MetaStateSer\n try {\n ensureNoPre019State();\n pre20Upgrade();\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(settings, nodeEnv);\n long startNS = System.nanoTime();\n metaStateService.loadFullState();\n logger.debug(\"took {} to load state\", TimeValue.timeValueMillis(TimeValue.nsecToMSec(System.nanoTime() - startNS)));\n@@ -130,7 +132,7 @@ public void clusterChanged(ClusterChangedEvent event) {\n for (IndexMetaData indexMetaData : newMetaData) {\n IndexMetaData indexMetaDataOnDisk = null;\n if (indexMetaData.getState().equals(IndexMetaData.State.CLOSE)) {\n- indexMetaDataOnDisk = metaStateService.loadIndexState(indexMetaData.getIndex().getName());\n+ indexMetaDataOnDisk = metaStateService.loadIndexState(indexMetaData.getIndex());\n }\n if (indexMetaDataOnDisk != null) {\n newPreviouslyWrittenIndices.add(indexMetaDataOnDisk.getIndex());\n@@ -158,15 +160,14 @@ public void clusterChanged(ClusterChangedEvent event) {\n // check and write changes in indices\n for (IndexMetaWriteInfo indexMetaWrite : writeInfo) {\n try {\n- metaStateService.writeIndex(indexMetaWrite.reason, indexMetaWrite.newMetaData, indexMetaWrite.previousMetaData);\n+ metaStateService.writeIndex(indexMetaWrite.reason, indexMetaWrite.newMetaData);\n } catch (Throwable e) {\n success = false;\n }\n }\n }\n \n danglingIndicesState.processDanglingIndices(newMetaData);\n-\n if (success) {\n previousMetaData = newMetaData;\n previouslyWrittenIndices = unmodifiableSet(relevantIndices);\n@@ -233,7 +234,8 @@ private void pre20Upgrade() throws Exception {\n // We successfully checked all indices for backward compatibility and found no non-upgradable indices, which\n // means the upgrade can continue. Now it's safe to overwrite index metadata with the new version.\n for (IndexMetaData indexMetaData : updateIndexMetaData) {\n- metaStateService.writeIndex(\"upgrade\", indexMetaData, null);\n+ // since we still haven't upgraded the index folders, we write index state in the old folder\n+ metaStateService.writeIndex(\"upgrade\", indexMetaData, nodeEnv.resolveIndexFolder(indexMetaData.getIndex().getName()));\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/gateway/GatewayMetaState.java", "status": "modified" }, { "diff": "@@ -33,9 +33,12 @@\n import org.elasticsearch.index.Index;\n \n import java.io.IOException;\n+import java.nio.file.Path;\n+import java.util.ArrayList;\n import java.util.HashMap;\n+import java.util.List;\n import java.util.Map;\n-import java.util.Set;\n+import java.util.function.Predicate;\n \n /**\n * Handles writing and loading both {@link MetaData} and {@link IndexMetaData}\n@@ -45,7 +48,7 @@ public class MetaStateService extends AbstractComponent {\n static final String FORMAT_SETTING = \"gateway.format\";\n \n static final String GLOBAL_STATE_FILE_PREFIX = \"global-\";\n- private static final String INDEX_STATE_FILE_PREFIX = \"state-\";\n+ public static final String INDEX_STATE_FILE_PREFIX = \"state-\";\n \n private final NodeEnvironment nodeEnv;\n \n@@ -91,14 +94,12 @@ MetaData loadFullState() throws Exception {\n } else {\n metaDataBuilder = MetaData.builder();\n }\n-\n- final Set<String> indices = nodeEnv.findAllIndices();\n- for (String index : indices) {\n- IndexMetaData indexMetaData = loadIndexState(index);\n- if (indexMetaData == null) {\n- logger.debug(\"[{}] failed to find metadata for existing index location\", index);\n- } else {\n+ for (String indexFolderName : nodeEnv.availableIndexFolders()) {\n+ IndexMetaData indexMetaData = indexStateFormat.loadLatestState(logger, 
nodeEnv.resolveIndexFolder(indexFolderName));\n+ if (indexMetaData != null) {\n metaDataBuilder.put(indexMetaData, false);\n+ } else {\n+ logger.debug(\"[{}] failed to find metadata for existing index location\", indexFolderName);\n }\n }\n return metaDataBuilder.build();\n@@ -108,10 +109,35 @@ MetaData loadFullState() throws Exception {\n * Loads the index state for the provided index name, returning null if doesn't exists.\n */\n @Nullable\n- IndexMetaData loadIndexState(String index) throws IOException {\n+ IndexMetaData loadIndexState(Index index) throws IOException {\n return indexStateFormat.loadLatestState(logger, nodeEnv.indexPaths(index));\n }\n \n+ /**\n+ * Loads all indices states available on disk\n+ */\n+ List<IndexMetaData> loadIndicesStates(Predicate<String> excludeIndexPathIdsPredicate) throws IOException {\n+ List<IndexMetaData> indexMetaDataList = new ArrayList<>();\n+ for (String indexFolderName : nodeEnv.availableIndexFolders()) {\n+ if (excludeIndexPathIdsPredicate.test(indexFolderName)) {\n+ continue;\n+ }\n+ IndexMetaData indexMetaData = indexStateFormat.loadLatestState(logger,\n+ nodeEnv.resolveIndexFolder(indexFolderName));\n+ if (indexMetaData != null) {\n+ final String indexPathId = indexMetaData.getIndex().getUUID();\n+ if (indexFolderName.equals(indexPathId)) {\n+ indexMetaDataList.add(indexMetaData);\n+ } else {\n+ throw new IllegalStateException(\"[\" + indexFolderName+ \"] invalid index folder name, rename to [\" + indexPathId + \"]\");\n+ }\n+ } else {\n+ logger.debug(\"[{}] failed to find metadata for existing index location\", indexFolderName);\n+ }\n+ }\n+ return indexMetaDataList;\n+ }\n+\n /**\n * Loads the global state, *without* index state, see {@link #loadFullState()} for that.\n */\n@@ -129,13 +155,22 @@ MetaData loadGlobalState() throws IOException {\n /**\n * Writes the index state.\n */\n- void writeIndex(String reason, IndexMetaData indexMetaData, @Nullable IndexMetaData previousIndexMetaData) throws Exception {\n- logger.trace(\"[{}] writing state, reason [{}]\", indexMetaData.getIndex(), reason);\n+ void writeIndex(String reason, IndexMetaData indexMetaData) throws IOException {\n+ writeIndex(reason, indexMetaData, nodeEnv.indexPaths(indexMetaData.getIndex()));\n+ }\n+\n+ /**\n+ * Writes the index state in <code>locations</code>, use {@link #writeGlobalState(String, MetaData)}\n+ * to write index state in index paths\n+ */\n+ void writeIndex(String reason, IndexMetaData indexMetaData, Path[] locations) throws IOException {\n+ final Index index = indexMetaData.getIndex();\n+ logger.trace(\"[{}] writing state, reason [{}]\", index, reason);\n try {\n- indexStateFormat.write(indexMetaData, indexMetaData.getVersion(), nodeEnv.indexPaths(indexMetaData.getIndex().getName()));\n+ indexStateFormat.write(indexMetaData, indexMetaData.getVersion(), locations);\n } catch (Throwable ex) {\n- logger.warn(\"[{}]: failed to write index state\", ex, indexMetaData.getIndex());\n- throw new IOException(\"failed to write state for [\" + indexMetaData.getIndex() + \"]\", ex);\n+ logger.warn(\"[{}]: failed to write index state\", ex, index);\n+ throw new IOException(\"failed to write state for [\" + index + \"]\", ex);\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/gateway/MetaStateService.java", "status": "modified" }, { "diff": "@@ -29,30 +29,27 @@\n import java.nio.file.FileStore;\n import java.nio.file.Files;\n import java.nio.file.Path;\n-import java.util.HashMap;\n import java.util.Map;\n \n public final class ShardPath {\n public static 
final String INDEX_FOLDER_NAME = \"index\";\n public static final String TRANSLOG_FOLDER_NAME = \"translog\";\n \n private final Path path;\n- private final String indexUUID;\n private final ShardId shardId;\n private final Path shardStatePath;\n private final boolean isCustomDataPath;\n \n- public ShardPath(boolean isCustomDataPath, Path dataPath, Path shardStatePath, String indexUUID, ShardId shardId) {\n+ public ShardPath(boolean isCustomDataPath, Path dataPath, Path shardStatePath, ShardId shardId) {\n assert dataPath.getFileName().toString().equals(Integer.toString(shardId.id())) : \"dataPath must end with the shard ID but didn't: \" + dataPath.toString();\n assert shardStatePath.getFileName().toString().equals(Integer.toString(shardId.id())) : \"shardStatePath must end with the shard ID but didn't: \" + dataPath.toString();\n- assert dataPath.getParent().getFileName().toString().equals(shardId.getIndexName()) : \"dataPath must end with index/shardID but didn't: \" + dataPath.toString();\n- assert shardStatePath.getParent().getFileName().toString().equals(shardId.getIndexName()) : \"shardStatePath must end with index/shardID but didn't: \" + dataPath.toString();\n+ assert dataPath.getParent().getFileName().toString().equals(shardId.getIndex().getUUID()) : \"dataPath must end with index path id but didn't: \" + dataPath.toString();\n+ assert shardStatePath.getParent().getFileName().toString().equals(shardId.getIndex().getUUID()) : \"shardStatePath must end with index path id but didn't: \" + dataPath.toString();\n if (isCustomDataPath && dataPath.equals(shardStatePath)) {\n throw new IllegalArgumentException(\"shard state path must be different to the data path when using custom data paths\");\n }\n this.isCustomDataPath = isCustomDataPath;\n this.path = dataPath;\n- this.indexUUID = indexUUID;\n this.shardId = shardId;\n this.shardStatePath = shardStatePath;\n }\n@@ -73,10 +70,6 @@ public boolean exists() {\n return Files.exists(path);\n }\n \n- public String getIndexUUID() {\n- return indexUUID;\n- }\n-\n public ShardId getShardId() {\n return shardId;\n }\n@@ -144,7 +137,7 @@ public static ShardPath loadShardPath(ESLogger logger, NodeEnvironment env, Shar\n dataPath = statePath;\n }\n logger.debug(\"{} loaded data path [{}], state path [{}]\", shardId, dataPath, statePath);\n- return new ShardPath(indexSettings.hasCustomDataPath(), dataPath, statePath, indexUUID, shardId);\n+ return new ShardPath(indexSettings.hasCustomDataPath(), dataPath, statePath, shardId);\n }\n }\n \n@@ -168,34 +161,6 @@ public static void deleteLeftoverShardDirectory(ESLogger logger, NodeEnvironment\n }\n }\n \n- /** Maps each path.data path to a \"guess\" of how many bytes the shards allocated to that path might additionally use over their\n- * lifetime; we do this so a bunch of newly allocated shards won't just all go the path with the most free space at this moment. 
*/\n- private static Map<Path,Long> getEstimatedReservedBytes(NodeEnvironment env, long avgShardSizeInBytes, Iterable<IndexShard> shards) throws IOException {\n- long totFreeSpace = 0;\n- for (NodeEnvironment.NodePath nodePath : env.nodePaths()) {\n- totFreeSpace += nodePath.fileStore.getUsableSpace();\n- }\n-\n- // Very rough heuristic of how much disk space we expect the shard will use over its lifetime, the max of current average\n- // shard size across the cluster and 5% of the total available free space on this node:\n- long estShardSizeInBytes = Math.max(avgShardSizeInBytes, (long) (totFreeSpace/20.0));\n-\n- // Collate predicted (guessed!) disk usage on each path.data:\n- Map<Path,Long> reservedBytes = new HashMap<>();\n- for (IndexShard shard : shards) {\n- Path dataPath = NodeEnvironment.shardStatePathToDataPath(shard.shardPath().getShardStatePath());\n-\n- // Remove indices/<index>/<shardID> subdirs from the statePath to get back to the path.data/<lockID>:\n- Long curBytes = reservedBytes.get(dataPath);\n- if (curBytes == null) {\n- curBytes = 0L;\n- }\n- reservedBytes.put(dataPath, curBytes + estShardSizeInBytes);\n- } \n-\n- return reservedBytes;\n- }\n-\n public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shardId, IndexSettings indexSettings,\n long avgShardSizeInBytes, Map<Path,Integer> dataPathToShardCount) throws IOException {\n \n@@ -206,7 +171,6 @@ public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shard\n dataPath = env.resolveCustomLocation(indexSettings, shardId);\n statePath = env.nodePaths()[0].resolve(shardId);\n } else {\n-\n long totFreeSpace = 0;\n for (NodeEnvironment.NodePath nodePath : env.nodePaths()) {\n totFreeSpace += nodePath.fileStore.getUsableSpace();\n@@ -241,9 +205,7 @@ public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shard\n statePath = bestPath.resolve(shardId);\n dataPath = statePath;\n }\n-\n- final String indexUUID = indexSettings.getUUID();\n- return new ShardPath(indexSettings.hasCustomDataPath(), dataPath, statePath, indexUUID, shardId);\n+ return new ShardPath(indexSettings.hasCustomDataPath(), dataPath, statePath, shardId);\n }\n \n @Override\n@@ -258,9 +220,6 @@ public boolean equals(Object o) {\n if (shardId != null ? !shardId.equals(shardPath.shardId) : shardPath.shardId != null) {\n return false;\n }\n- if (indexUUID != null ? !indexUUID.equals(shardPath.indexUUID) : shardPath.indexUUID != null) {\n- return false;\n- }\n if (path != null ? !path.equals(shardPath.path) : shardPath.path != null) {\n return false;\n }\n@@ -271,7 +230,6 @@ public boolean equals(Object o) {\n @Override\n public int hashCode() {\n int result = path != null ? path.hashCode() : 0;\n- result = 31 * result + (indexUUID != null ? indexUUID.hashCode() : 0);\n result = 31 * result + (shardId != null ? 
shardId.hashCode() : 0);\n return result;\n }\n@@ -280,7 +238,6 @@ public int hashCode() {\n public String toString() {\n return \"ShardPath{\" +\n \"path=\" + path +\n- \", indexUUID='\" + indexUUID + '\\'' +\n \", shard=\" + shardId +\n '}';\n }", "filename": "core/src/main/java/org/elasticsearch/index/shard/ShardPath.java", "status": "modified" }, { "diff": "@@ -531,7 +531,7 @@ private void deleteIndexStore(String reason, Index index, IndexSettings indexSet\n }\n // this is a pure protection to make sure this index doesn't get re-imported as a dangling index.\n // we should in the future rather write a tombstone rather than wiping the metadata.\n- MetaDataStateFormat.deleteMetaState(nodeEnv.indexPaths(index.getName()));\n+ MetaDataStateFormat.deleteMetaState(nodeEnv.indexPaths(index));\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java", "status": "modified" }, { "diff": "@@ -41,6 +41,7 @@\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.util.IndexFolderUpgrader;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentParser;\n@@ -105,6 +106,8 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {\n \n List<String> indexes;\n List<String> unsupportedIndexes;\n+ static String singleDataPathNodeName;\n+ static String multiDataPathNodeName;\n static Path singleDataPath;\n static Path[] multiDataPath;\n \n@@ -127,6 +130,8 @@ private List<String> loadIndexesList(String prefix) throws IOException {\n \n @AfterClass\n public static void tearDownStatics() {\n+ singleDataPathNodeName = null;\n+ multiDataPathNodeName = null;\n singleDataPath = null;\n multiDataPath = null;\n }\n@@ -157,15 +162,17 @@ void setupCluster() throws Exception {\n InternalTestCluster.Async<String> multiDataPathNode = internalCluster().startNodeAsync(nodeSettings.build());\n \n // find single data path dir\n- Path[] nodePaths = internalCluster().getInstance(NodeEnvironment.class, singleDataPathNode.get()).nodeDataPaths();\n+ singleDataPathNodeName = singleDataPathNode.get();\n+ Path[] nodePaths = internalCluster().getInstance(NodeEnvironment.class, singleDataPathNodeName).nodeDataPaths();\n assertEquals(1, nodePaths.length);\n singleDataPath = nodePaths[0].resolve(NodeEnvironment.INDICES_FOLDER);\n assertFalse(Files.exists(singleDataPath));\n Files.createDirectories(singleDataPath);\n logger.info(\"--> Single data path: {}\", singleDataPath);\n \n // find multi data path dirs\n- nodePaths = internalCluster().getInstance(NodeEnvironment.class, multiDataPathNode.get()).nodeDataPaths();\n+ multiDataPathNodeName = multiDataPathNode.get();\n+ nodePaths = internalCluster().getInstance(NodeEnvironment.class, multiDataPathNodeName).nodeDataPaths();\n assertEquals(2, nodePaths.length);\n multiDataPath = new Path[] {nodePaths[0].resolve(NodeEnvironment.INDICES_FOLDER),\n nodePaths[1].resolve(NodeEnvironment.INDICES_FOLDER)};\n@@ -178,6 +185,13 @@ void setupCluster() throws Exception {\n replicas.get(); // wait for replicas\n }\n \n+ void upgradeIndexFolder() throws Exception {\n+ final NodeEnvironment nodeEnvironment = internalCluster().getInstance(NodeEnvironment.class, singleDataPathNodeName);\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(Settings.EMPTY, nodeEnvironment);\n+ final NodeEnvironment nodeEnv = 
internalCluster().getInstance(NodeEnvironment.class, multiDataPathNodeName);\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(Settings.EMPTY, nodeEnv);\n+ }\n+\n String loadIndex(String indexFile) throws Exception {\n Path unzipDir = createTempDir();\n Path unzipDataDir = unzipDir.resolve(\"data\");\n@@ -296,6 +310,10 @@ public void testOldIndexes() throws Exception {\n void assertOldIndexWorks(String index) throws Exception {\n Version version = extractVersion(index);\n String indexName = loadIndex(index);\n+ // we explicitly upgrade the index folders as these indices\n+ // are imported as dangling indices and not available on\n+ // node startup\n+ upgradeIndexFolder();\n importIndex(indexName);\n assertIndexSanity(indexName, version);\n assertBasicSearchWorks(indexName);", "filename": "core/src/test/java/org/elasticsearch/bwcompat/OldIndexBackwardsCompatibilityIT.java", "status": "modified" }, { "diff": "@@ -92,22 +92,22 @@ public void testRandomDiskUsage() {\n }\n \n public void testFillShardLevelInfo() {\n- final Index index = new Index(\"test\", \"_na_\");\n+ final Index index = new Index(\"test\", \"0xdeadbeef\");\n ShardRouting test_0 = ShardRouting.newUnassigned(index, 0, null, false, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, \"foo\"));\n ShardRoutingHelper.initialize(test_0, \"node1\");\n ShardRoutingHelper.moveToStarted(test_0);\n- Path test0Path = createTempDir().resolve(\"indices\").resolve(\"test\").resolve(\"0\");\n+ Path test0Path = createTempDir().resolve(\"indices\").resolve(index.getUUID()).resolve(\"0\");\n CommonStats commonStats0 = new CommonStats();\n commonStats0.store = new StoreStats(100, 1);\n ShardRouting test_1 = ShardRouting.newUnassigned(index, 1, null, false, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, \"foo\"));\n ShardRoutingHelper.initialize(test_1, \"node2\");\n ShardRoutingHelper.moveToStarted(test_1);\n- Path test1Path = createTempDir().resolve(\"indices\").resolve(\"test\").resolve(\"1\");\n+ Path test1Path = createTempDir().resolve(\"indices\").resolve(index.getUUID()).resolve(\"1\");\n CommonStats commonStats1 = new CommonStats();\n commonStats1.store = new StoreStats(1000, 1);\n ShardStats[] stats = new ShardStats[] {\n- new ShardStats(test_0, new ShardPath(false, test0Path, test0Path, \"0xdeadbeef\", test_0.shardId()), commonStats0 , null),\n- new ShardStats(test_1, new ShardPath(false, test1Path, test1Path, \"0xdeadbeef\", test_1.shardId()), commonStats1 , null)\n+ new ShardStats(test_0, new ShardPath(false, test0Path, test0Path, test_0.shardId()), commonStats0 , null),\n+ new ShardStats(test_1, new ShardPath(false, test1Path, test1Path, test_1.shardId()), commonStats1 , null)\n };\n ImmutableOpenMap.Builder<String, Long> shardSizes = ImmutableOpenMap.builder();\n ImmutableOpenMap.Builder<ShardRouting, String> routingToPath = ImmutableOpenMap.builder();", "filename": "core/src/test/java/org/elasticsearch/cluster/DiskUsageTests.java", "status": "modified" }, { "diff": "@@ -22,8 +22,10 @@\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteResponse;\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.health.ClusterHealthStatus;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import 
org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.ShardRoutingState;\n@@ -42,6 +44,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n@@ -226,9 +229,10 @@ private void rerouteWithAllocateLocalGateway(Settings commonSettings) throws Exc\n assertThat(state.getRoutingNodes().node(state.nodes().resolveNode(node_1).id()).get(0).state(), equalTo(ShardRoutingState.STARTED));\n \n client().prepareIndex(\"test\", \"type\", \"1\").setSource(\"field\", \"value\").setRefresh(true).execute().actionGet();\n+ final Index index = resolveIndex(\"test\");\n \n logger.info(\"--> closing all nodes\");\n- Path[] shardLocation = internalCluster().getInstance(NodeEnvironment.class, node_1).availableShardPaths(new ShardId(\"test\", \"_na_\", 0));\n+ Path[] shardLocation = internalCluster().getInstance(NodeEnvironment.class, node_1).availableShardPaths(new ShardId(index, 0));\n assertThat(FileSystemUtils.exists(shardLocation), equalTo(true)); // make sure the data is there!\n internalCluster().closeNonSharedNodes(false); // don't wipe data directories the index needs to be there!\n ", "filename": "core/src/test/java/org/elasticsearch/cluster/allocation/ClusterRerouteIT.java", "status": "modified" }, { "diff": "@@ -0,0 +1,366 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.util;\n+\n+import org.apache.lucene.util.CollectionUtil;\n+import org.apache.lucene.util.LuceneTestCase;\n+import org.apache.lucene.util.TestUtil;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.bwcompat.OldIndexBackwardsCompatibilityIT;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.routing.AllocationId;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.io.FileSystemUtils;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.env.Environment;\n+import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.gateway.MetaDataStateFormat;\n+import org.elasticsearch.gateway.MetaStateService;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.IndexSettings;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.index.shard.ShardPath;\n+import org.elasticsearch.index.shard.ShardStateMetaData;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.io.BufferedWriter;\n+import java.io.FileNotFoundException;\n+import java.io.IOException;\n+import java.io.InputStream;\n+import java.net.URISyntaxException;\n+import java.nio.charset.StandardCharsets;\n+import java.nio.file.DirectoryStream;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.HashMap;\n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.Locale;\n+import java.util.Map;\n+import java.util.Set;\n+\n+import static org.hamcrest.core.Is.is;\n+\n+@LuceneTestCase.SuppressFileSystems(\"ExtrasFS\")\n+public class IndexFolderUpgraderTests extends ESTestCase {\n+\n+ private static MetaDataStateFormat<IndexMetaData> indexMetaDataStateFormat =\n+ new MetaDataStateFormat<IndexMetaData>(XContentType.SMILE, MetaStateService.INDEX_STATE_FILE_PREFIX) {\n+\n+ @Override\n+ public void toXContent(XContentBuilder builder, IndexMetaData state) throws IOException {\n+ IndexMetaData.Builder.toXContent(state, builder, ToXContent.EMPTY_PARAMS);\n+ }\n+\n+ @Override\n+ public IndexMetaData fromXContent(XContentParser parser) throws IOException {\n+ return IndexMetaData.Builder.fromXContent(parser);\n+ }\n+ };\n+\n+ /**\n+ * tests custom data paths are upgraded\n+ */\n+ public void testUpgradeCustomDataPath() throws IOException {\n+ Path customPath = createTempDir();\n+ final Settings nodeSettings = Settings.builder()\n+ .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean())\n+ .put(Environment.PATH_SHARED_DATA_SETTING.getKey(), customPath.toAbsolutePath().toString()).build();\n+ try (NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n+ final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ Settings settings = Settings.builder()\n+ .put(nodeSettings)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_0_0)\n+ .put(IndexMetaData.SETTING_DATA_PATH, customPath.toAbsolutePath().toString())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 
randomIntBetween(1, 5))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .build();\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName()).settings(settings).build();\n+ int numIdxFiles = randomIntBetween(1, 5);\n+ int numTranslogFiles = randomIntBetween(1, 5);\n+ IndexSettings indexSettings = new IndexSettings(indexState, nodeSettings);\n+ writeIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ IndexFolderUpgrader helper = new IndexFolderUpgrader(settings, nodeEnv);\n+ helper.upgrade(indexSettings.getIndex().getName());\n+ checkIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ }\n+ }\n+\n+ /**\n+ * tests upgrade on partially upgraded index, when we crash while upgrading\n+ */\n+ public void testPartialUpgradeCustomDataPath() throws IOException {\n+ Path customPath = createTempDir();\n+ final Settings nodeSettings = Settings.builder()\n+ .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean())\n+ .put(Environment.PATH_SHARED_DATA_SETTING.getKey(), customPath.toAbsolutePath().toString()).build();\n+ try (NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n+ final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ Settings settings = Settings.builder()\n+ .put(nodeSettings)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_0_0)\n+ .put(IndexMetaData.SETTING_DATA_PATH, customPath.toAbsolutePath().toString())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 5))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .build();\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName()).settings(settings).build();\n+ int numIdxFiles = randomIntBetween(1, 5);\n+ int numTranslogFiles = randomIntBetween(1, 5);\n+ IndexSettings indexSettings = new IndexSettings(indexState, nodeSettings);\n+ writeIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ IndexFolderUpgrader helper = new IndexFolderUpgrader(settings, nodeEnv) {\n+ @Override\n+ void upgrade(Index index, Path source, Path target) throws IOException {\n+ if(randomBoolean()) {\n+ throw new FileNotFoundException(\"simulated\");\n+ }\n+ }\n+ };\n+ // only upgrade some paths\n+ try {\n+ helper.upgrade(index.getName());\n+ } catch (IOException e) {\n+ assertTrue(e instanceof FileNotFoundException);\n+ }\n+ helper = new IndexFolderUpgrader(settings, nodeEnv);\n+ // try to upgrade again\n+ helper.upgrade(indexSettings.getIndex().getName());\n+ checkIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ }\n+ }\n+\n+ public void testUpgrade() throws IOException {\n+ final Settings nodeSettings = Settings.builder()\n+ .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean()).build();\n+ try (NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n+ final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ Settings settings = Settings.builder()\n+ .put(nodeSettings)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_0_0)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 5))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .build();\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName()).settings(settings).build();\n+ int numIdxFiles = randomIntBetween(1, 5);\n+ int numTranslogFiles = randomIntBetween(1, 5);\n+ IndexSettings indexSettings = new IndexSettings(indexState, 
nodeSettings);\n+ writeIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ IndexFolderUpgrader helper = new IndexFolderUpgrader(settings, nodeEnv);\n+ helper.upgrade(indexSettings.getIndex().getName());\n+ checkIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ }\n+ }\n+\n+ public void testUpgradeIndices() throws IOException {\n+ final Settings nodeSettings = Settings.builder()\n+ .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean()).build();\n+ try (NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n+ Map<IndexSettings, Tuple<Integer, Integer>> indexSettingsMap = new HashMap<>();\n+ for (int i = 0; i < randomIntBetween(2, 5); i++) {\n+ final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ Settings settings = Settings.builder()\n+ .put(nodeSettings)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_0_0)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 5))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .build();\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName()).settings(settings).build();\n+ Tuple<Integer, Integer> fileCounts = new Tuple<>(randomIntBetween(1, 5), randomIntBetween(1, 5));\n+ IndexSettings indexSettings = new IndexSettings(indexState, nodeSettings);\n+ indexSettingsMap.put(indexSettings, fileCounts);\n+ writeIndex(nodeEnv, indexSettings, fileCounts.v1(), fileCounts.v2());\n+ }\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(nodeSettings, nodeEnv);\n+ for (Map.Entry<IndexSettings, Tuple<Integer, Integer>> entry : indexSettingsMap.entrySet()) {\n+ checkIndex(nodeEnv, entry.getKey(), entry.getValue().v1(), entry.getValue().v2());\n+ }\n+ }\n+ }\n+\n+ /**\n+ * Run upgrade on a real bwc index\n+ */\n+ public void testUpgradeRealIndex() throws IOException, URISyntaxException {\n+ List<Path> indexes = new ArrayList<>();\n+ try (DirectoryStream<Path> stream = Files.newDirectoryStream(getBwcIndicesPath(), \"index-*.zip\")) {\n+ for (Path path : stream) {\n+ indexes.add(path);\n+ }\n+ }\n+ CollectionUtil.introSort(indexes, (o1, o2) -> o1.getFileName().compareTo(o2.getFileName()));\n+ final Path path = randomFrom(indexes);\n+ final String indexName = path.getFileName().toString().replace(\".zip\", \"\").toLowerCase(Locale.ROOT);\n+ try (NodeEnvironment nodeEnvironment = newNodeEnvironment()) {\n+ Path unzipDir = createTempDir();\n+ Path unzipDataDir = unzipDir.resolve(\"data\");\n+ // decompress the index\n+ try (InputStream stream = Files.newInputStream(path)) {\n+ TestUtil.unzip(stream, unzipDir);\n+ }\n+ // check it is unique\n+ assertTrue(Files.exists(unzipDataDir));\n+ Path[] list = FileSystemUtils.files(unzipDataDir);\n+ if (list.length != 1) {\n+ throw new IllegalStateException(\"Backwards index must contain exactly one cluster but was \" + list.length);\n+ }\n+ // the bwc scripts packs the indices under this path\n+ Path src = list[0].resolve(\"nodes/0/indices/\" + indexName);\n+ assertTrue(\"[\" + path + \"] missing index dir: \" + src.toString(), Files.exists(src));\n+ final Path indicesPath = randomFrom(nodeEnvironment.nodePaths()).indicesPath;\n+ logger.info(\"--> injecting index [{}] into [{}]\", indexName, indicesPath);\n+ OldIndexBackwardsCompatibilityIT.copyIndex(logger, src, indexName, indicesPath);\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(Settings.EMPTY, nodeEnvironment);\n+\n+ // ensure old index folder is deleted\n+ Set<String> indexFolders = 
nodeEnvironment.availableIndexFolders();\n+ assertEquals(indexFolders.size(), 1);\n+\n+ // ensure index metadata is moved\n+ IndexMetaData indexMetaData = indexMetaDataStateFormat.loadLatestState(logger,\n+ nodeEnvironment.resolveIndexFolder(indexFolders.iterator().next()));\n+ assertNotNull(indexMetaData);\n+ Index index = indexMetaData.getIndex();\n+ assertEquals(index.getName(), indexName);\n+\n+ Set<ShardId> shardIds = nodeEnvironment.findAllShardIds(index);\n+ // ensure all shards are moved\n+ assertEquals(shardIds.size(), indexMetaData.getNumberOfShards());\n+ for (ShardId shardId : shardIds) {\n+ final ShardPath shardPath = ShardPath.loadShardPath(logger, nodeEnvironment, shardId,\n+ new IndexSettings(indexMetaData, Settings.EMPTY));\n+ final Path translog = shardPath.resolveTranslog();\n+ final Path idx = shardPath.resolveIndex();\n+ final Path state = shardPath.getShardStatePath().resolve(MetaDataStateFormat.STATE_DIR_NAME);\n+ assertTrue(shardPath.exists());\n+ assertTrue(Files.exists(translog));\n+ assertTrue(Files.exists(idx));\n+ assertTrue(Files.exists(state));\n+ }\n+ }\n+ }\n+\n+ public void testNeedsUpgrade() throws IOException {\n+ final Index index = new Index(\"foo\", Strings.randomBase64UUID());\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName())\n+ .settings(Settings.builder()\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT))\n+ .numberOfShards(1)\n+ .numberOfReplicas(0)\n+ .build();\n+ try (NodeEnvironment nodeEnvironment = newNodeEnvironment()) {\n+ indexMetaDataStateFormat.write(indexState, 1, nodeEnvironment.indexPaths(index));\n+ assertFalse(IndexFolderUpgrader.needsUpgrade(index, index.getUUID()));\n+ }\n+ }\n+\n+ private void checkIndex(NodeEnvironment nodeEnv, IndexSettings indexSettings,\n+ int numIdxFiles, int numTranslogFiles) throws IOException {\n+ final Index index = indexSettings.getIndex();\n+ // ensure index state can be loaded\n+ IndexMetaData loadLatestState = indexMetaDataStateFormat.loadLatestState(logger, nodeEnv.indexPaths(index));\n+ assertNotNull(loadLatestState);\n+ assertEquals(loadLatestState.getIndex(), index);\n+ for (int shardId = 0; shardId < indexSettings.getNumberOfShards(); shardId++) {\n+ // ensure shard path can be loaded\n+ ShardPath targetShardPath = ShardPath.loadShardPath(logger, nodeEnv, new ShardId(index, shardId), indexSettings);\n+ assertNotNull(targetShardPath);\n+ // ensure shard contents are copied over\n+ final Path translog = targetShardPath.resolveTranslog();\n+ final Path idx = targetShardPath.resolveIndex();\n+\n+ // ensure index and translog files are copied over\n+ assertEquals(numTranslogFiles, FileSystemUtils.files(translog).length);\n+ assertEquals(numIdxFiles, FileSystemUtils.files(idx).length);\n+ Path[] files = FileSystemUtils.files(translog);\n+ final HashSet<Path> translogFiles = new HashSet<>(Arrays.asList(files));\n+ for (int i = 0; i < numTranslogFiles; i++) {\n+ final String name = Integer.toString(i);\n+ translogFiles.contains(translog.resolve(name + \".translog\"));\n+ byte[] content = Files.readAllBytes(translog.resolve(name + \".translog\"));\n+ assertEquals(name , new String(content, StandardCharsets.UTF_8));\n+ }\n+ Path[] indexFileList = FileSystemUtils.files(idx);\n+ final HashSet<Path> idxFiles = new HashSet<>(Arrays.asList(indexFileList));\n+ for (int i = 0; i < numIdxFiles; i++) {\n+ final String name = Integer.toString(i);\n+ idxFiles.contains(idx.resolve(name + \".tst\"));\n+ byte[] content = 
Files.readAllBytes(idx.resolve(name + \".tst\"));\n+ assertEquals(name, new String(content, StandardCharsets.UTF_8));\n+ }\n+ }\n+ }\n+\n+ private void writeIndex(NodeEnvironment nodeEnv, IndexSettings indexSettings,\n+ int numIdxFiles, int numTranslogFiles) throws IOException {\n+ NodeEnvironment.NodePath[] nodePaths = nodeEnv.nodePaths();\n+ Path[] oldIndexPaths = new Path[nodePaths.length];\n+ for (int i = 0; i < nodePaths.length; i++) {\n+ oldIndexPaths[i] = nodePaths[i].indicesPath.resolve(indexSettings.getIndex().getName());\n+ }\n+ indexMetaDataStateFormat.write(indexSettings.getIndexMetaData(), 1, oldIndexPaths);\n+ for (int id = 0; id < indexSettings.getNumberOfShards(); id++) {\n+ Path oldIndexPath = randomFrom(oldIndexPaths);\n+ ShardId shardId = new ShardId(indexSettings.getIndex(), id);\n+ if (indexSettings.hasCustomDataPath()) {\n+ Path customIndexPath = nodeEnv.resolveBaseCustomLocation(indexSettings).resolve(indexSettings.getIndex().getName());\n+ writeShard(shardId, customIndexPath, numIdxFiles, numTranslogFiles);\n+ } else {\n+ writeShard(shardId, oldIndexPath, numIdxFiles, numTranslogFiles);\n+ }\n+ ShardStateMetaData state = new ShardStateMetaData(true, indexSettings.getUUID(), AllocationId.newInitializing());\n+ ShardStateMetaData.FORMAT.write(state, 1, oldIndexPath.resolve(String.valueOf(shardId.getId())));\n+ }\n+ }\n+\n+ private void writeShard(ShardId shardId, Path indexLocation,\n+ final int numIdxFiles, final int numTranslogFiles) throws IOException {\n+ Path oldShardDataPath = indexLocation.resolve(String.valueOf(shardId.getId()));\n+ final Path translogPath = oldShardDataPath.resolve(ShardPath.TRANSLOG_FOLDER_NAME);\n+ final Path idxPath = oldShardDataPath.resolve(ShardPath.INDEX_FOLDER_NAME);\n+ Files.createDirectories(translogPath);\n+ Files.createDirectories(idxPath);\n+ for (int i = 0; i < numIdxFiles; i++) {\n+ String filename = Integer.toString(i);\n+ try (BufferedWriter w = Files.newBufferedWriter(idxPath.resolve(filename + \".tst\"),\n+ StandardCharsets.UTF_8)) {\n+ w.write(filename);\n+ }\n+ }\n+ for (int i = 0; i < numTranslogFiles; i++) {\n+ String filename = Integer.toString(i);\n+ try (BufferedWriter w = Files.newBufferedWriter(translogPath.resolve(filename + \".translog\"),\n+ StandardCharsets.UTF_8)) {\n+ w.write(filename);\n+ }\n+ }\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/common/util/IndexFolderUpgraderTests.java", "status": "added" }, { "diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n+import org.elasticsearch.gateway.MetaDataStateFormat;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.shard.ShardId;\n@@ -36,7 +37,11 @@\n import java.io.IOException;\n import java.nio.file.Files;\n import java.nio.file.Path;\n+import java.util.ArrayList;\n+import java.util.HashMap;\n+import java.util.HashSet;\n import java.util.List;\n+import java.util.Map;\n import java.util.Set;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.atomic.AtomicInteger;\n@@ -129,33 +134,34 @@ public void testNodeLockMultipleEnvironment() throws IOException {\n public void testShardLock() throws IOException {\n final NodeEnvironment env = newNodeEnvironment();\n \n- ShardLock fooLock = env.shardLock(new ShardId(\"foo\", \"_na_\", 0));\n- assertEquals(new ShardId(\"foo\", \"_na_\", 0), fooLock.getShardId());\n+ Index 
index = new Index(\"foo\", \"fooUUID\");\n+ ShardLock fooLock = env.shardLock(new ShardId(index, 0));\n+ assertEquals(new ShardId(index, 0), fooLock.getShardId());\n \n try {\n- env.shardLock(new ShardId(\"foo\", \"_na_\", 0));\n+ env.shardLock(new ShardId(index, 0));\n fail(\"shard is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n }\n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n Files.createDirectories(path.resolve(\"0\"));\n Files.createDirectories(path.resolve(\"1\"));\n }\n try {\n- env.lockAllForIndex(new Index(\"foo\", \"_na_\"), idxSettings, randomIntBetween(0, 10));\n+ env.lockAllForIndex(index, idxSettings, randomIntBetween(0, 10));\n fail(\"shard 0 is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n }\n \n fooLock.close();\n // can lock again?\n- env.shardLock(new ShardId(\"foo\", \"_na_\", 0)).close();\n+ env.shardLock(new ShardId(index, 0)).close();\n \n- List<ShardLock> locks = env.lockAllForIndex(new Index(\"foo\", \"_na_\"), idxSettings, randomIntBetween(0, 10));\n+ List<ShardLock> locks = env.lockAllForIndex(index, idxSettings, randomIntBetween(0, 10));\n try {\n- env.shardLock(new ShardId(\"foo\", \"_na_\", 0));\n+ env.shardLock(new ShardId(index, 0));\n fail(\"shard is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n@@ -165,63 +171,91 @@ public void testShardLock() throws IOException {\n env.close();\n }\n \n- public void testGetAllIndices() throws Exception {\n+ public void testAvailableIndexFolders() throws Exception {\n final NodeEnvironment env = newNodeEnvironment();\n final int numIndices = randomIntBetween(1, 10);\n+ Set<String> actualPaths = new HashSet<>();\n for (int i = 0; i < numIndices; i++) {\n- for (Path path : env.indexPaths(\"foo\" + i)) {\n- Files.createDirectories(path);\n+ Index index = new Index(\"foo\" + i, \"fooUUID\" + i);\n+ for (Path path : env.indexPaths(index)) {\n+ Files.createDirectories(path.resolve(MetaDataStateFormat.STATE_DIR_NAME));\n+ actualPaths.add(path.getFileName().toString());\n }\n }\n- Set<String> indices = env.findAllIndices();\n- assertEquals(indices.size(), numIndices);\n+\n+ assertThat(actualPaths, equalTo(env.availableIndexFolders()));\n+ assertTrue(\"LockedShards: \" + env.lockedShards(), env.lockedShards().isEmpty());\n+ env.close();\n+ }\n+\n+ public void testResolveIndexFolders() throws Exception {\n+ final NodeEnvironment env = newNodeEnvironment();\n+ final int numIndices = randomIntBetween(1, 10);\n+ Map<String, List<Path>> actualIndexDataPaths = new HashMap<>();\n for (int i = 0; i < numIndices; i++) {\n- assertTrue(indices.contains(\"foo\" + i));\n+ Index index = new Index(\"foo\" + i, \"fooUUID\" + i);\n+ Path[] indexPaths = env.indexPaths(index);\n+ for (Path path : indexPaths) {\n+ Files.createDirectories(path);\n+ String fileName = path.getFileName().toString();\n+ List<Path> paths = actualIndexDataPaths.get(fileName);\n+ if (paths == null) {\n+ paths = new ArrayList<>();\n+ }\n+ paths.add(path);\n+ actualIndexDataPaths.put(fileName, paths);\n+ }\n+ }\n+ for (Map.Entry<String, List<Path>> actualIndexDataPathEntry : actualIndexDataPaths.entrySet()) {\n+ List<Path> actual = actualIndexDataPathEntry.getValue();\n+ Path[] actualPaths = actual.toArray(new Path[actual.size()]);\n+ assertThat(actualPaths, equalTo(env.resolveIndexFolder(actualIndexDataPathEntry.getKey())));\n }\n assertTrue(\"LockedShards: \" + env.lockedShards(), env.lockedShards().isEmpty());\n env.close();\n }\n \n public void 
testDeleteSafe() throws IOException, InterruptedException {\n final NodeEnvironment env = newNodeEnvironment();\n- ShardLock fooLock = env.shardLock(new ShardId(\"foo\", \"_na_\", 0));\n- assertEquals(new ShardId(\"foo\", \"_na_\", 0), fooLock.getShardId());\n+ final Index index = new Index(\"foo\", \"fooUUID\");\n+ ShardLock fooLock = env.shardLock(new ShardId(index, 0));\n+ assertEquals(new ShardId(index, 0), fooLock.getShardId());\n \n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n Files.createDirectories(path.resolve(\"0\"));\n Files.createDirectories(path.resolve(\"1\"));\n }\n \n try {\n- env.deleteShardDirectorySafe(new ShardId(\"foo\", \"_na_\", 0), idxSettings);\n+ env.deleteShardDirectorySafe(new ShardId(index, 0), idxSettings);\n fail(\"shard is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n }\n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n assertTrue(Files.exists(path.resolve(\"0\")));\n assertTrue(Files.exists(path.resolve(\"1\")));\n \n }\n \n- env.deleteShardDirectorySafe(new ShardId(\"foo\", \"_na_\", 1), idxSettings);\n+ env.deleteShardDirectorySafe(new ShardId(index, 1), idxSettings);\n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n assertTrue(Files.exists(path.resolve(\"0\")));\n assertFalse(Files.exists(path.resolve(\"1\")));\n }\n \n try {\n- env.deleteIndexDirectorySafe(new Index(\"foo\", \"_na_\"), randomIntBetween(0, 10), idxSettings);\n+ env.deleteIndexDirectorySafe(index, randomIntBetween(0, 10), idxSettings);\n fail(\"shard is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n }\n fooLock.close();\n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n assertTrue(Files.exists(path));\n }\n \n@@ -242,7 +276,7 @@ public void onFailure(Throwable t) {\n @Override\n protected void doRun() throws Exception {\n start.await();\n- try (ShardLock autoCloses = env.shardLock(new ShardId(\"foo\", \"_na_\", 0))) {\n+ try (ShardLock autoCloses = env.shardLock(new ShardId(index, 0))) {\n blockLatch.countDown();\n Thread.sleep(randomIntBetween(1, 10));\n }\n@@ -257,11 +291,11 @@ protected void doRun() throws Exception {\n start.countDown();\n blockLatch.await();\n \n- env.deleteIndexDirectorySafe(new Index(\"foo\", \"_na_\"), 5000, idxSettings);\n+ env.deleteIndexDirectorySafe(index, 5000, idxSettings);\n \n assertNull(threadException.get());\n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n assertFalse(Files.exists(path));\n }\n latch.await();\n@@ -300,7 +334,7 @@ public void run() {\n for (int i = 0; i < iters; i++) {\n int shard = randomIntBetween(0, counts.length - 1);\n try {\n- try (ShardLock autoCloses = env.shardLock(new ShardId(\"foo\", \"_na_\", shard), scaledRandomIntBetween(0, 10))) {\n+ try (ShardLock autoCloses = env.shardLock(new ShardId(\"foo\", \"fooUUID\", shard), scaledRandomIntBetween(0, 10))) {\n counts[shard].value++;\n countsAtomic[shard].incrementAndGet();\n assertEquals(flipFlop[shard].incrementAndGet(), 1);\n@@ -334,37 +368,38 @@ public void testCustomDataPaths() throws Exception {\n String[] dataPaths = tmpPaths();\n NodeEnvironment env = newNodeEnvironment(dataPaths, \"/tmp\", Settings.EMPTY);\n \n- IndexSettings s1 = IndexSettingsModule.newIndexSettings(\"myindex\", Settings.EMPTY);\n- IndexSettings s2 = IndexSettingsModule.newIndexSettings(\"myindex\", 
Settings.builder().put(IndexMetaData.SETTING_DATA_PATH, \"/tmp/foo\").build());\n- Index index = new Index(\"myindex\", \"_na_\");\n+ final Settings indexSettings = Settings.builder().put(IndexMetaData.SETTING_INDEX_UUID, \"myindexUUID\").build();\n+ IndexSettings s1 = IndexSettingsModule.newIndexSettings(\"myindex\", indexSettings);\n+ IndexSettings s2 = IndexSettingsModule.newIndexSettings(\"myindex\", Settings.builder().put(indexSettings).put(IndexMetaData.SETTING_DATA_PATH, \"/tmp/foo\").build());\n+ Index index = new Index(\"myindex\", \"myindexUUID\");\n ShardId sid = new ShardId(index, 0);\n \n assertFalse(\"no settings should mean no custom data path\", s1.hasCustomDataPath());\n assertTrue(\"settings with path_data should have a custom data path\", s2.hasCustomDataPath());\n \n assertThat(env.availableShardPaths(sid), equalTo(env.availableShardPaths(sid)));\n- assertThat(env.resolveCustomLocation(s2, sid), equalTo(PathUtils.get(\"/tmp/foo/0/myindex/0\")));\n+ assertThat(env.resolveCustomLocation(s2, sid), equalTo(PathUtils.get(\"/tmp/foo/0/\" + index.getUUID() + \"/0\")));\n \n assertThat(\"shard paths with a custom data_path should contain only regular paths\",\n env.availableShardPaths(sid),\n- equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/myindex/0\")));\n+ equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/\" + index.getUUID() + \"/0\")));\n \n assertThat(\"index paths uses the regular template\",\n- env.indexPaths(index.getName()), equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/myindex\")));\n+ env.indexPaths(index), equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/\" + index.getUUID())));\n \n env.close();\n NodeEnvironment env2 = newNodeEnvironment(dataPaths, \"/tmp\",\n Settings.builder().put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), false).build());\n \n assertThat(env2.availableShardPaths(sid), equalTo(env2.availableShardPaths(sid)));\n- assertThat(env2.resolveCustomLocation(s2, sid), equalTo(PathUtils.get(\"/tmp/foo/myindex/0\")));\n+ assertThat(env2.resolveCustomLocation(s2, sid), equalTo(PathUtils.get(\"/tmp/foo/\" + index.getUUID() + \"/0\")));\n \n assertThat(\"shard paths with a custom data_path should contain only regular paths\",\n env2.availableShardPaths(sid),\n- equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/myindex/0\")));\n+ equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/\" + index.getUUID() + \"/0\")));\n \n assertThat(\"index paths uses the regular template\",\n- env2.indexPaths(index.getName()), equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/myindex\")));\n+ env2.indexPaths(index), equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/\" + index.getUUID())));\n \n env2.close();\n }", "filename": "core/src/test/java/org/elasticsearch/env/NodeEnvironmentTests.java", "status": "modified" }, { "diff": "@@ -29,6 +29,7 @@\n \n import java.nio.file.Files;\n import java.nio.file.Path;\n+import java.nio.file.StandardCopyOption;\n import java.util.Map;\n \n import static org.hamcrest.Matchers.equalTo;\n@@ -53,6 +54,47 @@ public void testCleanupWhenEmpty() throws Exception {\n assertTrue(danglingState.getDanglingIndices().isEmpty());\n }\n }\n+ public void testDanglingIndicesDiscovery() throws Exception {\n+ try (NodeEnvironment env = newNodeEnvironment()) {\n+ MetaStateService metaStateService = new MetaStateService(Settings.EMPTY, env);\n+ DanglingIndicesState danglingState = new DanglingIndicesState(Settings.EMPTY, 
env, metaStateService, null);\n+\n+ assertTrue(danglingState.getDanglingIndices().isEmpty());\n+ MetaData metaData = MetaData.builder().build();\n+ final Settings.Builder settings = Settings.builder().put(indexSettings).put(IndexMetaData.SETTING_INDEX_UUID, \"test1UUID\");\n+ IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(settings).build();\n+ metaStateService.writeIndex(\"test_write\", dangledIndex);\n+ Map<Index, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n+ assertTrue(newDanglingIndices.containsKey(dangledIndex.getIndex()));\n+ metaData = MetaData.builder().put(dangledIndex, false).build();\n+ newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n+ assertFalse(newDanglingIndices.containsKey(dangledIndex.getIndex()));\n+ }\n+ }\n+\n+ public void testInvalidIndexFolder() throws Exception {\n+ try (NodeEnvironment env = newNodeEnvironment()) {\n+ MetaStateService metaStateService = new MetaStateService(Settings.EMPTY, env);\n+ DanglingIndicesState danglingState = new DanglingIndicesState(Settings.EMPTY, env, metaStateService, null);\n+\n+ MetaData metaData = MetaData.builder().build();\n+ final String uuid = \"test1UUID\";\n+ final Settings.Builder settings = Settings.builder().put(indexSettings).put(IndexMetaData.SETTING_INDEX_UUID, uuid);\n+ IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(settings).build();\n+ metaStateService.writeIndex(\"test_write\", dangledIndex);\n+ for (Path path : env.resolveIndexFolder(uuid)) {\n+ if (Files.exists(path)) {\n+ Files.move(path, path.resolveSibling(\"invalidUUID\"), StandardCopyOption.ATOMIC_MOVE);\n+ }\n+ }\n+ try {\n+ danglingState.findNewDanglingIndices(metaData);\n+ fail(\"no exception thrown for invalid folder name\");\n+ } catch (IllegalStateException e) {\n+ assertThat(e.getMessage(), equalTo(\"[invalidUUID] invalid index folder name, rename to [test1UUID]\"));\n+ }\n+ }\n+ }\n \n public void testDanglingProcessing() throws Exception {\n try (NodeEnvironment env = newNodeEnvironment()) {\n@@ -61,59 +103,40 @@ public void testDanglingProcessing() throws Exception {\n \n MetaData metaData = MetaData.builder().build();\n \n- IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(indexSettings).build();\n- metaStateService.writeIndex(\"test_write\", dangledIndex, null);\n+ final Settings.Builder settings = Settings.builder().put(indexSettings).put(IndexMetaData.SETTING_INDEX_UUID, \"test1UUID\");\n+ IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(settings).build();\n+ metaStateService.writeIndex(\"test_write\", dangledIndex);\n \n // check that several runs when not in the metadata still keep the dangled index around\n int numberOfChecks = randomIntBetween(1, 10);\n for (int i = 0; i < numberOfChecks; i++) {\n- Map<String, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n+ Map<Index, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n assertThat(newDanglingIndices.size(), equalTo(1));\n- assertThat(newDanglingIndices.keySet(), Matchers.hasItems(\"test1\"));\n+ assertThat(newDanglingIndices.keySet(), Matchers.hasItems(dangledIndex.getIndex()));\n assertTrue(danglingState.getDanglingIndices().isEmpty());\n }\n \n for (int i = 0; i < numberOfChecks; i++) {\n danglingState.findNewAndAddDanglingIndices(metaData);\n \n assertThat(danglingState.getDanglingIndices().size(), equalTo(1));\n- assertThat(danglingState.getDanglingIndices().keySet(), 
Matchers.hasItems(\"test1\"));\n+ assertThat(danglingState.getDanglingIndices().keySet(), Matchers.hasItems(dangledIndex.getIndex()));\n }\n \n // simulate allocation to the metadata\n metaData = MetaData.builder(metaData).put(dangledIndex, true).build();\n \n // check that several runs when in the metadata, but not cleaned yet, still keeps dangled\n for (int i = 0; i < numberOfChecks; i++) {\n- Map<String, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n+ Map<Index, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n assertTrue(newDanglingIndices.isEmpty());\n \n assertThat(danglingState.getDanglingIndices().size(), equalTo(1));\n- assertThat(danglingState.getDanglingIndices().keySet(), Matchers.hasItems(\"test1\"));\n+ assertThat(danglingState.getDanglingIndices().keySet(), Matchers.hasItems(dangledIndex.getIndex()));\n }\n \n danglingState.cleanupAllocatedDangledIndices(metaData);\n assertTrue(danglingState.getDanglingIndices().isEmpty());\n }\n }\n-\n- public void testRenameOfIndexState() throws Exception {\n- try (NodeEnvironment env = newNodeEnvironment()) {\n- MetaStateService metaStateService = new MetaStateService(Settings.EMPTY, env);\n- DanglingIndicesState danglingState = new DanglingIndicesState(Settings.EMPTY, env, metaStateService, null);\n-\n- MetaData metaData = MetaData.builder().build();\n-\n- IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(indexSettings).build();\n- metaStateService.writeIndex(\"test_write\", dangledIndex, null);\n-\n- for (Path path : env.indexPaths(\"test1\")) {\n- Files.move(path, path.getParent().resolve(\"test1_renamed\"));\n- }\n-\n- Map<String, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n- assertThat(newDanglingIndices.size(), equalTo(1));\n- assertThat(newDanglingIndices.keySet(), Matchers.hasItems(\"test1_renamed\"));\n- }\n- }\n }", "filename": "core/src/test/java/org/elasticsearch/gateway/DanglingIndicesStateTests.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n+import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n@@ -68,14 +69,15 @@ public void testMetaIsRemovedIfAllShardsFromIndexRemoved() throws Exception {\n index(index, \"doc\", \"1\", jsonBuilder().startObject().field(\"text\", \"some text\").endObject());\n ensureGreen();\n assertIndexInMetaState(node1, index);\n- assertIndexDirectoryDeleted(node2, index);\n+ Index resolveIndex = resolveIndex(index);\n+ assertIndexDirectoryDeleted(node2, resolveIndex);\n assertIndexInMetaState(masterNode, index);\n \n logger.debug(\"relocating index...\");\n client().admin().indices().prepareUpdateSettings(index).setSettings(Settings.builder().put(IndexMetaData.INDEX_ROUTING_INCLUDE_GROUP_SETTING.getKey() + \"_name\", node2)).get();\n client().admin().cluster().prepareHealth().setWaitForRelocatingShards(0).get();\n ensureGreen();\n- assertIndexDirectoryDeleted(node1, index);\n+ assertIndexDirectoryDeleted(node1, resolveIndex);\n assertIndexInMetaState(node2, index);\n assertIndexInMetaState(masterNode, index);\n }\n@@ -146,10 +148,10 @@ public void testMetaWrittenWhenIndexIsClosedAndMetaUpdated() throws 
Exception {\n assertThat(indicesMetaData.get(index).getState(), equalTo(IndexMetaData.State.OPEN));\n }\n \n- protected void assertIndexDirectoryDeleted(final String nodeName, final String indexName) throws Exception {\n+ protected void assertIndexDirectoryDeleted(final String nodeName, final Index index) throws Exception {\n assertBusy(() -> {\n logger.info(\"checking if index directory exists...\");\n- assertFalse(\"Expecting index directory of \" + indexName + \" to be deleted from node \" + nodeName, indexDirectoryExists(nodeName, indexName));\n+ assertFalse(\"Expecting index directory of \" + index + \" to be deleted from node \" + nodeName, indexDirectoryExists(nodeName, index));\n }\n );\n }\n@@ -168,9 +170,9 @@ protected void assertIndexInMetaState(final String nodeName, final String indexN\n }\n \n \n- private boolean indexDirectoryExists(String nodeName, String indexName) {\n+ private boolean indexDirectoryExists(String nodeName, Index index) {\n NodeEnvironment nodeEnv = ((InternalTestCluster) cluster()).getInstance(NodeEnvironment.class, nodeName);\n- for (Path path : nodeEnv.indexPaths(indexName)) {\n+ for (Path path : nodeEnv.indexPaths(index)) {\n if (Files.exists(path)) {\n return true;\n }", "filename": "core/src/test/java/org/elasticsearch/gateway/MetaDataWriteDataNodesIT.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.test.ESTestCase;\n \n import static org.hamcrest.Matchers.equalTo;\n@@ -43,15 +44,15 @@ public void testWriteLoadIndex() throws Exception {\n MetaStateService metaStateService = new MetaStateService(randomSettings(), env);\n \n IndexMetaData index = IndexMetaData.builder(\"test1\").settings(indexSettings).build();\n- metaStateService.writeIndex(\"test_write\", index, null);\n- assertThat(metaStateService.loadIndexState(\"test1\"), equalTo(index));\n+ metaStateService.writeIndex(\"test_write\", index);\n+ assertThat(metaStateService.loadIndexState(index.getIndex()), equalTo(index));\n }\n }\n \n public void testLoadMissingIndex() throws Exception {\n try (NodeEnvironment env = newNodeEnvironment()) {\n MetaStateService metaStateService = new MetaStateService(randomSettings(), env);\n- assertThat(metaStateService.loadIndexState(\"test1\"), nullValue());\n+ assertThat(metaStateService.loadIndexState(new Index(\"test1\", \"test1UUID\")), nullValue());\n }\n }\n \n@@ -94,7 +95,7 @@ public void testLoadGlobal() throws Exception {\n .build();\n \n metaStateService.writeGlobalState(\"test_write\", metaData);\n- metaStateService.writeIndex(\"test_write\", index, null);\n+ metaStateService.writeIndex(\"test_write\", index);\n \n MetaData loadedState = metaStateService.loadFullState();\n assertThat(loadedState.persistentSettings(), equalTo(metaData.persistentSettings()));", "filename": "core/src/test/java/org/elasticsearch/gateway/MetaStateServiceTests.java", "status": "modified" }, { "diff": "@@ -70,6 +70,7 @@\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.env.ShardLock;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.NodeServicesProvider;\n@@ -97,6 +98,7 @@\n import org.elasticsearch.test.IndexSettingsModule;\n import 
org.elasticsearch.test.InternalSettingsPlugin;\n import org.elasticsearch.test.VersionUtils;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n \n import java.io.IOException;\n import java.nio.file.Files;\n@@ -141,33 +143,35 @@ protected Collection<Class<? extends Plugin>> getPlugins() {\n \n public void testWriteShardState() throws Exception {\n try (NodeEnvironment env = newNodeEnvironment()) {\n- ShardId id = new ShardId(\"foo\", \"_na_\", 1);\n+ ShardId id = new ShardId(\"foo\", \"fooUUID\", 1);\n long version = between(1, Integer.MAX_VALUE / 2);\n boolean primary = randomBoolean();\n AllocationId allocationId = randomBoolean() ? null : randomAllocationId();\n- ShardStateMetaData state1 = new ShardStateMetaData(version, primary, \"foo\", allocationId);\n+ ShardStateMetaData state1 = new ShardStateMetaData(version, primary, \"fooUUID\", allocationId);\n write(state1, env.availableShardPaths(id));\n ShardStateMetaData shardStateMetaData = load(logger, env.availableShardPaths(id));\n assertEquals(shardStateMetaData, state1);\n \n- ShardStateMetaData state2 = new ShardStateMetaData(version, primary, \"foo\", allocationId);\n+ ShardStateMetaData state2 = new ShardStateMetaData(version, primary, \"fooUUID\", allocationId);\n write(state2, env.availableShardPaths(id));\n shardStateMetaData = load(logger, env.availableShardPaths(id));\n assertEquals(shardStateMetaData, state1);\n \n- ShardStateMetaData state3 = new ShardStateMetaData(version + 1, primary, \"foo\", allocationId);\n+ ShardStateMetaData state3 = new ShardStateMetaData(version + 1, primary, \"fooUUID\", allocationId);\n write(state3, env.availableShardPaths(id));\n shardStateMetaData = load(logger, env.availableShardPaths(id));\n assertEquals(shardStateMetaData, state3);\n- assertEquals(\"foo\", state3.indexUUID);\n+ assertEquals(\"fooUUID\", state3.indexUUID);\n }\n }\n \n public void testLockTryingToDelete() throws Exception {\n createIndex(\"test\");\n ensureGreen();\n NodeEnvironment env = getInstanceFromNode(NodeEnvironment.class);\n- Path[] shardPaths = env.availableShardPaths(new ShardId(\"test\", \"_na_\", 0));\n+ ClusterService cs = getInstanceFromNode(ClusterService.class);\n+ final Index index = cs.state().metaData().index(\"test\").getIndex();\n+ Path[] shardPaths = env.availableShardPaths(new ShardId(index, 0));\n logger.info(\"--> paths: [{}]\", (Object)shardPaths);\n // Should not be able to acquire the lock because it's already open\n try {\n@@ -179,7 +183,7 @@ public void testLockTryingToDelete() throws Exception {\n // Test without the regular shard lock to assume we can acquire it\n // (worst case, meaning that the shard lock could be acquired and\n // we're green to delete the shard's directory)\n- ShardLock sLock = new DummyShardLock(new ShardId(\"test\", \"_na_\", 0));\n+ ShardLock sLock = new DummyShardLock(new ShardId(index, 0));\n try {\n env.deleteShardDirectoryUnderLock(sLock, IndexSettingsModule.newIndexSettings(\"test\", Settings.EMPTY));\n fail(\"should not have been able to delete the directory\");", "filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.test.IndexSettingsModule;\n \n@@ -42,13 +43,13 @@ public void testLoadShardPath() throws IOException 
{\n Settings.Builder builder = settingsBuilder().put(IndexMetaData.SETTING_INDEX_UUID, \"0xDEADBEEF\")\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT);\n Settings settings = builder.build();\n- ShardId shardId = new ShardId(\"foo\", \"_na_\", 0);\n+ ShardId shardId = new ShardId(\"foo\", \"0xDEADBEEF\", 0);\n Path[] paths = env.availableShardPaths(shardId);\n Path path = randomFrom(paths);\n ShardStateMetaData.FORMAT.write(new ShardStateMetaData(2, true, \"0xDEADBEEF\", AllocationId.newInitializing()), 2, path);\n ShardPath shardPath = ShardPath.loadShardPath(logger, env, shardId, IndexSettingsModule.newIndexSettings(shardId.getIndex(), settings));\n assertEquals(path, shardPath.getDataPath());\n- assertEquals(\"0xDEADBEEF\", shardPath.getIndexUUID());\n+ assertEquals(\"0xDEADBEEF\", shardPath.getShardId().getIndex().getUUID());\n assertEquals(\"foo\", shardPath.getShardId().getIndexName());\n assertEquals(path.resolve(\"translog\"), shardPath.resolveTranslog());\n assertEquals(path.resolve(\"index\"), shardPath.resolveIndex());\n@@ -57,14 +58,15 @@ public void testLoadShardPath() throws IOException {\n \n public void testFailLoadShardPathOnMultiState() throws IOException {\n try (final NodeEnvironment env = newNodeEnvironment(settingsBuilder().build())) {\n- Settings.Builder builder = settingsBuilder().put(IndexMetaData.SETTING_INDEX_UUID, \"0xDEADBEEF\")\n+ final String indexUUID = \"0xDEADBEEF\";\n+ Settings.Builder builder = settingsBuilder().put(IndexMetaData.SETTING_INDEX_UUID, indexUUID)\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT);\n Settings settings = builder.build();\n- ShardId shardId = new ShardId(\"foo\", \"_na_\", 0);\n+ ShardId shardId = new ShardId(\"foo\", indexUUID, 0);\n Path[] paths = env.availableShardPaths(shardId);\n assumeTrue(\"This test tests multi data.path but we only got one\", paths.length > 1);\n int id = randomIntBetween(1, 10);\n- ShardStateMetaData.FORMAT.write(new ShardStateMetaData(id, true, \"0xDEADBEEF\", AllocationId.newInitializing()), id, paths);\n+ ShardStateMetaData.FORMAT.write(new ShardStateMetaData(id, true, indexUUID, AllocationId.newInitializing()), id, paths);\n ShardPath.loadShardPath(logger, env, shardId, IndexSettingsModule.newIndexSettings(shardId.getIndex(), settings));\n fail(\"Expected IllegalStateException\");\n } catch (IllegalStateException e) {\n@@ -77,7 +79,7 @@ public void testFailLoadShardPathIndexUUIDMissmatch() throws IOException {\n Settings.Builder builder = settingsBuilder().put(IndexMetaData.SETTING_INDEX_UUID, \"foobar\")\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT);\n Settings settings = builder.build();\n- ShardId shardId = new ShardId(\"foo\", \"_na_\", 0);\n+ ShardId shardId = new ShardId(\"foo\", \"foobar\", 0);\n Path[] paths = env.availableShardPaths(shardId);\n Path path = randomFrom(paths);\n int id = randomIntBetween(1, 10);\n@@ -90,18 +92,20 @@ public void testFailLoadShardPathIndexUUIDMissmatch() throws IOException {\n }\n \n public void testIllegalCustomDataPath() {\n- final Path path = createTempDir().resolve(\"foo\").resolve(\"0\");\n+ Index index = new Index(\"foo\", \"foo\");\n+ final Path path = createTempDir().resolve(index.getUUID()).resolve(\"0\");\n try {\n- new ShardPath(true, path, path, \"foo\", new ShardId(\"foo\", \"_na_\", 0));\n+ new ShardPath(true, path, path, new ShardId(index, 0));\n fail(\"Expected IllegalArgumentException\");\n } catch (IllegalArgumentException e) {\n assertThat(e.getMessage(), is(\"shard state path must be different to 
the data path when using custom data paths\"));\n }\n }\n \n public void testValidCtor() {\n- final Path path = createTempDir().resolve(\"foo\").resolve(\"0\");\n- ShardPath shardPath = new ShardPath(false, path, path, \"foo\", new ShardId(\"foo\", \"_na_\", 0));\n+ Index index = new Index(\"foo\", \"foo\");\n+ final Path path = createTempDir().resolve(index.getUUID()).resolve(\"0\");\n+ ShardPath shardPath = new ShardPath(false, path, path, new ShardId(index, 0));\n assertFalse(shardPath.isCustomDataPath());\n assertEquals(shardPath.getDataPath(), path);\n assertEquals(shardPath.getShardStatePath(), path);\n@@ -111,8 +115,9 @@ public void testGetRootPaths() throws IOException {\n boolean useCustomDataPath = randomBoolean();\n final Settings indexSettings;\n final Settings nodeSettings;\n+ final String indexUUID = \"0xDEADBEEF\";\n Settings.Builder indexSettingsBuilder = settingsBuilder()\n- .put(IndexMetaData.SETTING_INDEX_UUID, \"0xDEADBEEF\")\n+ .put(IndexMetaData.SETTING_INDEX_UUID, indexUUID)\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT);\n final Path customPath;\n if (useCustomDataPath) {\n@@ -132,10 +137,10 @@ public void testGetRootPaths() throws IOException {\n nodeSettings = Settings.EMPTY;\n }\n try (final NodeEnvironment env = newNodeEnvironment(nodeSettings)) {\n- ShardId shardId = new ShardId(\"foo\", \"_na_\", 0);\n+ ShardId shardId = new ShardId(\"foo\", indexUUID, 0);\n Path[] paths = env.availableShardPaths(shardId);\n Path path = randomFrom(paths);\n- ShardStateMetaData.FORMAT.write(new ShardStateMetaData(2, true, \"0xDEADBEEF\", AllocationId.newInitializing()), 2, path);\n+ ShardStateMetaData.FORMAT.write(new ShardStateMetaData(2, true, indexUUID, AllocationId.newInitializing()), 2, path);\n ShardPath shardPath = ShardPath.loadShardPath(logger, env, shardId, IndexSettingsModule.newIndexSettings(shardId.getIndex(), indexSettings));\n boolean found = false;\n for (Path p : env.nodeDataPaths()) {", "filename": "core/src/test/java/org/elasticsearch/index/shard/ShardPathTests.java", "status": "modified" }, { "diff": "@@ -49,6 +49,7 @@\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.gateway.PrimaryShardAllocator;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.MergePolicyConfig;\n import org.elasticsearch.index.shard.IndexEventListener;\n@@ -571,8 +572,9 @@ private int numShards(String... 
index) {\n private Map<String, List<Path>> findFilesToCorruptForReplica() throws IOException {\n Map<String, List<Path>> filesToNodes = new HashMap<>();\n ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index test = state.metaData().index(\"test\").getIndex();\n for (ShardRouting shardRouting : state.getRoutingTable().allShards(\"test\")) {\n- if (shardRouting.primary() == true) {\n+ if (shardRouting.primary()) {\n continue;\n }\n assertTrue(shardRouting.assignedToNode());\n@@ -582,8 +584,7 @@ private Map<String, List<Path>> findFilesToCorruptForReplica() throws IOExceptio\n filesToNodes.put(nodeStats.getNode().getName(), files);\n for (FsInfo.Path info : nodeStats.getFs()) {\n String path = info.getPath();\n- final String relativeDataLocationPath = \"indices/test/\" + Integer.toString(shardRouting.getId()) + \"/index\";\n- Path file = PathUtils.get(path).resolve(relativeDataLocationPath);\n+ Path file = PathUtils.get(path).resolve(\"indices\").resolve(test.getUUID()).resolve(Integer.toString(shardRouting.getId())).resolve(\"index\");\n if (Files.exists(file)) { // multi data path might only have one path in use\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(file)) {\n for (Path item : stream) {\n@@ -604,6 +605,7 @@ private ShardRouting corruptRandomPrimaryFile() throws IOException {\n \n private ShardRouting corruptRandomPrimaryFile(final boolean includePerCommitFiles) throws IOException {\n ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index test = state.metaData().index(\"test\").getIndex();\n GroupShardsIterator shardIterators = state.getRoutingNodes().getRoutingTable().activePrimaryShardsGrouped(new String[]{\"test\"}, false);\n List<ShardIterator> iterators = iterableAsArrayList(shardIterators);\n ShardIterator shardIterator = RandomPicks.randomFrom(getRandom(), iterators);\n@@ -616,8 +618,7 @@ private ShardRouting corruptRandomPrimaryFile(final boolean includePerCommitFile\n Set<Path> files = new TreeSet<>(); // treeset makes sure iteration order is deterministic\n for (FsInfo.Path info : nodeStatses.getNodes()[0].getFs()) {\n String path = info.getPath();\n- final String relativeDataLocationPath = \"indices/test/\" + Integer.toString(shardRouting.getId()) + \"/index\";\n- Path file = PathUtils.get(path).resolve(relativeDataLocationPath);\n+ Path file = PathUtils.get(path).resolve(\"indices\").resolve(test.getUUID()).resolve(Integer.toString(shardRouting.getId())).resolve(\"index\");\n if (Files.exists(file)) { // multi data path might only have one path in use\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(file)) {\n for (Path item : stream) {\n@@ -676,12 +677,13 @@ private void pruneOldDeleteGenerations(Set<Path> files) {\n \n public List<Path> listShardFiles(ShardRouting routing) throws IOException {\n NodesStatsResponse nodeStatses = client().admin().cluster().prepareNodesStats(routing.currentNodeId()).setFs(true).get();\n-\n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ final Index test = state.metaData().index(\"test\").getIndex();\n assertThat(routing.toString(), nodeStatses.getNodes().length, equalTo(1));\n List<Path> files = new ArrayList<>();\n for (FsInfo.Path info : nodeStatses.getNodes()[0].getFs()) {\n String path = info.getPath();\n- Path file = PathUtils.get(path).resolve(\"indices/test/\" + Integer.toString(routing.getId()) + \"/index\");\n+ Path file = PathUtils.get(path).resolve(\"indices/\" + test.getUUID() + \"/\" + 
Integer.toString(routing.getId()) + \"/index\");\n if (Files.exists(file)) { // multi data path might only have one path in use\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(file)) {\n for (Path item : stream) {", "filename": "core/src/test/java/org/elasticsearch/index/store/CorruptedFileIT.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.MockEngineFactoryPlugin;\n import org.elasticsearch.monitor.fs.FsInfo;\n@@ -110,6 +111,7 @@ public void testCorruptTranslogFiles() throws Exception {\n private void corruptRandomTranslogFiles() throws IOException {\n ClusterState state = client().admin().cluster().prepareState().get().getState();\n GroupShardsIterator shardIterators = state.getRoutingNodes().getRoutingTable().activePrimaryShardsGrouped(new String[]{\"test\"}, false);\n+ final Index test = state.metaData().index(\"test\").getIndex();\n List<ShardIterator> iterators = iterableAsArrayList(shardIterators);\n ShardIterator shardIterator = RandomPicks.randomFrom(getRandom(), iterators);\n ShardRouting shardRouting = shardIterator.nextOrNull();\n@@ -121,7 +123,7 @@ private void corruptRandomTranslogFiles() throws IOException {\n Set<Path> files = new TreeSet<>(); // treeset makes sure iteration order is deterministic\n for (FsInfo.Path fsPath : nodeStatses.getNodes()[0].getFs()) {\n String path = fsPath.getPath();\n- final String relativeDataLocationPath = \"indices/test/\" + Integer.toString(shardRouting.getId()) + \"/translog\";\n+ final String relativeDataLocationPath = \"indices/\"+ test.getUUID() +\"/\" + Integer.toString(shardRouting.getId()) + \"/translog\";\n Path file = PathUtils.get(path).resolve(relativeDataLocationPath);\n if (Files.exists(file)) {\n logger.info(\"--> path: {}\", file);", "filename": "core/src/test/java/org/elasticsearch/index/store/CorruptedTranslogIT.java", "status": "modified" }, { "diff": "@@ -46,9 +46,9 @@ public void testHasSleepWrapperOnSharedFS() throws IOException {\n IndexSettings settings = IndexSettingsModule.newIndexSettings(\"foo\", build);\n IndexStoreConfig config = new IndexStoreConfig(build);\n IndexStore store = new IndexStore(settings, config);\n- Path tempDir = createTempDir().resolve(\"foo\").resolve(\"0\");\n+ Path tempDir = createTempDir().resolve(settings.getUUID()).resolve(\"0\");\n Files.createDirectories(tempDir);\n- ShardPath path = new ShardPath(false, tempDir, tempDir, settings.getUUID(), new ShardId(settings.getIndex(), 0));\n+ ShardPath path = new ShardPath(false, tempDir, tempDir, new ShardId(settings.getIndex(), 0));\n FsDirectoryService fsDirectoryService = new FsDirectoryService(settings, store, path);\n Directory directory = fsDirectoryService.newDirectory();\n assertTrue(directory instanceof RateLimitedFSDirectory);\n@@ -62,9 +62,9 @@ public void testHasNoSleepWrapperOnNormalFS() throws IOException {\n IndexSettings settings = IndexSettingsModule.newIndexSettings(\"foo\", build);\n IndexStoreConfig config = new IndexStoreConfig(build);\n IndexStore store = new IndexStore(settings, config);\n- Path tempDir = createTempDir().resolve(\"foo\").resolve(\"0\");\n+ Path tempDir = createTempDir().resolve(settings.getUUID()).resolve(\"0\");\n Files.createDirectories(tempDir);\n- ShardPath path = new ShardPath(false, tempDir, tempDir, 
settings.getUUID(), new ShardId(settings.getIndex(), 0));\n+ ShardPath path = new ShardPath(false, tempDir, tempDir, new ShardId(settings.getIndex(), 0));\n FsDirectoryService fsDirectoryService = new FsDirectoryService(settings, store, path);\n Directory directory = fsDirectoryService.newDirectory();\n assertTrue(directory instanceof RateLimitedFSDirectory);", "filename": "core/src/test/java/org/elasticsearch/index/store/FsDirectoryServiceTests.java", "status": "modified" }, { "diff": "@@ -47,13 +47,14 @@\n public class IndexStoreTests extends ESTestCase {\n \n public void testStoreDirectory() throws IOException {\n- final Path tempDir = createTempDir().resolve(\"foo\").resolve(\"0\");\n+ Index index = new Index(\"foo\", \"fooUUID\");\n+ final Path tempDir = createTempDir().resolve(index.getUUID()).resolve(\"0\");\n final IndexModule.Type[] values = IndexModule.Type.values();\n final IndexModule.Type type = RandomPicks.randomFrom(random(), values);\n Settings settings = Settings.settingsBuilder().put(IndexModule.INDEX_STORE_TYPE_SETTING.getKey(), type.name().toLowerCase(Locale.ROOT))\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n IndexSettings indexSettings = IndexSettingsModule.newIndexSettings(\"foo\", settings);\n- FsDirectoryService service = new FsDirectoryService(indexSettings, null, new ShardPath(false, tempDir, tempDir, \"foo\", new ShardId(\"foo\", \"_na_\", 0)));\n+ FsDirectoryService service = new FsDirectoryService(indexSettings, null, new ShardPath(false, tempDir, tempDir, new ShardId(index, 0)));\n try (final Directory directory = service.newFSDirectory(tempDir, NoLockFactory.INSTANCE)) {\n switch (type) {\n case NIOFS:\n@@ -84,8 +85,9 @@ public void testStoreDirectory() throws IOException {\n }\n \n public void testStoreDirectoryDefault() throws IOException {\n- final Path tempDir = createTempDir().resolve(\"foo\").resolve(\"0\");\n- FsDirectoryService service = new FsDirectoryService(IndexSettingsModule.newIndexSettings(\"foo\", Settings.settingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build()), null, new ShardPath(false, tempDir, tempDir, \"foo\", new ShardId(\"foo\", \"_na_\", 0)));\n+ Index index = new Index(\"bar\", \"foo\");\n+ final Path tempDir = createTempDir().resolve(index.getUUID()).resolve(\"0\");\n+ FsDirectoryService service = new FsDirectoryService(IndexSettingsModule.newIndexSettings(\"bar\", Settings.settingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build()), null, new ShardPath(false, tempDir, tempDir, new ShardId(index, 0)));\n try (final Directory directory = service.newFSDirectory(tempDir, NoLockFactory.INSTANCE)) {\n if (Constants.WINDOWS) {\n assertTrue(directory.toString(), directory instanceof MMapDirectory || directory instanceof SimpleFSDirectory);", "filename": "core/src/test/java/org/elasticsearch/index/store/IndexStoreTests.java", "status": "modified" }, { "diff": "@@ -112,12 +112,14 @@ public void testIndexCleanup() throws Exception {\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1))\n );\n ensureGreen(\"test\");\n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index index = state.metaData().index(\"test\").getIndex();\n \n logger.info(\"--> making sure that shard and its replica are allocated on node_1 and node_2\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(true));\n- 
assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(true));\n \n logger.info(\"--> starting node server3\");\n final String node_3 = internalCluster().startNode(Settings.builder().put(Node.NODE_MASTER_SETTING.getKey(), false));\n@@ -128,12 +130,12 @@ public void testIndexCleanup() throws Exception {\n .get();\n assertThat(clusterHealth.isTimedOut(), equalTo(false));\n \n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_3, \"test\", 0)), equalTo(false));\n- assertThat(Files.exists(indexDirectory(node_3, \"test\")), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_3, index, 0)), equalTo(false));\n+ assertThat(Files.exists(indexDirectory(node_3, index)), equalTo(false));\n \n logger.info(\"--> move shard from node_1 to node_3, and wait for relocation to finish\");\n \n@@ -161,12 +163,12 @@ public void testIndexCleanup() throws Exception {\n .get();\n assertThat(clusterHealth.isTimedOut(), equalTo(false));\n \n- assertThat(waitForShardDeletion(node_1, \"test\", 0), equalTo(false));\n- assertThat(waitForIndexDeletion(node_1, \"test\"), equalTo(false));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_3, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_3, \"test\")), equalTo(true));\n+ assertThat(waitForShardDeletion(node_1, index, 0), equalTo(false));\n+ assertThat(waitForIndexDeletion(node_1, index), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_3, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_3, index)), equalTo(true));\n \n }\n \n@@ -180,16 +182,18 @@ public void testShardCleanupIfShardDeletionAfterRelocationFailedAndIndexDeleted(\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0))\n );\n ensureGreen(\"test\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(true));\n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index index = state.metaData().index(\"test\").getIndex();\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(true));\n \n final String node_2 = 
internalCluster().startDataOnlyNode(Settings.builder().build());\n assertFalse(client().admin().cluster().prepareHealth().setWaitForNodes(\"2\").get().isTimedOut());\n \n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(false));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(false));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(false));\n \n // add a transport delegate that will prevent the shard active request to succeed the first time after relocation has finished.\n // node_1 will then wait for the next cluster state change before it tries a next attempt to delete the shard.\n@@ -220,14 +224,14 @@ public void sendRequest(DiscoveryNode node, long requestId, String action, Trans\n // it must still delete the shard, even if it cannot find it anymore in indicesservice\n client().admin().indices().prepareDelete(\"test\").get();\n \n- assertThat(waitForShardDeletion(node_1, \"test\", 0), equalTo(false));\n- assertThat(waitForIndexDeletion(node_1, \"test\"), equalTo(false));\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(false));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(false));\n- assertThat(waitForShardDeletion(node_2, \"test\", 0), equalTo(false));\n- assertThat(waitForIndexDeletion(node_2, \"test\"), equalTo(false));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(false));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(false));\n+ assertThat(waitForShardDeletion(node_1, index, 0), equalTo(false));\n+ assertThat(waitForIndexDeletion(node_1, index), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(false));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(false));\n+ assertThat(waitForShardDeletion(node_2, index, 0), equalTo(false));\n+ assertThat(waitForIndexDeletion(node_2, index), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(false));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(false));\n }\n \n public void testShardsCleanup() throws Exception {\n@@ -241,9 +245,11 @@ public void testShardsCleanup() throws Exception {\n );\n ensureGreen(\"test\");\n \n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index index = state.metaData().index(\"test\").getIndex();\n logger.info(\"--> making sure that shard and its replica are allocated on node_1 and node_2\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(true));\n \n logger.info(\"--> starting node server3\");\n String node_3 = internalCluster().startNode();\n@@ -255,10 +261,10 @@ public void testShardsCleanup() throws Exception {\n assertThat(clusterHealth.isTimedOut(), equalTo(false));\n \n logger.info(\"--> making sure that shard is not allocated on server3\");\n- 
assertThat(waitForShardDeletion(node_3, \"test\", 0), equalTo(false));\n+ assertThat(waitForShardDeletion(node_3, index, 0), equalTo(false));\n \n- Path server2Shard = shardDirectory(node_2, \"test\", 0);\n- logger.info(\"--> stopping node {}\", node_2);\n+ Path server2Shard = shardDirectory(node_2, index, 0);\n+ logger.info(\"--> stopping node \" + node_2);\n internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node_2));\n \n logger.info(\"--> running cluster_health\");\n@@ -273,9 +279,9 @@ public void testShardsCleanup() throws Exception {\n assertThat(Files.exists(server2Shard), equalTo(true));\n \n logger.info(\"--> making sure that shard and its replica exist on server1, server2 and server3\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n assertThat(Files.exists(server2Shard), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_3, \"test\", 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_3, index, 0)), equalTo(true));\n \n logger.info(\"--> starting node node_4\");\n final String node_4 = internalCluster().startNode();\n@@ -284,9 +290,9 @@ public void testShardsCleanup() throws Exception {\n ensureGreen();\n \n logger.info(\"--> making sure that shard and its replica are allocated on server1 and server3 but not on server2\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_3, \"test\", 0)), equalTo(true));\n- assertThat(waitForShardDeletion(node_4, \"test\", 0), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_3, index, 0)), equalTo(true));\n+ assertThat(waitForShardDeletion(node_4, index, 0), equalTo(false));\n }\n \n public void testShardActiveElsewhereDoesNotDeleteAnother() throws Exception {\n@@ -426,30 +432,30 @@ public void onFailure(String source, Throwable t) {\n waitNoPendingTasksOnAll();\n logger.info(\"Checking if shards aren't removed\");\n for (int shard : node2Shards) {\n- assertTrue(waitForShardDeletion(nonMasterNode, \"test\", shard));\n+ assertTrue(waitForShardDeletion(nonMasterNode, index, shard));\n }\n }\n \n- private Path indexDirectory(String server, String index) {\n+ private Path indexDirectory(String server, Index index) {\n NodeEnvironment env = internalCluster().getInstance(NodeEnvironment.class, server);\n final Path[] paths = env.indexPaths(index);\n assert paths.length == 1;\n return paths[0];\n }\n \n- private Path shardDirectory(String server, String index, int shard) {\n+ private Path shardDirectory(String server, Index index, int shard) {\n NodeEnvironment env = internalCluster().getInstance(NodeEnvironment.class, server);\n- final Path[] paths = env.availableShardPaths(new ShardId(index, \"_na_\", shard));\n+ final Path[] paths = env.availableShardPaths(new ShardId(index, shard));\n assert paths.length == 1;\n return paths[0];\n }\n \n- private boolean waitForShardDeletion(final String server, final String index, final int shard) throws InterruptedException {\n+ private boolean waitForShardDeletion(final String server, final Index index, final int shard) throws InterruptedException {\n awaitBusy(() -> !Files.exists(shardDirectory(server, index, shard)));\n return Files.exists(shardDirectory(server, index, shard));\n }\n \n- private boolean waitForIndexDeletion(final String server, final String index) throws 
InterruptedException {\n+ private boolean waitForIndexDeletion(final String server, final Index index) throws InterruptedException {\n awaitBusy(() -> !Files.exists(indexDirectory(server, index)));\n return Files.exists(indexDirectory(server, index));\n }", "filename": "core/src/test/java/org/elasticsearch/indices/store/IndicesStoreIntegrationIT.java", "status": "modified" }, { "diff": "@@ -4,6 +4,17 @@\n This section discusses the changes that you need to be aware of when migrating\n your application to Elasticsearch 5.0.\n \n+[float]\n+=== Indices created before 5.0\n+\n+Elasticsearch 5.0 can read indices created in version 2.0 and above. If any\n+of your indices were created before 2.0 you will need to upgrade to the\n+latest 2.x version of Elasticsearch first, in order to upgrade your indices or\n+to delete the old indices. Elasticsearch will not start in the presence of old\n+indices. To upgrade 2.x indices, first start a node which has access to all\n+the data folders and let it upgrade all the indices before starting up the rest\n+of the cluster.\n+\n [IMPORTANT]\n .Reindex indices from Elasticseach 1.x or before\n =========================================", "filename": "docs/reference/migration/migrate_5_0.asciidoc", "status": "modified" }, { "diff": "@@ -52,6 +52,7 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {\n \n public void testUpgradeOldMapping() throws IOException, ExecutionException, InterruptedException {\n final String indexName = \"index-mapper-murmur3-2.0.0\";\n+ final String indexUUID = \"1VzJds59TTK7lRu17W0mcg\";\n InternalTestCluster.Async<String> master = internalCluster().startNodeAsync();\n Path unzipDir = createTempDir();\n Path unzipDataDir = unzipDir.resolve(\"data\");\n@@ -72,6 +73,7 @@ public void testUpgradeOldMapping() throws IOException, ExecutionException, Inte\n assertFalse(Files.exists(dataPath));\n Path src = unzipDataDir.resolve(indexName + \"/nodes/0/indices\");\n Files.move(src, dataPath);\n+ Files.move(dataPath.resolve(indexName), dataPath.resolve(indexUUID));\n \n master.get();\n // force reloading dangling indices with a cluster state republish", "filename": "plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapperUpgradeTests.java", "status": "modified" }, { "diff": "@@ -53,6 +53,7 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {\n \n public void testUpgradeOldMapping() throws IOException, ExecutionException, InterruptedException {\n final String indexName = \"index-mapper-size-2.0.0\";\n+ final String indexUUID = \"ENCw7sG0SWuTPcH60bHheg\";\n InternalTestCluster.Async<String> master = internalCluster().startNodeAsync();\n Path unzipDir = createTempDir();\n Path unzipDataDir = unzipDir.resolve(\"data\");\n@@ -73,6 +74,7 @@ public void testUpgradeOldMapping() throws IOException, ExecutionException, Inte\n assertFalse(Files.exists(dataPath));\n Path src = unzipDataDir.resolve(indexName + \"/nodes/0/indices\");\n Files.move(src, dataPath);\n+ Files.move(dataPath.resolve(indexName), dataPath.resolve(indexUUID));\n master.get();\n // force reloading dangling indices with a cluster state republish\n client().admin().cluster().prepareReroute().get();", "filename": "plugins/mapper-size/src/test/java/org/elasticsearch/index/mapper/size/SizeFieldMapperUpgradeTests.java", "status": "modified" } ] }
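The plugin upgrade tests above add a `Files.move(dataPath.resolve(indexName), dataPath.resolve(indexUUID))` step because, with this change, on-disk index folders are keyed by the index UUID rather than the index name. The sketch below is illustrative only and is not part of the PR: the standalone class and the `data/nodes/0/indices` layout are assumptions, while the index name and UUID values are taken directly from the diffs above.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Minimal sketch, not part of the PR: mirrors the rename the upgrade tests perform after
// unzipping an old 2.x data directory, moving the index folder from its name to its UUID.
public class RenameIndexFolderToUuid {
    public static void main(String[] args) throws IOException {
        // Hypothetical node data layout; the real tests resolve this path via the test cluster.
        Path indicesDir = Paths.get("data", "nodes", "0", "indices");
        Path byName = indicesDir.resolve("index-mapper-murmur3-2.0.0"); // 2.x layout: folder named after the index
        Path byUuid = indicesDir.resolve("1VzJds59TTK7lRu17W0mcg");     // 5.0 layout: folder named after the index UUID

        // Same operation as the added test lines:
        // Files.move(dataPath.resolve(indexName), dataPath.resolve(indexUUID));
        if (Files.exists(byName) && Files.notExists(byUuid)) {
            Files.move(byName, byUuid);
        }
    }
}
```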